Deploying your own Kubernetes infrastructure on DigitalOcean

Sergey Yanover
6 min read · Mar 18, 2021


You might spend only $5 to deploy a Kubernetes infrastructure and use it for two months.

My plan: sign up for DigitalOcean; create two machines for Kubernetes and one as a gateway, plus a private network and firewall rules, using Terraform; install packages with Ansible. Then I set up Kubernetes with Kubespray, create a load balancer, install Nexus as a local repository, and create local storage with NFS.

1. Sign up for DigitalOcean, pay $5 via PayPal or a credit card, and get a $100 credit for 60 days; you might also have another promo code.

2. Create virtual machines (droplets aka instances).

You can create droplets manually, but it's better to automate it. I use a Windows 10 computer with Terraform installed and PuTTY with generated SSH keys.
Create a VPC network, a Cloud Firewall, and three droplets:
2 GB / 50 GB Disk / FRA1 / Ubuntu 18.04 (LTS) x64 (GW)
4 GB / 80 GB Disk / FRA1 / Ubuntu 18.04 (LTS) x64 (2 nodes)

The VPC network connects all machines with private IP addresses, assigned as we need them within the 10.1.2.0/24 network (about 254 usable addresses). If you plan to have thousands of nodes, use a /16 network (about 65,534 addresses). Ubuntu 18.04 is compatible with many packages and requirements, so that's my choice for this project, but you may use other Linux distributions such as CentOS, openSUSE, etc.
I use the FRA1 (Frankfurt) region, but you may choose another one, as well as other droplet classes.
The firewall rules limit inbound connections and allow all outbound connections.
I use tags to target the nodes; otherwise, if I add a node, the firewall rules won't be applied to it automatically.
Inbound on GW: TCP 22, 80, 443.
My Terraform files are on GitHub; copy them into a directory on your local computer:

varset.bat
terraform plan
terraform apply
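
For reference, a minimal sketch of the same variable step on Linux. The actual varset.bat is not shown here; the variable names below (DIGITALOCEAN_TOKEN, TF_VAR_do_token) are assumptions for a typical setup where the DigitalOcean provider reads the API token from the environment or a Terraform variable:

# either let the provider read the token directly from the environment...
export DIGITALOCEAN_TOKEN="dop_v1_xxxxxxxx"   # placeholder token
# ...or pass it as a Terraform variable, if the .tf files declare one
export TF_VAR_do_token="dop_v1_xxxxxxxx"
terraform init    # needed once in a fresh working directory
terraform plan
terraform apply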

Now we need the IP addresses of the created droplets; we can find them in the Console or with:
in Windows

terraform show | findstr "ip"

or in Linux

terraform show | grep ip

I get the IPs and add them to /etc/hosts:

10.1.2.2 node1 node01
10.1.2.3 node2 node02
10.1.2.4 gw

3. Prepare Nodes and GW to install Kubernetes.

Log in to node1 and node2.
Disable swap:

swapoff -a
free -h
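
Note that swapoff -a lasts only until the next reboot; to make the change permanent you can also comment out any swap entry in /etc/fstab, for example:

sed -i '/ swap / s/^/#/' /etc/fstab   # comment out swap lines
free -h                               # Swap should now show 0B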

Disable firewall:

systemctl stop ufw
systemctl disable ufw

Disable selinux:

sestatus
vi /etc/selinux/config
SELINUX=disabled
systemctl reboot

I use GW as the Ansible host, but you may use another Unix machine.
Install Ansible:

apt-get update
apt-get -y upgrade
apt-get install ansible

Create SSH keys:

ssh-keygen -t rsa -b 4096

Create a root password on the nodes.
We can use the DigitalOcean Console or temporarily permit SSH access with a password:

vi /etc/ssh/sshd_config
PasswordAuthentication yes
systemctl reload sshd

Copy keys to the nodes:

ssh-copy-id 10.1.2.2
ssh-copy-id 10.1.2.3

Switch off the password access:

vi /etc/ssh/sshd_config
PasswordAuthentication no
systemctl reload sshd

I want to be ready to install packages with Ansible later, so for now I install a minimal set (screen, mc):

vi /etc/ansible/hosts
node[01:02]

Download my playbook from GitHub.

ansible-playbook playbook.yaml
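
For reference, a minimal inline equivalent of such a playbook (a sketch of what it might contain based on the package list above; the file name base-packages.yaml is my own, and the actual playbook in the repository may do more):

cat > base-packages.yaml <<'EOF'
- hosts: all
  become: yes
  tasks:
    - name: Install a minimal package set
      apt:
        name: [screen, mc]
        state: present
        update_cache: yes
EOF
ansible-playbook base-packages.yaml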

To improve security you might use a non-root user, add it to /etc/sudoers, and use sudo.
To avoid incidents with bots, I change the sshd port on gw in /etc/ssh/sshd_config from 22 to any free port above 1024, for example 10222. I use Terraform to manage the firewall rules, so I change 22 → 10222 in the file fw-pvc.tf and apply the changes:

terraform plan
terraform apply
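
After the new rule is applied, reconnect to gw on the new port (the IP placeholder below is yours to fill in):

ssh -p 10222 root@<gw-public-ip>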

To make the cluster private, I use the DigitalOcean Firewall to block all inbound connections to the cluster nodes. I use Terraform with the resource k8s-firewall-node, but you can create the same rules manually via the Console.

4. Deploy Kubernetes using Kubespray.

I use GW as the host from which to install the cluster on the two nodes.

apt install python3-pip
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python get-pip.py
root@gw:~# pip --version
pip 20.3.3 from /usr/local/lib/python2.7/dist-packages/pip (python 2.7)
root@gw:~# pip3 --version
pip 9.0.1 from /usr/lib/python3/dist-packages (python 3.6)
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
pip install -r requirements.txt
cp -rfp inventory/sample inventory/k8scluster
declare -a IPS=(10.1.2.2 10.1.2.3)
vi inventory/k8scluster/inventory.ini
[all]
node1 ansible_host=10.1.2.2 ip=10.1.2.2 # etcd_member_name=etcd1
node2 ansible_host=10.1.2.3 ip=10.1.2.3 # etcd_member_name=etcd2
[kube-master]
node1
[etcd]
node1
[kube-node]
node1
node2

You can find the instructions at https://github.com/kubernetes-sigs/kubespray:

touch inventory/k8scluster/hosts.yml
CONFIG_FILE=inventory/k8scluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

This generates default settings in hosts.yml that you should change according to your plans.
Instead, I prefer to edit inventory.ini.
Check files:

vi inventory/k8scluster/hosts.yml
vi inventory/k8scluster/group_vars/all/all.yml
vi inventory/k8scluster/group_vars/k8s-cluster/k8s-cluster.yml

Use Calico and IPVS.
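
A quick way to check that the corresponding values are set in k8s-cluster.yml (variable names as in Kubespray's sample group_vars, so verify them against your checked-out version):

grep -E '^(kube_network_plugin|kube_proxy_mode):' \
  inventory/k8scluster/group_vars/k8s-cluster/k8s-cluster.yml
# expected:
# kube_network_plugin: calico
# kube_proxy_mode: ipvs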

Run the playbook; the commands below are alternative invocations, so pick the one that matches your inventory file and whether you need privilege escalation:

ansible-playbook -i inventory/k8scluster/inventory.ini cluster.yml
ansible-playbook -i inventory/k8scluster/inventory.ini --become --become-user=root cluster.yml
ansible-playbook -i inventory/k8scluster/hosts.yml --become --become-user=root cluster.yml

Install kubectl to manage Kubernetes:

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
chmod a+x kubectl
mv kubectl /usr/local/bin/kubectl

Make a copy of the config:

ssh 10.1.2.2 sudo cp /etc/kubernetes/admin.conf /root/config

Copy the config to the local machine:

scp 10.1.2.2:~/config .
mkdir .kube
mv config .kube/
ssh 10.1.2.2 sudo rm /root/config

Check the cluster:

kubectl cluster-info
kubectl get nodes

5. Load Balancer

Now I have an isolated cluster with private IPs, but the droplets also have public IP addresses. That lets us create our own LoadBalancer for StatefulSet applications instead of paying an extra $10 per month for a small DigitalOcean Load Balancer, even though this setup isn't officially supported.

An Ingress Controller based on Nginx is not enough for my purposes, so I use the MetalLB load balancer: https://metallb.universe.tf/installation/

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml
# On first install only
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
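
Before applying the configuration, you can check that the MetalLB controller and speaker pods came up:

kubectl get pods -n metallb-system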

Create ConfigMap, use Layer 2 Configuration:

https://metallb.universe.tf/configuration/

vi config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 167.99.99.99-167.99.99.99

The trick is to use a range consisting of the single external IP address of node02 (node01 is the master), in my case 167.99.99.99. This allows full access from the Internet to our resources in Kubernetes.

vi /etc/netplan/50-cloud-init.yaml
netplan apply
kubectl apply -f config.yaml

If you need to correct IP address:

kubectl edit cm -n metallb-system config

Create a web server and test the connection.

kubectl create deployment nginx --image nginx:alpine --port 80 --replicas=1
kubectl expose deployment nginx --name=nginx-svc --port=80 --type=LoadBalancer
kubectl get svc nginx-svc

You can open port 80 on the Firewall using the Console and open http://167.99.99.99 as a test.
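
Once port 80 is open, a quick check from any machine:

curl -I http://167.99.99.99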

6. Sonatype Nexus

Instead of using Docker Hub, I deploy the open-source version of Nexus as a local repository to store and cache images.

You can install Nexus into the Kubernetes cluster as a StatefulSet application, mount PersistentVolumes, use an Ingress Controller, set resource limits, and control and monitor the pod using Kubernetes tools.

https://hub.docker.com/r/sonatype/nexus3

Node02 has enough memory to run Nexus, so log in to node02 and run Docker:

docker pull sonatype/nexus3
docker run -d -p 18081:8081 --name nexus sonatype/nexus3
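
Nexus takes a few minutes to initialize; you can follow the container log until it reports that it has started:

docker logs -f nexus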

To stop Nexus:

docker stop --time=120 159fff41e48e

I find the initial password in a file inside the container:

docker ps | grep nexus
docker exec -it 08ad87415087 bash
cat /nexus-data/admin.password
login: admin
password: 6825b949-c461-46a0-9e3b-225a2efc4564

http://167.99.99.99:18081/

Change the password and disable anonymous access.

7. Persistent Volumes with NFS, using the free space on the GW droplet.

I run a plain NFS server on GW, and on the Kubernetes side I provision the volume in ReadWriteMany mode.

vi pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /var/nfs/kube
    server: 10.1.2.4

vi pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

apt update
apt install nfs-kernel-server
mkdir /var/nfs/kube -p
ls -la /var/nfs/kube
chown nobody:nogroup /var/nfs/kube
vi /etc/exports
/var/nfs/kube 10.1.2.0/24(rw,sync,no_subtree_check)
systemctl restart nfs-kernel-server
kubectl apply -f pv.yaml
kubectl get pv
Status should be "Available"
kubectl apply -f pvc.yaml
kubectl get pv
Status should be "Bound"
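
Note that the worker nodes need an NFS client (nfs-common on Ubuntu) to mount the share. To verify the volume end to end, a minimal test pod can mount the claim and write a file; the pod name, image, and file path below are my own choices for the test:

# install the NFS client on the nodes from gw (ad-hoc Ansible, nodes are already in /etc/ansible/hosts)
ansible all -m apt -a "name=nfs-common state=present update_cache=yes"

# a throwaway pod that mounts the claim and writes a file
cat > nfs-test.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sh", "-c", "echo hello from k8s > /data/hello.txt && sleep 3600"]
    volumeMounts:
    - name: nfs-vol
      mountPath: /data
  volumes:
  - name: nfs-vol
    persistentVolumeClaim:
      claimName: nfs-pvc
EOF
kubectl apply -f nfs-test.yaml
# the file should then appear on gw under /var/nfs/kube/hello.txt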

8. Next steps: improve security at the OS and Kubernetes levels; add external storage for stateful applications using Robin.io, StorageOS, Portworx, etc.; and monitor and control the cluster with Prometheus, Grafana, VictoriaMetrics, Amixr, New Relic, Logz.io, etc.
