Rancher on Ubuntu with Terraform and EKS Clusters in AWS Cloud

Sergey Yanover
Dec 30, 2021

I’d like to install a bastion host with Rancher running from a Docker image in AWS Cloud. I am going to build the whole infrastructure from scratch and install Docker, Git, and Rancher 2.6 on Ubuntu 20.04 using Terraform. Then I create an EKS cluster with Rancher, create EKS clusters in the AWS Console and with eksctl, and import those 2 clusters into Rancher.

You should have an AWS account, generated SSH keys, Git and Terraform installed on your computer, and enough skill to avoid paying more than you need to.

Using the AWS console, choose IAM and create a user named terraform with the access type “Programmatic access” and attach the “AdministratorAccess” policy directly. Then clone the scripts from GitHub to your local computer:

mkdir rancher
cd rancher
git clone https://github.com/sergeyanover/rancher-ubuntu-eks-aws.git
cd rancher-ubuntu-eks-aws
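
If you prefer the CLI for the IAM step above, a rough equivalent looks like this (a sketch, assuming you already have admin credentials configured locally):

aws iam create-user --user-name terraform
aws iam attach-user-policy --user-name terraform --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
aws iam create-access-key --user-name terraform

The last command prints the access key pair used in the next step.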

Put the generated keys for the terraform user into the file varset.bat as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, and set the Region there; I use eu-central-1.

Generate SSH public and private keys with puttygen.exe and save them in the “keys” folder.
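
If you are not on Windows, ssh-keygen produces an equivalent key pair (a sketch; the file name is a placeholder):

ssh-keygen -t rsa -b 4096 -f keys/rancher_key -N ""

The .pub file contains the “ssh-rsa …” line used in the next step.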

Edit the file terraform.tfvars in the “terraform” folder and put your public key (“ssh-rsa …”) into it as the value of my_public_key. In addition, set your current IP address or your network in ip_admin.

You can find your IP address at https://www.whatismyip.com or somewhere else.
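
From a terminal, curl -s https://checkip.amazonaws.com returns the same thing. The relevant part of terraform.tfvars would then look roughly like this (my_public_key and ip_admin are the repository’s variable names; the values below are placeholders, and ip_admin may expect a single IP or a CIDR depending on the variable definition):

my_public_key = "ssh-rsa AAAAB3NzaC1yc2E... user@laptop"
ip_admin      = "203.0.113.10/32"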

Install the AWS infrastructure from scratch using Terraform and set up Rancher

cd terraform
terraform init
terraform plan
terraform apply -auto-approve

You will see the process of creation: a key pair, a VPC, an internet gateway, a subnet, security groups, a route table, a t2.medium instance (2 vCPU, 4 GB) and Rancher 2.6. That’s enough to control 10–15 clusters.

Also, you will see the output ec2_public_ip = “xx.xx.xx.xx”, which you can use to connect to the bastion host.
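
For example, to reach the host over SSH (a sketch; the key path is a placeholder, and port 12000 with the user ubuntu is how the provisioning scripts set up access, as described below):

# run from the terraform folder; adjust the key path to where you saved your private key
ssh -i ../keys/rancher_key -p 12000 ubuntu@$(terraform output -raw ec2_public_ip)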

If you have a domain, you can use Route 53: choose Hosted zones, select your domain name, and create a record rancher.<your domain> pointing to ec2_public_ip.
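
The same record can be created from the CLI (a sketch; the hosted zone ID, domain, and IP are placeholders):

aws route53 change-resource-record-sets --hosted-zone-id Z0123456789ABC --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"rancher.example.com","Type":"A","TTL":300,"ResourceRecords":[{"Value":"xx.xx.xx.xx"}]}}]}'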

The script bastion-entry-script.sh installs Rancher in Docker with persistent data and all the utilities such as aws, eksctl, kubectl, efs, ansible, docker, git, terraform, packer, and jdk11.

The idea is to manage EKS clusters and other resources with Rancher and keep its data on an EBS volume instead of running Rancher inside Kubernetes. You can back up and restore Rancher to S3 using its built-in options.

All of this is done automatically, and you have access via SSH on port 12000 with the user ubuntu and the installed SSH keys. To connect to Rancher, just go to https://ec2_public_ip or https://rancher.<your domain> and follow Rancher’s instructions:

docker ps
docker logs container-id 2>&1 | grep "Bootstrap Password:"

And you will see something like this:
[INFO] Bootstrap Password: qwmc98sv5phl68zsqcxbwzbf7q6kfs288qcths9f7n44pqc5vmzrw2
which you can paste into the form to generate/set a new password.

We don’t have SSL certificates here, so just allow the insecure connection in your browser and install certificates later.

Create EKS Cluster

There are several ways to create an EKS cluster; here are just 3 of them: Rancher, the AWS Console, and eksctl.

Depending on your needs, you may choose other parameters for the cluster; I will use t3.medium and m5.large machines in 3 AZs or 2 AZs. Be careful, this will cost you some money.

You can check the administrative privileges of the IAM user and find the Roles here:
https://console.aws.amazon.com/iam/

If you can’t find the roles AWSServiceRoleForAmazonEKS and AWSServiceRoleForAmazonEKSNodegroup, you should create them using the AmazonEKSServiceRolePolicy policy or create them manually.
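
A hedged CLI equivalent for creating these service-linked roles (AWS often creates them automatically, so the commands may report that the roles already exist):

aws iam create-service-linked-role --aws-service-name eks.amazonaws.com
aws iam create-service-linked-role --aws-service-name eks-nodegroup.amazonaws.com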

Create EKS Cluster in AWS Management Console (without Fargate):

1. Create a Role for the Cluster. Choose IAM: Roles; Create; EKS; EKS - Cluster; Permissions: AmazonEKSClusterPolicy; Role name: rancher-eks

2. Create a Role for Node Groups. Choose IAM: Roles; Create; EC2; Policies: AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy and AmazonEC2ContainerRegistryReadOnly; Role name: rancher-eks-ng

3. Create the Cluster. Choose EKS: Amazon EKS; Add Cluster; Create; Name: consoleks; Kubernetes version: 1.21; Select role: rancher-eks; VPC: eks_vpc; use 3 public subnets for 3 AZs; Security group: allow_inbound_from_rancher; Endpoint access: Public;
Creating the cluster takes some time, depending on the Region.

4. Create Node Groups. Choose EKS: Amazon EKS; Clusters; consoleks; Compute; Add Node Group; Name: ng-consoleks; Select role: rancher-eks-ng; Compute: AL2_x86_64, on-demand, t3.medium, 20GB; Scaling: 2–2–2; Update: 1 node; our 2 subnets;

5. Create access to the Cluster from the bastion host, via a kubeconfig file.

aws sts get-caller-identity

As you can see, by default the Arn is “arn:aws:iam::xxxxx:user/terraform”, but the cluster was created in the Console by another admin or root. So, change the keys AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the file awskeys.sh to the admin’s keys and create the kubeconfig:

source awskeys.sh
aws eks update-kubeconfig --region eu-central-1 --name consoleks
kubectl get svc
kubectl get nodes
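
If the nodes don’t show up, a quick status check from the bastion host (same cluster name and region as above) is:

aws eks describe-cluster --name consoleks --region eu-central-1 --query cluster.status --output text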

Create EKS Cluster with Rancher

Connect to Rancher in a browser and create AWS credentials:
You should choose: Menu, Cluster Management, Cloud Credentials, Create, Amazon.

Fill out the fields: Name, Access Key, Secret Key, Default Region, and create the Cloud Credential.

Choose Clusters in the Menu and Create Amazon EKS.

Cluster Name: prod; Region: eu-central-1; Kubernetes Version: 1.21; ServiceRole Custom: rancher-eks; disable Project Network Isolation; Public Access; Subnets custom: VPC and Subnets eks_subnet1, eks_subnet2, eks_subnet3; Security Group: allow_inbound_from_rancher; Instance Type: m5.large; Node Volume Size: 50GB; Node Group Name: prod-nodes; ASG: 2–2–2;

I don’t specify an AMI, so by default it is the Amazon Linux 2 image amazon-eks-node-1.21-vxxx. You can choose another AMI or create a custom AMI with Packer.

Click Create and wait; depending on the number and type of the chosen instances, the cluster will be ready in a few minutes.
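
Once it is ready, the cluster created by Rancher should also be visible from the bastion host (a quick check; the region is the one chosen above, and the new cluster’s name should appear in the list):

aws eks list-clusters --region eu-central-1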

Create EKS Cluster with eksctl

Use the same admin credentials to import a cluster as were used to create it before.
First of all, let’s create a simple cluster in AWS:
edit awskeys.sh and put your credentials and region there (the admin’s, not the terraform user’s).
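
The file is just a set of exported environment variables, roughly like this (a sketch; the values are placeholders, and the exact variable names should match what the repository’s script expects):

export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_DEFAULT_REGION="eu-central-1"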

source ./awskeys.sh
eksctl create cluster --name eksctlcluster --region eu-central-1
kubectl get svc

It creates a new VPC, subnets, an internet gateway, route tables, and a Node Group with 2 m5.large instances in 2 AZs running Kubernetes v1.21. If you need to customize the Cluster, for example to use t3.medium instances or an existing VPC, you should create eksctlcluster.yaml and apply it:

eksctl create cluster -f eksctlcluster.yaml
kubectl get svc
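
For reference, eksctlcluster.yaml can be as small as this (a sketch using the eksctl v1alpha5 schema; the node group name and sizes are placeholders, and the official examples linked below cover existing VPCs and more):

cat > eksctlcluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eksctlcluster
  region: eu-central-1
nodeGroups:
  - name: ng-1
    instanceType: t3.medium
    desiredCapacity: 2
    volumeSize: 20
EOF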

Examples: https://github.com/weaveworks/eksctl/tree/main/examples
You can delete the cluster later:

eksctl delete cluster --name eksctlcluster

Import EKS Cluster to Rancher

In Rancher, you should choose: Menu, Cluster Management, Clusters, Import Existing, Amazon EKS.

Cluster Name: consoleksaws; Region: eu-central-1; Choose Cluster: consoleks; Register Cluster;

And do it again to import another cluster:

In Rancher, you should choose: Menu, Cluster Management, Clusters, Import Existing, Amazon EKS.

Cluster Name: eksctlclusteraws; Region: eu-central-1; Choose Cluster: eksctlcluster; Register Cluster;

You can see the imported clusters in Rancher and click View YAML to verify the field imported: true. It means that if you delete such a cluster in Rancher, it won’t be deleted in AWS; you should use the original tools to delete it.

https://rancher.com/docs/rancher/v2.6/en/installation/other-installation-methods/single-node-docker/advanced/
https://eksctl.io/usage/creating-and-managing-clusters/
https://github.com/weaveworks/eksctl/tree/main/examples
https://aws.amazon.com/premiumsupport/knowledge-center/eks-custom-linux-ami/
https://docs.aws.amazon.com/eks/latest/userguide/service_IAM_role.html
https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html
https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html
https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html
https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
