Provision and Configure Kubernetes Cluster on AWS using Ansible

Inshiya Nalawala
4 min read · Apr 19, 2021

In this blog, we will see how to provision and configure a Kubernetes cluster on AWS, automating the entire process with Ansible.

Overview:

1. Launch 3 EC2 instances on AWS.

2. Set up dynamic inventory to fetch AWS instance IPs.

3. Configure the Master node

4. Configure the Worker nodes

5. Allow the workers to join the cluster

Step 1: Launch EC2 instances

To launch the instances, we will simply run a play on localhost. To connect with AWS, we will need the IAM user credentials of our account; you can get these by creating an IAM user. Copy the Access_key and Secret_key into a local file on your system. For simplicity, you can give the IAM user administrative access.

We will use the ec2 module of Ansible to launch the instances, passing all the necessary arguments, including the Access_key and Secret_key.

The system running this module should also have certain Python libraries installed: boto, boto3, and botocore.
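
If they are not already present, a typical way to install them on the control node (assuming pip3 is available) is:

pip3 install boto boto3 botocore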

After installing the required libraries, running the playbook will launch 3 instances in the specified region.

Note: The tags we give here are really important, as will be discussed in the next step. Do not omit appropriate tags to identify the master, the workers, and the cluster as a whole. In this case, all three launched instances carry the tag "cluster": "kubernetes", while the master node is additionally tagged "kubernetes": "master" and the workers "kubernetes": "workers".

You will have to define the variables accordingly in the vars/aws_vars.yml file. Create the vars directory in your workspace.
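
For reference, here is a minimal sketch of the launch play. The variable names (access_key, secret_key, region, image_id, key_name, security_group) and the instance type are assumptions for illustration; define whatever names you actually use in vars/aws_vars.yml.

# launch_ec2.yml (sketch)
- hosts: localhost
  connection: local
  vars_files:
    - vars/aws_vars.yml
  tasks:
    - name: Launch the master node
      ec2:
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        region: "{{ region }}"
        image: "{{ image_id }}"
        instance_type: t2.micro
        key_name: "{{ key_name }}"
        group: "{{ security_group }}"
        count: 1
        wait: yes
        instance_tags:
          cluster: kubernetes
          kubernetes: master

    - name: Launch two worker nodes
      ec2:
        aws_access_key: "{{ access_key }}"
        aws_secret_key: "{{ secret_key }}"
        region: "{{ region }}"
        image: "{{ image_id }}"
        instance_type: t2.micro
        key_name: "{{ key_name }}"
        group: "{{ security_group }}"
        count: 2
        wait: yes
        instance_tags:
          cluster: kubernetes
          kubernetes: workers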

Step 2: Set up Inventory

Now that we have launched instances in the cloud and want to configure them further, we should first have them included in our inventory. Here, we will create a dynamic EC2 inventory.

In your workspace, download two files as follows.

wget https://raw.githubusercontent.com/vshn/ansible-dynamic-inventory-ec2/master/ec2.py
wget https://raw.githubusercontent.com/vshn/ansible-dynamic-inventory-ec2/master/ec2.ini

Before we can begin using the dynamic inventory, we need to export a few environment variables.

export AWS_ACCESS_KEY_ID='AK123'
export AWS_SECRET_ACCESS_KEY='abc123'
export EC2_INI_PATH=/path/to/ec2.ini
export ANSIBLE_HOSTS=/path/to/ec2.py

Next, we need to make these files executable

chmod +x ec2.py
chmod +x ec2.ini

The ec2.py script is written using the Boto EC2 library and will query AWS for your running Amazon EC2 instances. The ec2.ini file is the config file for ec2.py, and can be used to limit the scope of Ansible’s reach.

To see the dynamic inventory, run the following command

./ec2.py

You will see a lot of information, and as you scroll, you will find host IPs listed under different tag names. This is where the tags we gave earlier help us.

You will see that under the group tag_cluster_kubernetes, all three instance IPs are listed. The same applies to the other two tags we defined.

Now, we can configure these instances using these host groups.
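
For example, a play meant only for the master can target the group generated from its tag (check the exact group names in the ./ec2.py output, since they are derived from your tags):

- hosts: tag_kubernetes_master
  tasks:
    - name: Check connectivity to the master
      ping: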

Further, remember that we use a key to log in to the instances launched on AWS. For this, we will use SSH agent forwarding.

SSH’s agent forwarding feature allows our local SSH agent to reach through an existing SSH connection and transparently authenticate on a more distant server.

To achieve this, first copy the .pem key file into the ~/.ssh directory.

Then, we run the following command:

ssh-agent bash
ssh-add ~/.ssh/<your-key>.pem

To check if the key is visible to ssh agent, run

ssh-add -L

You should see an identity for your key when you run the above command.

Now, we can seamlessly connect with our ec2 instances.

Feeling powerful, right?

Let’s move forward!

Note: Before you set to run your playbooks, consider updating the following variables in the ansible.cfg file.

[defaults]
inventory = ec2.py
roles_path = <path-to-role-directory>
host_key_checking = False
remote_user = ec2-user

[privilege_escalation]
become=True
become_method=sudo
become_user=root

Step 3: Configure the Master

Now, we will create a play to run on the master. It performs the following tasks (a condensed sketch is shown after the list).

— Installing docker and starting services

— Configuring repository for Kubernetes packages

— Installing kubectl, kubeadm, and kubelet

— Starting kubelet services

— Other configuration

— Pulling images using kubeadm

— Initializing the master using kubeadm init, plus further configuration
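
Here is a condensed sketch of the master play, assuming yum-based instances (for example, Amazon Linux 2). The repository URL, pod-network CIDR, Flannel manifest, and the ignored preflight checks are illustrative and may need tweaking for your environment.

# k8s_master.yml (sketch)
- hosts: tag_kubernetes_master
  tasks:
    - name: Install Docker
      package:
        name: docker
        state: present

    - name: Start and enable Docker
      service:
        name: docker
        state: started
        enabled: yes

    - name: Configure the Kubernetes yum repository
      yum_repository:
        name: kubernetes
        description: Kubernetes
        baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
        gpgcheck: yes
        gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg

    - name: Install kubeadm, kubelet, and kubectl
      yum:
        name:
          - kubeadm
          - kubelet
          - kubectl
        state: present

    - name: Start and enable kubelet
      service:
        name: kubelet
        state: started
        enabled: yes

    - name: Pull the control-plane images
      command: kubeadm config images pull

    # NumCPU and Mem preflight checks usually fail on small instance types like t2.micro
    - name: Initialize the control plane
      command: kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU,Mem

    - name: Copy the admin kubeconfig so kubectl works for the root user
      shell: mkdir -p $HOME/.kube && cp /etc/kubernetes/admin.conf $HOME/.kube/config

    - name: Deploy the Flannel pod network
      command: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
      environment:
        KUBECONFIG: /etc/kubernetes/admin.conf

    - name: Generate the join command for the workers
      command: kubeadm token create --print-join-command
      register: join_command

The registered join_command is what the workers will consume in the next step.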

The master is almost ready and is now waiting for the workers to join. So, let’s configure the worker instances.

Step 4: Configure the Workers

Now, we will create a play to run on the workers. It performs the following tasks (a sketch follows a little further below).

— Installing docker and starting services

— Configuring repository for Kubernetes packages

— Installing kubectl, kubeadm, and kubelet

— Starting kubelet services

— Updating some files

— Worker specific configuration

— Running the join command (token) generated by the master to join the cluster

Notice how we have transferred the token variable from one play to another.
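
Here is a minimal sketch of the worker play. It assumes the master play registered the join command as join_command (as in the sketch above); because registered variables remain reachable through hostvars in later plays of the same run, the workers can simply execute it.

# k8s_worker.yml (sketch)
- hosts: tag_kubernetes_workers
  tasks:
    - name: Install Docker
      package:
        name: docker
        state: present

    - name: Start and enable Docker
      service:
        name: docker
        state: started
        enabled: yes

    - name: Configure the Kubernetes yum repository
      yum_repository:
        name: kubernetes
        description: Kubernetes
        baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
        gpgcheck: yes
        gpgkey: https://packages.cloud.google.com/yum/doc/yum-key.gpg

    - name: Install kubeadm, kubelet, and kubectl
      yum:
        name:
          - kubeadm
          - kubelet
          - kubectl
        state: present

    - name: Start and enable kubelet
      service:
        name: kubelet
        state: started
        enabled: yes

    # Reuse the join command registered on the master node
    - name: Join the cluster
      command: "{{ hostvars[groups['tag_kubernetes_master'][0]]['join_command']['stdout'] }}"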

Run the playbook and your cluster is ready! (hurray)

Let’s simplify our lives more. Instead of having to run these plays one by one, let’s pack them into ansible roles.

Create three roles:

ansible-galaxy role init cluster
ansible-galaxy role init k8s-master
ansible-galaxy role init k8s-worker

Include all the tasks from the playbooks we created earlier in the respective tasks/main.yml files.

Finally, create a setup.yml file to include all the roles.
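
For example, a setup.yml along these lines (group names depend on your tags, as before):

# setup.yml (sketch)
- hosts: localhost
  connection: local
  roles:
    - cluster

- hosts: tag_kubernetes_master
  roles:
    - k8s-master

- hosts: tag_kubernetes_workers
  roles:
    - k8s-worker

Run it with ansible-playbook setup.yml and the whole cluster comes up in one go.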

Source Code

Connect with me on LinkedIn

Thank You!
