Provision and Configure Kubernetes Cluster on AWS using Ansible

In this blog, we will see how to provision and configure a Kubernetes cluster on AWS, automating the entire process with Ansible.


Step 1: Launch 3 EC2 Instances on AWS

To launch the instances, we will simply run a play on localhost. Connecting to AWS requires the IAM user credentials of our account: create an IAM user, then copy its access key and secret key into a local file on your system. For simplicity, you can give the IAM user administrator access.
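A minimal sketch of such a play, assuming the amazon.aws collection is installed; the AMI ID, key pair name, security group, and region below are placeholders you must replace with your own values:

```yaml
# launch-ec2.yml -- provision three instances for the cluster (1 master, 2 workers)
- hosts: localhost
  connection: local
  gather_facts: no
  vars:
    cluster_instances:
      - { Name: k8s-master, role: master }
      - { Name: k8s-worker-1, role: worker }
      - { Name: k8s-worker-2, role: worker }
  tasks:
    - name: Launch an EC2 instance for each cluster node
      amazon.aws.ec2_instance:
        name: "{{ item.Name }}"
        key_name: your-key              # replace with your key pair
        instance_type: t2.medium
        image_id: ami-xxxxxxxx          # replace with a valid AMI ID
        region: ap-south-1              # replace with your region
        security_group: k8s-sg          # replace with your security group
        tags:
          role: "{{ item.role }}"       # used later to group master vs. workers
        state: present
      loop: "{{ cluster_instances }}"
```

Tagging each instance with a `role` tag lets the dynamic inventory group the master and workers automatically in the next step.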

Step 2: Set up Inventory

Now that we have launched instances in the cloud and want to configure them further, we first need to include them in our inventory. Here, we will use a dynamic EC2 inventory, built on the classic ec2.py script and its ec2.ini configuration file.

First, export your AWS credentials, point Ansible at the inventory script, make it executable, and load your SSH key into the agent:

export AWS_ACCESS_KEY_ID='AK123'
export AWS_SECRET_ACCESS_KEY='abc123'
export EC2_INI_PATH=/path/to/my_ec2.ini
export ANSIBLE_HOSTS=/path/to/ec2.py
chmod +x /path/to/ec2.py
ssh-agent bash
ssh-add ~/.ssh/<your-key>.pem
ssh-add -L

Then configure ansible.cfg to use the dynamic inventory, your roles directory, and the ec2-user remote user:

[defaults]
inventory = /path/to/ec2.py
roles_path = <path-to-role-directory>
host_key_checking = False
remote_user = ec2-user

Step 3: Configure the Master

Now, we will create a play to run on the master: it installs the Kubernetes packages, initializes the control plane, and generates the join command for the workers.
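A sketch of the master-side tasks, assuming a yum-based distribution with Docker as the container runtime; the repository URL, pod network choice (Flannel), and CIDR below are assumptions you may need to adjust:

```yaml
# Tasks for the master role (sketch)
- name: Install and start Docker
  package:
    name: docker
    state: present

- name: Enable the Docker service
  service:
    name: docker
    state: started
    enabled: yes

- name: Add the Kubernetes yum repository (URL assumed)
  yum_repository:
    name: kubernetes
    description: Kubernetes
    baseurl: https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    gpgcheck: no

- name: Install kubeadm, kubelet and kubectl
  package:
    name: [kubeadm, kubelet, kubectl]
    state: present

- name: Initialize the control plane
  command: kubeadm init --pod-network-cidr=10.244.0.0/16

- name: Set up kubeconfig for the default user
  shell: |
    mkdir -p $HOME/.kube
    cp /etc/kubernetes/admin.conf $HOME/.kube/config

- name: Install the Flannel pod network (manifest URL assumed)
  command: kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

- name: Generate the worker join command
  command: kubeadm token create --print-join-command
  register: join_cmd
```

Registering `join_cmd` lets the worker play retrieve the exact `kubeadm join` command from the master's host variables.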

Step 4: Configure the Workers

Now, we will create a play to run on the workers, which installs the same packages and joins each node to the cluster.
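The worker-side tasks mirror the master's package setup and then run the join command; the `tag_role_master` group name below is an assumption that depends on how your instances are tagged in the dynamic inventory:

```yaml
# Tasks for the worker role (sketch)
- name: Install Docker, kubeadm and kubelet
  package:
    name: [docker, kubeadm, kubelet]
    state: present

- name: Start and enable Docker and kubelet
  service:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop: [docker, kubelet]

- name: Join the cluster using the command generated on the master
  command: "{{ hostvars[groups['tag_role_master'][0]].join_cmd.stdout }}"
```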

To keep the playbooks organized, we create a separate role for each part of the setup:

ansible-galaxy role init cluster
ansible-galaxy role init k8s-master
ansible-galaxy role init k8s-worker
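A top-level playbook can then map these roles onto the dynamic-inventory groups; as above, the tag-based group names are assumptions tied to the `role` tags on the instances:

```yaml
# setup-cluster.yml -- run the roles against the tagged hosts
- hosts: localhost
  connection: local
  roles:
    - cluster         # launches the EC2 instances

- hosts: tag_role_master
  become: yes
  roles:
    - k8s-master

- hosts: tag_role_worker
  become: yes
  roles:
    - k8s-worker
```

Run it with `ansible-playbook setup-cluster.yml`; once it finishes, `kubectl get nodes` on the master should list all three nodes.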
