Setting Up AWS EKS with eksctl
Get a production-ready Kubernetes cluster on AWS
Need a Kubernetes cluster on AWS? EKS is the managed solution, and eksctl is the easiest way to set it up. No messing with CloudFormation templates or clicking through the console - just a few commands and you're up and running.
This guide covers installing eksctl and creating a cluster with worker nodes.
What You Need
- AWS account with appropriate permissions
- AWS CLI configured with credentials
- Linux system (works on Ubuntu, Amazon Linux, etc.)
- kubectl installed
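A quick way to confirm those prerequisites are in place (read-only commands; the exact version output will differ on your machine):
# Confirm the AWS CLI is installed and your credentials resolve to the expected account
aws --version
aws sts get-caller-identity
# Confirm kubectl is installed
kubectl version --client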
Cost warning: EKS isn't free. You'll pay for the control plane ($0.10/hour, about $2.40 per day) plus the EC2 instances. This example with 2 t2.medium nodes comes to roughly $4-5 per day in total. Delete everything when you're done testing.
Part 1: Install eksctl
Step 1: Download eksctl
Download and extract the latest version:
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
This downloads eksctl, extracts it to /tmp, and automatically detects your OS.
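eksctl also publishes arm64 builds. If you're running this on an ARM instance (Graviton, for example), a variant that picks the architecture automatically looks roughly like this; on a regular x86 machine the command above is all you need:
# Pick the architecture suffix (defaults to amd64, switches to arm64 on ARM machines)
ARCH=amd64
[ "$(uname -m)" = "aarch64" ] && ARCH=arm64
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_${ARCH}.tar.gz" | tar xz -C /tmp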
Step 2: Move to System Path
Make eksctl available system-wide:
sudo mv /tmp/eksctl /usr/local/bin
Step 3: Verify Installation
Check that it works:
eksctl version
You should see the version number. Now you're ready to create clusters.
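Optionally, eksctl ships shell completion, which helps with its long flag names. A sketch for bash, assuming the bash-completion package is installed (zsh and fish are supported too):
# Install bash completion for eksctl (optional)
eksctl completion bash | sudo tee /etc/bash_completion.d/eksctl > /dev/null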
Part 2: Create EKS Cluster
Step 1: Create Control Plane
Create the cluster without worker nodes first:
eksctl create cluster --name=wanderlust --region=us-east-1 --without-nodegroup
This takes 10-15 minutes. It creates the VPC, subnets, security groups, and EKS control plane. Grab some coffee.
Why without nodes? Creating the control plane first, then adding nodes separately gives you more control and makes troubleshooting easier if something goes wrong.
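When the command finishes, eksctl writes the new context into your kubeconfig automatically. If you want to confirm the control plane from the AWS side, a quick check (assuming the same name and region as above):
# Should print ACTIVE once the control plane is ready
aws eks describe-cluster --name wanderlust --region us-east-1 --query "cluster.status" --output text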
Step 2: Associate IAM OIDC Provider
Set up IAM roles for service accounts:
eksctl utils associate-iam-oidc-provider --region us-east-1 --cluster wanderlust --approve
This lets your pods assume IAM roles, which you'll need for things like accessing S3 or other AWS services.
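To confirm the association worked, compare the cluster's OIDC issuer with the providers registered in IAM. As a purely illustrative example of what this enables (the service account name below is made up), you could later bind an AWS-managed S3 policy to a Kubernetes service account:
# The cluster's issuer URL...
aws eks describe-cluster --name wanderlust --region us-east-1 --query "cluster.identity.oidc.issuer" --output text
# ...should have a matching entry in this list
aws iam list-open-id-connect-providers
# Illustrative only: grant a service account read access to S3 via IAM roles for service accounts
eksctl create iamserviceaccount --cluster=wanderlust --region=us-east-1 --name=s3-reader --namespace=default --attach-policy-arn=arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess --approve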
Step 3: Create Node Group
Add worker nodes to your cluster:
eksctl create nodegroup --cluster=wanderlust --region=us-east-1 --node-type=t2.medium --nodes-min=2 --nodes-max=2 --node-volume-size=28
This creates 2 t2.medium instances with 28GB EBS volumes each. Takes another 5-10 minutes.
Options explained:
- --node-type=t2.medium - 2 vCPU, 4GB RAM per node
- --nodes-min=2 - Minimum nodes (no autoscaling below this)
- --nodes-max=2 - Maximum nodes (no autoscaling above this)
- --node-volume-size=28 - 28GB root volume per node
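If you skip --name, eksctl generates a nodegroup name for you, which you'll have to look up before running scale commands later. A variant with an explicit name (the label wanderlust-workers is just an example, pick your own):
# Same node group, but with a predictable name for later scaling commands
eksctl create nodegroup --cluster=wanderlust --region=us-east-1 --name=wanderlust-workers --node-type=t2.medium --nodes-min=2 --nodes-max=2 --node-volume-size=28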
Verify Your Cluster
Check that your cluster is ready:
# Check cluster info
kubectl cluster-info
# See your nodes
kubectl get nodes
# Check all system pods
kubectl get pods -A
You should see 2 nodes in Ready state and various system pods running.
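If the nodes aren't Ready yet, you can block until they are instead of re-running the command (this sketch gives up after 5 minutes):
# Wait for every node to report Ready, up to 5 minutes
kubectl wait --for=condition=Ready nodes --all --timeout=300s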
All Commands Together
Here's the complete setup:
# Install eksctl
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version
# Create cluster
eksctl create cluster --name=wanderlust --region=us-east-1 --without-nodegroup
# Set up IAM OIDC
eksctl utils associate-iam-oidc-provider --region us-east-1 --cluster wanderlust --approve
# Add worker nodes
eksctl create nodegroup --cluster=wanderlust --region=us-east-1 --node-type=t2.medium --nodes-min=2 --nodes-max=2 --node-volume-size=28
Deploy a Test App
Make sure everything works with a simple deployment:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer
kubectl get service nginx
Wait a few minutes for the LoadBalancer to provision. When the EXTERNAL-IP column shows a value (on AWS it's an ELB hostname rather than an IP), copy it and open it in your browser. You should see the nginx welcome page.
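Once you've seen the welcome page, clean up the test app. Deleting the Service is the important part: the load balancer it created bills by the hour and, if left behind, can get in the way of deleting the cluster's VPC later:
# Tear down the test app and release its load balancer
kubectl delete service nginx
kubectl delete deployment nginx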
Managing Your Cluster
Useful eksctl commands:
# List clusters
eksctl get cluster --region us-east-1
# List node groups
eksctl get nodegroup --cluster=wanderlust --region=us-east-1
# Scale node group
eksctl scale nodegroup --cluster=wanderlust --nodes=3 --name=nodegroup-name --region=us-east-1
# Update kubeconfig
aws eks update-kubeconfig --region us-east-1 --name wanderlust
Delete Everything
When you're done, delete the cluster to stop charges:
eksctl delete cluster --name=wanderlust --region=us-east-1
This removes everything - nodes, control plane, VPC, the works. Takes about 10 minutes.
Don't forget this step: Leaving a cluster running will cost you money. Always delete test clusters when you're done.
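If you want to double-check that nothing is still running (and billing), confirm eksctl sees no clusters and that the CloudFormation stacks it created are gone; the stack filter below assumes the cluster was named wanderlust:
# Should report no clusters
eksctl get cluster --region us-east-1
# Should return an empty list once deletion has finished
aws cloudformation describe-stacks --region us-east-1 --query "Stacks[?contains(StackName, 'eksctl-wanderlust')].StackName"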
Pro tip: Use eksctl with YAML config files for production. They're version-controllable and let you define everything declaratively. Check the eksctl docs for examples.
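For reference, here's a minimal config sketch that roughly mirrors this guide (same names, sizes, and region; it uses a managed node group and withOIDC, so double-check the schema in the eksctl docs before relying on it):
# Write the config and create everything in one command
cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: wanderlust
  region: us-east-1
iam:
  withOIDC: true
managedNodeGroups:
  - name: wanderlust-workers
    instanceType: t2.medium
    minSize: 2
    maxSize: 2
    desiredCapacity: 2
    volumeSize: 28
EOF
eksctl create cluster -f cluster.yaml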
That's It
You now have a production-ready Kubernetes cluster running on AWS EKS. Deploy your apps, set up CI/CD, configure monitoring - you're ready for whatever comes next.
Happy clustering! ☸️