Minimal setup for AWS EKS


Minimal Setup

Sometimes I just want to quickly spin up a working EKS cluster, while still being able to customize the solution to some extent and to understand how it is deployed.

That’s why I decided to write some modules, along with this quick guide describing the setup.

Using the default values, the following steps will create a VPC with 3 public subnets, security groups, 1 AWS EKS control plane with a public endpoint, and 1 node group with 1 t3.medium instance.

The example can be found in the repository below:


git clone https://github.com/serbangilvitu/terraform-examples.git
cd terraform-examples/aws/eks

Customize AWS profile and region

The minimum required customization is to update the aws_profile and aws_region values in the values-common.auto.tfvars file.
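For illustration, a minimal values-common.auto.tfvars might look like this (the profile and region values are just examples; use your own):

```hcl
# Example values only - replace with your own profile and region.
aws_profile = "example"
aws_region  = "eu-west-1"
```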

Apply changes

terraform init && terraform apply

The plan is presented; type yes if you agree with the resources being created.

Update .kube/config

Update the ~/.kube/config file with the following command, replacing the placeholders:

aws eks --region <aws_region> update-kubeconfig --name <stack_name>-<aws_region> --profile <aws_profile>

With the default values, the command would be:

aws eks --region eu-west-1 update-kubeconfig --name demo-c1 --profile example


The following command will create the aws-auth-cm ConfigMap, which maps the node group's IAM role so the worker nodes can join the cluster:

export eks_node_group_1_role_arn="$(terraform output eks_node_group_1_role_arn)" && \
curl -so - https://amazon-eks.s3.us-west-2.amazonaws.com/cloudformation/2020-07-23/aws-auth-cm.yaml \
| sed -e 's/<ARN.*>/${eks_node_group_1_role_arn}/g' | envsubst \
| tee | kubectl apply -f -

Watch nodes

After a short interval, the node should become Ready:

kubectl get nodes -w

Quick check

How about quickly deploying something and exposing it? The following commands are OK for a quick test, but this is obviously not the best way to deploy or expose a service.

kubectl create deployment ghost --image=ghost

The following command will create a service of type LoadBalancer, allowing public access. If you’re wondering how the traffic is allowed: creating the service also automatically adds an ingress rule to the security groups used for the worker nodes.

kubectl create service loadbalancer ghost --tcp=80:2368
kubectl get svc ghost

When opening the DNS name shown by the previous command in a browser, you should see the Ghost welcome page.
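For reference, the two imperative kubectl create commands above correspond roughly to this declarative sketch (names and labels match what kubectl create generates; treat it as an approximation):

```yaml
# Approximate declarative equivalent of the kubectl create commands above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ghost
  labels:
    app: ghost
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ghost
  template:
    metadata:
      labels:
        app: ghost
    spec:
      containers:
      - name: ghost
        image: ghost
---
apiVersion: v1
kind: Service
metadata:
  name: ghost
spec:
  type: LoadBalancer
  selector:
    app: ghost
  ports:
  - port: 80
    targetPort: 2368
```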


Remove the service to clean up the load balancer:

kubectl delete svc ghost

Remove all resources

terraform destroy


There are multiple values that can be updated; I’ll just go through a few I found interesting.

Enable logging

Logging categories can be uncommented for eks_enabled_cluster_log_types in the values-eks-cp.auto.tfvars file.

Spot instances

Making the following changes in the values-eks-ng-1.auto.tfvars file will cause all newly provisioned nodes to be spot instances.

eks_node_group_1_on_demand_base_capacity = "0"
eks_node_group_1_on_demand_percentage_above_base_capacity = "0"
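As a simplified sketch of how these two settings interact (assumption: in a mixed instances policy, the base capacity is always on-demand, the percentage applies to capacity beyond the base, and the on-demand share is rounded up):

```shell
# Simplified model: on-demand count = base + ceil(beyond_base * pct / 100),
# capped at the total. Everything else is provisioned as spot.
on_demand_instances() {
  total=$1; base=$2; pct=$3
  beyond=$(( total - base ))
  [ "$beyond" -lt 0 ] && beyond=0
  od=$(( base + (beyond * pct + 99) / 100 ))   # integer ceiling division
  [ "$od" -gt "$total" ] && od=$total
  echo "$od"
}

on_demand_instances 3 0 0    # prints 0 - with the values above, all nodes are spot
on_demand_instances 3 1 50   # prints 2 - one base node plus half the rest, rounded up
```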

eks_node_group_1_spot_max_price might also have to be adjusted, if the configured price is too low.

Changing the instance type and count

This can be achieved by modifying the following in values-eks-ng-1.auto.tfvars

eks_node_group_1_instance_type = "t3.medium"
eks_node_group_1_min_size = "1"
eks_node_group_1_max_size = "3"

More options

To further customize the solution, I tried to provide meaningful descriptions for the variables used in the example: https://github.com/serbangilvitu/terraform-examples/tree/master/aws/eks

The main.tf of the example should hopefully also be easy to understand and modify.