In this section we will deploy the instance types we selected and request nodegroups that adhere to Spot diversification best practices. For that we will use eksctl create nodegroup and eksctl configuration files to add the new nodes to the cluster.
Let’s first create the configuration file:
cat <<EoF > ~/environment/spot_nodegroups.yml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eksworkshop-eksctl
  region: $AWS_REGION
nodeGroups:
  - name: dev-4vcpu-16gb-spot
    minSize: 0
    maxSize: 5
    desiredCapacity: 1
    instancesDistribution:
      instanceTypes: ["m5.xlarge", "m5d.xlarge", "m4.xlarge", "t3.xlarge", "t3a.xlarge", "m5a.xlarge", "t2.xlarge"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 0
      spotInstancePools: 4
    labels:
      lifecycle: Ec2Spot
      intent: apps
    taints:
      spotInstance: "true:PreferNoSchedule"
    tags:
      k8s.io/cluster-autoscaler/node-template/label/lifecycle: Ec2Spot
      k8s.io/cluster-autoscaler/node-template/label/intent: apps
      k8s.io/cluster-autoscaler/node-template/taint/spotInstance: "true:PreferNoSchedule"
    iam:
      withAddonPolicies:
        autoScaler: true
        cloudWatch: true
        albIngress: true
  - name: dev-8vcpu-32gb-spot
    minSize: 0
    maxSize: 5
    desiredCapacity: 1
    instancesDistribution:
      instanceTypes: ["m5.2xlarge", "m5d.2xlarge", "m4.2xlarge", "t3.2xlarge", "t3a.2xlarge", "m5a.2xlarge", "t2.2xlarge"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 0
      spotInstancePools: 4
    labels:
      lifecycle: Ec2Spot
      intent: apps
    taints:
      spotInstance: "true:PreferNoSchedule"
    tags:
      k8s.io/cluster-autoscaler/node-template/label/lifecycle: Ec2Spot
      k8s.io/cluster-autoscaler/node-template/label/intent: apps
      k8s.io/cluster-autoscaler/node-template/taint/spotInstance: "true:PreferNoSchedule"
    iam:
      withAddonPolicies:
        autoScaler: true
        cloudWatch: true
        albIngress: true
EoF
This creates a spot_nodegroups.yml file that we will use to instruct eksctl to create two nodegroups, both with a diversified configuration:
eksctl create nodegroup -f spot_nodegroups.yml
The creation of the workers will take about 3 minutes.
There are a few things to note in the configuration that we just used to create these nodegroups.
spotInstance: "true:PreferNoSchedule". PreferNoSchedule is used to indicate that we prefer pods not be scheduled on Spot Instances. It is a "preference" or "soft" version of NoSchedule: the scheduler will try to avoid placing a pod that does not tolerate the taint on the node, but it is not required to.
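A workload that is a good fit for Spot can express that explicitly with a matching toleration and a nodeSelector on the lifecycle label. The snippet below is a minimal illustration (the pod name and image are made up for this example), not part of the workshop steps:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spot-friendly-app      # illustrative name
spec:
  containers:
    - name: app
      image: nginx             # example image
  nodeSelector:
    lifecycle: Ec2Spot          # target nodes carrying the Spot label
  tolerations:
    - key: spotInstance         # tolerate the taint set on the Spot nodegroups
      operator: Equal
      value: "true"
      effect: PreferNoSchedule
```

Note that because PreferNoSchedule is a soft taint, pods without this toleration can still end up on Spot nodes; the toleration simply removes the scheduler's preference against it.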
If you are wondering at this stage "Where is the Spot bidding price?", you have missed some of the changes EC2 Spot Instances underwent in 2017. Since November 2017, the EC2 Spot price changes infrequently, based on long-term supply and demand of spare capacity in each pool independently. You can still set a maxPrice in scenarios where you want to cap your maximum budget; by default, maxPrice is set to the On-Demand price. Regardless of the maxPrice value, Spot Instances are charged at the current Spot market price.
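As an illustration (not required for this workshop), a price cap can be expressed in an eksctl nodegroup through the maxPrice field of instancesDistribution; the value below is an arbitrary example, not a recommendation:

```yaml
nodeGroups:
  - name: dev-4vcpu-16gb-spot-capped   # hypothetical nodegroup for illustration
    instancesDistribution:
      maxPrice: 0.10                    # example cap in USD/hour; billing still follows the Spot market price
      instanceTypes: ["m5.xlarge", "m5a.xlarge", "m4.xlarge"]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 0
      spotInstancePools: 3
```

If the Spot market price for a pool rises above maxPrice, instances from that pool will not launch (or will be interrupted), which is why leaving the default On-Demand cap is usually the safer choice.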
Aside from familiarizing yourself with the kubectl commands below to obtain the cluster information, you should also explore your cluster using kube-ops-view and find out the nodes that were just created.
Confirm that the new nodes joined the cluster correctly. You should see the nodes added to the cluster.
kubectl get nodes
You can use node labels to identify the lifecycle of the nodes:
kubectl get nodes --show-labels --selector=lifecycle=Ec2Spot
The output of this command should return only the Spot nodes. At the end of each node's label list, you should see the label lifecycle=Ec2Spot.
Now we will show all nodes with the label lifecycle=OnDemand. The output of this command should return the On-Demand nodes (the ones that were labeled when creating the cluster).
kubectl get nodes --show-labels --selector=lifecycle=OnDemand
You can run kubectl describe node against one of the Spot nodes to see the taints applied to the EC2 Spot Instances.
When deploying nodegroups, eksctl creates a CloudFormation template that deploys a Launch Template and an Auto Scaling group with the settings we provided in the configuration. Auto Scaling groups backed by Launch Templates support not only mixed instance types but also mixed purchasing options within the group: you can combine On-Demand, Reserved Instances, and Spot within the same nodegroup.
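For example (not applied in this workshop), setting a non-zero On-Demand base in the instancesDistribution block would keep a fixed floor of On-Demand capacity and fill everything above it with Spot:

```yaml
instancesDistribution:
  instanceTypes: ["m5.xlarge", "m5a.xlarge", "m4.xlarge"]
  onDemandBaseCapacity: 1                  # always keep 1 On-Demand instance as a stable base
  onDemandPercentageAboveBaseCapacity: 0   # everything above the base is Spot
  spotInstancePools: 3
```

In our configuration both values are 0, which is what makes the two nodegroups Spot-only.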
The configuration we used creates two diversified instance groups with only Spot Instances. We have attached the same lifecycle: Ec2Spot label and the same spotInstance: "true:PreferNoSchedule" taint to all nodes in both groups. When using a mix of On-Demand and Spot Instances within the same nodegroup, we need to implement conditional logic based on the instance attribute InstanceLifecycle and set the labels and taints accordingly.
This can be achieved in multiple ways by extending the node bootstrapping sequence.
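As a sketch of that conditional logic, the node's user data could branch on the purchasing option reported by the instance metadata service. This assumes the standard EKS-optimized AMI (which provides /etc/eks/bootstrap.sh) and a placeholder cluster name; it is an illustration of the idea, not the exact script used by the templates below:

```bash
#!/bin/bash
# Ask the instance metadata service for the purchasing option: "spot" or "on-demand"
LIFECYCLE=$(curl -s http://169.254.169.254/latest/meta-data/instance-life-cycle)

if [ "$LIFECYCLE" = "spot" ]; then
  # Spot nodes get the Spot label and the soft taint
  /etc/eks/bootstrap.sh my-cluster \
    --kubelet-extra-args "--node-labels=lifecycle=Ec2Spot --register-with-taints=spotInstance=true:PreferNoSchedule"
else
  # On-Demand nodes only get the lifecycle label
  /etc/eks/bootstrap.sh my-cluster \
    --kubelet-extra-args "--node-labels=lifecycle=OnDemand"
fi
```

Replace my-cluster with the name of your cluster if you adapt this sketch.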
cloudformation_mixed_workers.yml provides a CloudFormation template that sets up an Auto Scaling group with mixed On-Demand and Spot workers and passes different bootstrap parameters to each node depending on its InstanceLifecycle attribute.
It will take time to provision and decommission capacity. If you are running this workshop at an AWS event or with limited time, we recommend coming back to this section once you have completed the workshop, before getting into the cleanup section.
On-Demand instances must have the label lifecycle: OnDemand. Spot Instances must have the label lifecycle: Ec2Spot and a taint spotInstance: "true:PreferNoSchedule".
You can delete the previously created nodegroups using:
eksctl delete nodegroup -f spot_nodegroups.yml
Download the example file eksctl_mixed_workers_bootstrap.yml, change the region to the one where your cluster is running, and create the nodegroups using the following command:
eksctl create nodegroup -f eksctl_mixed_workers_bootstrap.yml