Multi-Architecture Deployments

Please note: this version of the EKS and Karpenter workshop is deprecated since the launch of the Karpenter v1beta1 APIs, and has been updated and moved to a new home on AWS Workshop Studio: Karpenter: Amazon EKS Best Practice and Cloud Cost Optimization.

This workshop remains available here for reference, for anyone who has used it before or who wants to keep running Karpenter on the v1alpha5 API version.

In the previous section we defined two Provisioners, both supporting amd64 (x86_64) and arm64 architectures. In this section we will deploy applications that require a specific architecture.

If you are not familiar with the AWS support for arm64 instances, we recommend taking a look at the documentation for AWS Graviton instances. AWS Graviton processors are custom built by Amazon Web Services using 64-bit Arm Neoverse cores. They power Amazon EC2 instances such as M6g, M6gd, T4g, C6g, C6gd, C6gn, R6g, R6gd, and X2gd. Graviton instances provide up to 40% better price performance over comparable current-generation x86-based instances for a wide variety of workloads.

At the re:Invent 2021 keynote on Tuesday, November 30th, we announced the third generation of Graviton instances. You can learn more about this announcement here.

Creating Multi-Architecture Deployments

Let’s create our new Deployments. Like in the previous section, we will create two new Deployments, one for each architecture, and start them with 0 replicas.

First, let’s create the amd64 deployment. Run the following command.

cat <<EOF > inflate-amd64.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate-amd64
spec:
  replicas: 0
  selector:
    matchLabels:
      app: inflate-amd64
  template:
    metadata:
      labels:
        app: inflate-amd64
    spec:
      nodeSelector:
        intent: apps
        kubernetes.io/arch: amd64
        karpenter.sh/capacity-type: on-demand
      containers:
      - image: public.ecr.aws/eks-distro/kubernetes/pause:3.2
        name: inflate-amd64
        resources:
          requests:
            cpu: "1"
            memory: 256M
EOF
kubectl apply -f inflate-amd64.yaml

Now let’s create the arm64 Deployment.

cat <<EOF > inflate-arm64.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate-arm64
spec:
  replicas: 0
  selector:
    matchLabels:
      app: inflate-arm64
  template:
    metadata:
      labels:
        app: inflate-arm64
    spec:
      nodeSelector:
        intent: apps
        kubernetes.io/arch: arm64
        node.kubernetes.io/instance-type: c6g.xlarge
      containers:
      - image: public.ecr.aws/eks-distro/kubernetes/pause:3.2
        name: inflate-arm64
        resources:
          requests:
            cpu: "1"
            memory: 256M
EOF
kubectl apply -f inflate-arm64.yaml
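
At this point both Deployments should exist with 0 replicas. One quick way to confirm is:

kubectl get deployments inflate-amd64 inflate-arm64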

As you can see, the main differences between the two Deployments are the names and the nodeSelector labels: the amd64 Deployment selects kubernetes.io/arch: amd64 and karpenter.sh/capacity-type: on-demand, while the arm64 Deployment selects kubernetes.io/arch: arm64 and pins a specific instance type with node.kubernetes.io/instance-type: c6g.xlarge.
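
If you want to double-check this, one option (using standard kubectl output formatting) is to print the nodeSelector of both Deployments side by side:

kubectl get deploy inflate-amd64 inflate-arm64 \
  -o custom-columns='NAME:.metadata.name,NODESELECTOR:.spec.template.spec.nodeSelector'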

In this part of the workshop we will keep using Deployments with the pause image. Notice as well how both Deployments point to the same container image. Amazon ECR (Elastic Container Registry) supports multi-architecture container images. You can read more about it here.
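
As a side note, if you have the Docker CLI available in your environment (it is not required for the rest of the workshop), one way to see that the pause image is published as a multi-architecture manifest is:

docker manifest inspect public.ecr.aws/eks-distro/kubernetes/pause:3.2 | grep architecture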

Challenge

You can use Kube-ops-view or just the plain kubectl CLI to visualize the changes and answer the questions below. In the answers we provide the CLI commands that will help you check your responses. Remember: to get the URL of kube-ops-view you can run the following command:

kubectl get svc kube-ops-view | tail -n 1 | awk '{ print "Kube-ops-view URL = http://"$4 }'

Answer the following questions. A suggested answer follows each one so you can validate your understanding.

1) How would you scale the inflate-amd64 deployment to 2 replicas? What nodes did Karpenter select?

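One possible answer, using plain kubectl (the exact node names you see will depend on your cluster):

kubectl scale deployment inflate-amd64 --replicas 2
kubectl get pods -l app=inflate-amd64 -o wide
kubectl get nodes -L kubernetes.io/arch,karpenter.sh/capacity-type,node.kubernetes.io/instance-type

Because the Deployment selects kubernetes.io/arch: amd64 and karpenter.sh/capacity-type: on-demand, Karpenter should provision an on-demand amd64 (x86_64) instance for the pending pods.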

2) How would you scale the inflate-arm64 deployment to 2 replicas? What nodes did Karpenter select?

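One possible answer, again with plain kubectl:

kubectl scale deployment inflate-arm64 --replicas 2
kubectl get pods -l app=inflate-arm64 -o wide
kubectl get nodes -l kubernetes.io/arch=arm64 -L karpenter.sh/capacity-type,node.kubernetes.io/instance-type

Because the Deployment selects kubernetes.io/arch: arm64 and node.kubernetes.io/instance-type: c6g.xlarge, the new node should be a c6g.xlarge Graviton instance. Since karpenter.sh/capacity-type is not set in this Deployment, Karpenter falls back to the Provisioner defaults, which is why you may see a Spot instance here (see the summary at the end of this section).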

3) How would you scale both deployments back down to 0 replicas?

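One possible answer:

kubectl scale deployment inflate-amd64 inflate-arm64 --replicas 0
kubectl get nodes --watch

Once the pods are gone, Karpenter should eventually remove the now-empty nodes it provisioned; how quickly this happens depends on how your Provisioners are configured to handle empty nodes.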

What have we learned in this section:

In this section we have learned:

  • Karpenter uses well-known labels in the pods’ nodeSelector to override the instance types it selects. In this section we used the nodeSelector kubernetes.io/arch to select instances of the amd64 (x86_64) and arm64 architectures. We also learned that we can select a specific instance type by using the well-known label node.kubernetes.io/instance-type (i.e. c6g.xlarge).

  • When a well-known label is not specified in the nodeSelector, Karpenter falls back to the Provisioner’s default configuration for that label. In this case karpenter.sh/capacity-type was not set in the arm64 deployment, meaning that Spot instances were selected even though the default Provisioner supports both Spot and on-demand. You can verify this with the command shown after this list.

  • Karpenter scales on-demand instances using a diversified selection as well. Similar to Spot, instances are chosen by how well they can bin-pack the pending pods. Karpenter uses the lowest-price on-demand allocation strategy to select which instance to pick from those with available capacity.
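
As a quick way to check the second point while the Karpenter-provisioned nodes are still running (this assumes the karpenter.sh/provisioner-name label that Karpenter v1alpha5 applies to the nodes it launches), you can list those nodes together with their architecture, capacity type, and instance type:

kubectl get nodes -l karpenter.sh/provisioner-name -L kubernetes.io/arch,karpenter.sh/capacity-type,node.kubernetes.io/instance-type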