diff --git a/docs/README.md b/docs/README.md
index fc0fe57..a83f0fa 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -8,15 +8,6 @@ TODO: fix index
 - [Karpenter Configuration](karpenter-configuration-pre-v0-31.md)
 - [EC2 Instance Selector](ec2-instance-selector.md)
 
-## Why this is needed?
-
-- Running many nodes in EKS can cause IP address exhaustion in the VPC.
-- How many IP addresses are available to a node is determined by nodes ENI capacity.
-  - Because of this, EKS requires running many nodes to keep up with the Pod count.
-- Using a VPC with Secondary CIDR block allows us to have more IP addresses available to our pods.
-- Karpenter is a faster option for cluster autoscaling than the default EKS Cluster Autoscaler.
-- Karpenter can be configured to use Spot Instances, which can save a lot of money.
-
 ## ENI Custom Networking Demo
 
 - Creates an EKS Cluster with a VPC with Secondary CIDR block.
diff --git a/docs/eksdemo-secondary-cidr-and-cni-custom-netwoking/1-vpc-secondary-cidr-and-subnets.md b/docs/eksdemo-secondary-cidr-and-cni-custom-netwoking/1-vpc-secondary-cidr-and-subnets.md
index 7e45d3b..13395fb 100644
--- a/docs/eksdemo-secondary-cidr-and-cni-custom-netwoking/1-vpc-secondary-cidr-and-subnets.md
+++ b/docs/eksdemo-secondary-cidr-and-cni-custom-netwoking/1-vpc-secondary-cidr-and-subnets.md
@@ -4,19 +4,19 @@
 ### Export Variables
 
 ```bash
-export AWS_PAGER="" # disable the aws cli pager
+export AWS_PAGER=""                          # disable the aws cli pager
 export AWS_PROFILE=hepapi
 export AWS_REGION=eu-central-1
-export CLUSTER_NAME="tenten" # will be created with eksdemo tool
-export CLUSTER_VPC_CIDR="194.151.0.0/16" # your main EKS Cluster VPC CIDR
-export SECONDARY_CIDR_BLOCK="122.64.0.0/16" # your secondary CIDR block that will be used for pods
-export AZ1_CIDR="122.64.0.0/19" # -> make sure to
-export AZ2_CIDR="122.64.32.0/19" # -> use the correct
-export AZ3_CIDR="122.64.64.0/19" # -> AZ CIDR blocks and masks
+export CLUSTER_NAME="tenten"                 # will be created with eksdemo tool
+export CLUSTER_VPC_CIDR="194.151.0.0/16"     # main EKS Cluster VPC CIDR
+export SECONDARY_CIDR_BLOCK="122.64.0.0/16"  # secondary CIDR block, will be used for pod IPs
+export AZ1_CIDR="122.64.0.0/19"              # -> make sure to
+export AZ2_CIDR="122.64.32.0/19"             # -> use the correct
+export AZ3_CIDR="122.64.64.0/19"             # -> AZ CIDR blocks and masks
 export AZ1="eu-central-1a"
 export AZ2="eu-central-1b"
 export AZ3="eu-central-1c"
-export NODEGROUP_NAME="main" # default is 'main', keep this value
+export NODEGROUP_NAME="main"                 # default is 'main', keep this value
 ```
 
 ### Create eksdemo EKS cluster
@@ -183,7 +183,6 @@ existing_node_group_subnets=$(aws eks describe-nodegroup \
     | awk -F'\t' '{for (i = 1; i <= NF; i++) print $i}')
 
 echo "Existing Node Group Subnets: \n${existing_node_group_subnets:-'ERROR: should have existing_node_group_subnets, fix before continuing'}"
-# Use a for loop to iterate through the lines and echo them
 
 while IFS=$'\t' read -r subnet_id ; do
    # echo "${subnet_id}"
@@ -192,11 +191,11 @@ while IFS=$'\t' read -r subnet_id ; do
        "Key=karpenter.sh/discovery,Value=${CLUSTER_NAME}"
 done <<< $existing_node_group_subnets
 
-
 ```
 
 ```bash
+# tag the secondary subnets
 aws ec2 create-tags --resources "$CUST_SNET1" --tags \
    "Key=Name,Value=SecondarySubnet-A-${CLUSTER_NAME}" \
    "Key=kubernetes.io/role/internal-elb,Value=1" \
@@ -215,7 +214,8 @@ aws ec2 create-tags --resources "$CUST_SNET3" --tags \
    "Key=alpha.eksctl.io/cluster-name,Value=${CLUSTER_NAME}" \
    "Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=shared"
 
-
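+
+# optional sanity check (a sketch): list the subnets that now carry the
+# karpenter.sh/discovery tag -- the node group subnets tagged in the loop
+# above should show up here
+aws ec2 describe-subnets \
+    --filters "Name=tag:karpenter.sh/discovery,Values=${CLUSTER_NAME}" \
+    --query "Subnets[].SubnetId" --output text
+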
+# tag the Cluster Security Group as well
+# (NOTE: the tag "kubernetes.io/cluster/${CLUSTER_NAME}=shared" is required and is probably already there)
 aws ec2 create-tags --resources "$CLUSTER_SECURITY_GROUP_ID" --tags \
    "Key=karpenter.sh/discovery,Value=${CLUSTER_NAME}" \
    "Key=alpha.eksctl.io/cluster-name,Value=${CLUSTER_NAME}" \
diff --git a/docs/eksdemo-secondary-cidr-and-cni-custom-netwoking/2-aws-vpc-cni-configuration.md b/docs/eksdemo-secondary-cidr-and-cni-custom-netwoking/2-aws-vpc-cni-configuration.md
index 9e43998..e8c4374 100644
--- a/docs/eksdemo-secondary-cidr-and-cni-custom-netwoking/2-aws-vpc-cni-configuration.md
+++ b/docs/eksdemo-secondary-cidr-and-cni-custom-netwoking/2-aws-vpc-cni-configuration.md
@@ -1,8 +1,7 @@
 # 2. AWS VPC CNI & ENIConfig configuration for Custom Networking
 
-
-
 ### AWS-Node and CNI Configuration
+
 ```bash
 # Get the current env vars of aws-node
 kubectl get daemonset aws-node -n kube-system -o jsonpath='{.spec.template.spec.containers[0].env}' | jq -r '.[] | .name + "=" + .value'
@@ -13,36 +12,42 @@ kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK
 kubectl set env daemonset aws-node -n kube-system ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone
 ```
 
-#### Environment Variables Explained
-
-- `kube-system ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone`:
-  - Means that AWS CNI is using the _topology.kubernetes.io/zone_ label to determine the `ENIConfig` name(`kubectl get eniconfig`) for that node.
-  - _topology.kubernetes.io/zone_ label is automatically added to the nodes by the kubelet as `eu-west-1a` or `eu-west-1b` or `eu-west-1c`, so we don't need any extra node tagging to do.
-  - This way we have a consistent way of applying the ENIConfig to the nodes.
-  - `ENIConfig` has the info about which Subnet and Security Groups should be used for the ENI.
-  - Our nodes will have their 1st ENI configured with the default VPC CIDR block, and the 2nd ENI will be configured with the Secondary CIDR block.
-  - Pods get their IPs from 2nd ENI, and the 1st ENI is used for the node communication.
-  - We will have 1st ENI reserved for pods, and all other ENIs will be used for the pod communication and in the Secondary CIDR block.
-- `AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true`:
-  - AWS CNI will use the `ENIConfig` objects which we create to configure the ENIs.
-  - Means that we are enabling custom networking on the CNI level. This change will help us to use the secondary CIDR block for the pods.
-  - This configuration **requires the existing node EC2 Instances be be restarted to take effect**.
-
-### ENIConfig k8s CRDs
+
+#### Environment Variables Explained
+
+- `ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone`:
+  - Means that AWS CNI uses the _topology.kubernetes.io/zone_ label to determine the `ENIConfig` name (`kubectl get eniconfig`) for that node.
+  - The _topology.kubernetes.io/zone_ label is added to every node automatically by the kubelet (as `eu-west-1a`, `eu-west-1b`, `eu-west-1c`, etc.), so no extra node labeling is needed.
+  - This gives us a consistent way of applying the right `ENIConfig` to every node.
+  - `ENIConfig` holds the info about which Subnet and Security Groups should be used for the ENI.
+  - Our nodes will have their 1st ENI configured with the default VPC CIDR block, and the 2nd ENI configured with the Secondary CIDR block.
+  - Pods get their IPs from the 2nd ENI, while the 1st ENI is used for node communication.
+  - In other words, the 1st ENI is reserved for the node itself, and all other ENIs carry pod traffic with IPs from the Secondary CIDR block.
+- `AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true`:
+  - AWS CNI will use the `ENIConfig` objects that we create to configure the ENIs.
+  - AWS CNI will look for the label `${ENI_CONFIG_LABEL_DEF}` on the node, and will use the value of that label to find the `ENIConfig` object by name (you can verify this mapping with the quick check below).
+  - This enables custom networking at the CNI level, which is what lets the pods use the secondary CIDR block.
+  - This configuration **requires the existing node EC2 Instances to be restarted to take effect**.
+
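+As a quick sanity check (a sketch, not a required step), the zone labels on the nodes should line up one-to-one with the `ENIConfig` names created in the next section:
+
+```bash
+# print the zone label of every node...
+kubectl get nodes -L topology.kubernetes.io/zone
+# ...each value should match the name of an ENIConfig object
+kubectl get eniconfigs
+```
+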
+#### ENIConfig k8s CRDs
+
 - ENIConfig CRD is used by AWS CNI to create ENIs with the specified configuration for that Availability Zone.
 - The daemonset `aws-node` has an env. var. called `ENI_CONFIG_LABEL_DEF`, and it is used to match
-  ```
-  NodeLabels:
-      topology.kubernetes.io/zone=eu-west-1a
-      ...
-
-  AWS CNI makes the following configuration
-  (selected ENIConfig name for node) = NodeLabels[ENI_CONFIG_LABEL_DEF]
-  ```
-- We are informing AWS CNI to look for the node label `topology.kubernetes.io/zone`.
-  - For example, if the label value is `eu-west-1a`, AWS CNI will use the `ENIConfig` named `eu-west-1a`.
-
-#### Let's create the ENIConfig objects
+
+  ```
+  NodeLabels:
+      topology.kubernetes.io/zone=eu-west-1a
+      ...
+
+  AWS CNI makes the following configuration
+  (selected ENIConfig name for node) = NodeLabels[ENI_CONFIG_LABEL_DEF]
+  ```
+
+- In short, we are telling AWS CNI to look for the node label `topology.kubernetes.io/zone`.
+  - For example, if the label value is `eu-west-1a`, AWS CNI will use the `ENIConfig` named `eu-west-1a`.
+
+### Let's create the ENIConfig objects
+
 ```bash
 cat << EOF | kubectl apply -f -
 apiVersion: crd.k8s.amazonaws.com/v1alpha1
@@ -79,14 +84,12 @@ kubectl get eniconfig ${AZ2} -o yaml; echo "---";
 kubectl get eniconfig ${AZ3} -o yaml; echo "---";
 ```
-
 ### Restart the Node Group Instances
 
 - Terminate the Node Group instances to have them recreated with the new ENI configuration.
 - After the instances are recreated, **check that they got their IP Addresses from the VPC Secondary CIDR Block**.
 
-
 ![Managed Node EC2 Instance should have ips](../images/managed-node-instance-ip-addrs-on-secondary-cidr.png)
 
-
 ### Test Pods having IP addresses from Secondary CIDR Block
 
 ```bash
@@ -99,4 +102,4 @@ kubectl get pods -o wide
 
 kubectl port-forward svc/nginx 8000:80
 # check localhost:8000 on browser
-```
\ No newline at end of file
+```
diff --git a/docs/eksdemo-secondary-cidr-and-cni-custom-netwoking/README.md b/docs/eksdemo-secondary-cidr-and-cni-custom-netwoking/README.md
index 4460196..2eed4cd 100644
--- a/docs/eksdemo-secondary-cidr-and-cni-custom-netwoking/README.md
+++ b/docs/eksdemo-secondary-cidr-and-cni-custom-netwoking/README.md
@@ -7,6 +7,16 @@
 - This tutorial also includes Karpenter configuration to make use of the secondary CIDR block.
 - This demo is for pre-`v0.32` (`v1alpha`) Karpenter versions, but the AWS CNI and ENIConfig steps apply regardless.
 
+## Why is this needed?
+
+- Running many nodes (and pods) in EKS can exhaust the IP address space of the VPC.
+- The number of IP addresses available on a node is determined by the node's ENI capacity (see the worked example below).
+  - Because of this, a cluster often needs many nodes just to keep up with the Pod count.
+- Using a VPC with a Secondary CIDR block makes many more IP addresses available to our pods.
+- Karpenter is a faster option for cluster autoscaling than the default EKS Cluster Autoscaler.
+- Karpenter can be configured to use Spot Instances, which can save a lot of money.
+
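+To make the ENI-capacity point concrete, here is a back-of-the-envelope sketch. The numbers (3 ENIs, 10 IPv4 addresses per ENI) are for an `m5.large` and are only an example; check the EC2 network interface limits table for your instance type:
+
+```bash
+ENIS=3; IPS_PER_ENI=10   # m5.large, for illustration only
+# standard EKS max-pods formula:
+echo "default:           $(( ENIS * (IPS_PER_ENI - 1) + 2 )) pods"       # 29
+# with custom networking, the primary ENI no longer hosts pod IPs:
+echo "custom networking: $(( (ENIS - 1) * (IPS_PER_ENI - 1) + 2 )) pods" # 20
+```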
+
 ## Hands-on Demo
 
 ### Prerequisites