
Commit

update docs
Oguzhan Yilmaz committed Nov 3, 2023
1 parent 121bbce commit 1be5d26
Showing 4 changed files with 57 additions and 53 deletions.
9 changes: 0 additions & 9 deletions docs/README.md
@@ -8,15 +8,6 @@
TODO: fix index
- [Karpenter Configuration](karpenter-configuration-pre-v0-31.md)
- [EC2 Instance Selector](ec2-instance-selector.md)


## ENI Custom Networking Demo
- Creates an EKS Cluster in a VPC with a Secondary CIDR block.
@@ -4,19 +4,19 @@
### Export Variables

```bash
export AWS_PAGER=""                         # disable the aws cli pager
export AWS_PROFILE=hepapi
export AWS_REGION=eu-central-1
export CLUSTER_NAME="tenten"                # will be created with the eksdemo tool
export CLUSTER_VPC_CIDR="194.151.0.0/16"    # main EKS Cluster VPC CIDR
export SECONDARY_CIDR_BLOCK="122.64.0.0/16" # secondary CIDR block, will be used for pod IPs
export AZ1_CIDR="122.64.0.0/19"             # -> make sure to
export AZ2_CIDR="122.64.32.0/19"            # -> use the correct
export AZ3_CIDR="122.64.64.0/19"            # -> AZ CIDR blocks and masks
export AZ1="eu-central-1a"
export AZ2="eu-central-1b"
export AZ3="eu-central-1c"
export NODEGROUP_NAME="main"                # default is 'main', keep this value
```

### Create eksdemo EKS cluster
@@ -183,7 +183,6 @@

```bash
# Collect the subnet IDs of the existing node group
existing_node_group_subnets=$(aws eks describe-nodegroup \
  --cluster-name "${CLUSTER_NAME}" \
  --nodegroup-name "${NODEGROUP_NAME}" \
  --query 'nodegroup.subnets' --output text \
  | awk -F'\t' '{for (i = 1; i <= NF; i++) print $i}')
echo -e "Existing Node Group Subnets: \n${existing_node_group_subnets:-'ERROR: should have existing_node_group_subnets, fix before continuing'}"
# Iterate over the subnet IDs and tag each one for Karpenter discovery
while IFS=$'\t' read -r subnet_id ; do
  # echo "${subnet_id}"
  aws ec2 create-tags --resources "${subnet_id}" --tags \
    "Key=karpenter.sh/discovery,Value=${CLUSTER_NAME}"
done <<< $existing_node_group_subnets
```
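
To confirm the discovery tag landed on those subnets, here is a quick check (assuming the loop above completed without errors):

```bash
# List every subnet that now carries the karpenter.sh/discovery tag
aws ec2 describe-subnets \
  --filters "Name=tag:karpenter.sh/discovery,Values=${CLUSTER_NAME}" \
  --query 'Subnets[].{ID:SubnetId,CIDR:CidrBlock,AZ:AvailabilityZone}' \
  --output table
```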


```bash
# tag the subnets
aws ec2 create-tags --resources "$CUST_SNET1" --tags \
"Key=Name,Value=SecondarySubnet-A-${CLUSTER_NAME}" \
"Key=kubernetes.io/role/internal-elb,Value=1" \
```

@@ -215,7 +214,8 @@

```bash
aws ec2 create-tags --resources "$CUST_SNET3" --tags \
"Key=alpha.eksctl.io/cluster-name,Value=${CLUSTER_NAME}" \
"Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=shared"
# tag Cluster Security Group as well
# (NOTE: the tag "kubernetes.io/cluster/${CLUSTER_NAME}=shared" is required and is probably already there)
aws ec2 create-tags --resources "$CLUSTER_SECURITY_GROUP_ID" --tags \
"Key=karpenter.sh/discovery,Value=${CLUSTER_NAME}" \
"Key=alpha.eksctl.io/cluster-name,Value=${CLUSTER_NAME}" \
  "Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=shared"
```
@@ -1,8 +1,7 @@
# 2. AWS VPC CNI & ENIConfig configuration for Custom Networking



### AWS-Node and CNI Configuration

```bash
# Get the current env vars of aws-node
kubectl get daemonset aws-node -n kube-system -o jsonpath='{.spec.template.spec.containers[0].env}' | jq -r '.[] | .name + "=" + .value'
kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
kubectl set env daemonset aws-node -n kube-system ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone

```
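
Changing these env vars rolls out new `aws-node` pods. A quick way to wait for the rollout and re-check the values (standard `kubectl`, assuming the default daemonset name):

```bash
# Wait for the aws-node daemonset to roll out the updated pod template
kubectl rollout status daemonset aws-node -n kube-system --timeout=180s
# Re-check the env vars to confirm both values took effect
kubectl get daemonset aws-node -n kube-system -o jsonpath='{.spec.template.spec.containers[0].env}' | jq -r '.[] | .name + "=" + .value'
```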

#### Environment Variables Explained

- `ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone`:
  - Means that AWS CNI uses the _topology.kubernetes.io/zone_ label to determine the `ENIConfig` name (`kubectl get eniconfig`) for that node (see the quick check after this list).
  - The _topology.kubernetes.io/zone_ label is automatically added to the nodes by the kubelet (e.g. `eu-west-1a`, `eu-west-1b`, `eu-west-1c`), so we don't need to do any extra node labeling.
  - This way we have a consistent way of applying the `ENIConfig` to the nodes.
  - `ENIConfig` holds the info about which Subnet and Security Groups should be used for the ENI.
  - Our nodes will have their 1st ENI configured with the default VPC CIDR block, while the 2nd and later ENIs are configured with the Secondary CIDR block.
  - Pods get their IPs from the 2nd ENI; the 1st ENI is used for node communication.
- `AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true`:
  - AWS CNI will use the `ENIConfig` objects that we create to configure the ENIs.
  - AWS CNI will look for the label `${ENI_CONFIG_LABEL_DEF}` on the node, and will use the value of that label to find the `ENIConfig` object by name.
  - This enables custom networking at the CNI level, which lets the pods use the secondary CIDR block.
  - This configuration **requires the existing node EC2 instances to be restarted to take effect**.
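
A quick way to see which `ENIConfig` each node will pick up is to list the nodes with their zone label (plain `kubectl`, nothing specific to this demo beyond the label name):

```bash
# The ZONE column is the ENIConfig name AWS CNI will select for each node,
# because ENI_CONFIG_LABEL_DEF=topology.kubernetes.io/zone
kubectl get nodes -L topology.kubernetes.io/zone
```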

#### ENIConfig k8s CRDs

- The `ENIConfig` CRD is used by AWS CNI to create ENIs with the specified configuration for that Availability Zone.
- The daemonset `aws-node` has an env var called `ENI_CONFIG_LABEL_DEF`, which is used to match a node label to an `ENIConfig` name:

```
NodeLabels:
topology.kubernetes.io/zone=eu-west-1a
...
AWS CNI makes the following configuration
(selected ENIConfig name for node) = NodeLabels[ENI_CONFIG_LABEL_DEF]
```

- We are informing AWS CNI to look for the node label `topology.kubernetes.io/zone`.
- For example, if the label value is `eu-west-1a`, AWS CNI will use the `ENIConfig` named `eu-west-1a`.

### Let's create the ENIConfig objects

```bash
cat << EOF | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
```

@@ -79,14 +84,12 @@

```bash
kubectl get eniconfig ${AZ2} -o yaml; echo "---";
kubectl get eniconfig ${AZ3} -o yaml; echo "---";
```
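
For reference, each per-AZ `ENIConfig` pairs that AZ's secondary-CIDR subnet with the cluster security group. A minimal sketch of one such object, assuming the `AZ1`, `CUST_SNET1` and `CLUSTER_SECURITY_GROUP_ID` variables exported earlier:

```bash
# Sketch: an ENIConfig named after the AZ, so the node's zone label can select it
cat << EOF | kubectl apply -f -
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: ${AZ1}
spec:
  subnet: ${CUST_SNET1}           # secondary-CIDR subnet for this AZ
  securityGroups:
    - ${CLUSTER_SECURITY_GROUP_ID}
EOF
```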


### Restart the Node Group Instances

- Terminate the Node Group instances to have them recreated with the new ENI configuration, as sketched below.
- After the instances are recreated, **check them to see if they got their IP addresses from the VPC Secondary CIDR Block**.
- ![Managed node EC2 instance IP addresses on the secondary CIDR](../images/managed-node-instance-ip-addrs-on-secondary-cidr.png)
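
One way to recycle them is to terminate the instances by their node group tags and let the managed node group replace them (a sketch, assuming the `eks:cluster-name` and `eks:nodegroup-name` tags that EKS puts on managed node group instances):

```bash
# Find the running instances of the node group, then terminate them
# so the node group brings up fresh ones with the new ENI configuration
instance_ids=$(aws ec2 describe-instances \
  --filters "Name=tag:eks:cluster-name,Values=${CLUSTER_NAME}" \
            "Name=tag:eks:nodegroup-name,Values=${NODEGROUP_NAME}" \
            "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].InstanceId' --output text)
aws ec2 terminate-instances --instance-ids ${instance_ids}
```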


### Test that Pods get IP addresses from the Secondary CIDR Block

@@ -99,4 +102,4 @@

```bash
kubectl get pods -o wide

kubectl port-forward svc/nginx 8000:80
# check localhost:8000 on browser
```
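
To assert that the pod IPs really come from the secondary block, a rough check against the CIDR prefix (assuming the `122.64.0.0/16` block from the setup):

```bash
# Print every pod IP and flag any that fall outside 122.64.0.0/16
kubectl get pods -o jsonpath='{range .items[*]}{.status.podIP}{"\n"}{end}' \
  | grep -v '^122\.64\.' \
  && echo "WARNING: some pod IPs are outside the secondary CIDR" \
  || echo "OK: all pod IPs are in 122.64.0.0/16"
```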
10 changes: 10 additions & 0 deletions docs/eksdemo-secondary-cidr-and-cni-custom-netwoking/README.md
@@ -7,6 +7,16 @@
- This tutorial also includes the Karpenter configuration for making use of the secondary CIDR block.
- This demo targets pre-`v0.32` (`v1alpha`) Karpenter versions, but the AWS CNI and ENIConfig steps should work fine regardless.

## Why is this needed?

- Running many nodes in EKS can cause IP address exhaustion in the VPC.
- How many IP addresses are available to a node is determined by the node's ENI capacity.
- Because of this, keeping up with the Pod count in EKS can require running many nodes.
- Using a VPC with a Secondary CIDR block gives our pods many more available IP addresses.
- Karpenter is a faster option for cluster autoscaling than the default EKS Cluster Autoscaler.
- Karpenter can be configured to use Spot Instances, which can save a lot of money.


## Hands-on Demo

### Prerequisites

