Kubernetes can restrict the amount of CPU and memory available to application pods. This is especially useful when a Kubernetes cluster is not set to auto-scale and has constrained resources shared by multiple teams.
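Pods opt in to this scheduling behaviour by declaring resource requests and limits on each container. A minimal sketch of such a fragment is below; the actual values live in examples/resource-quotas/deployment.yaml and may differ, so the numbers here are purely illustrative:

```yaml
# Illustrative container resources fragment (assumed values; check
# examples/resource-quotas/deployment.yaml for the real ones).
resources:
  requests:
    cpu: 200m        # the scheduler reserves this much CPU per pod
    memory: 128Mi    # and this much memory
  limits:
    cpu: 500m        # CPU usage above this is throttled
    memory: 256Mi    # memory usage above this gets the pod OOM-killed
```

The scheduler places pods based on the *requests*; a pod that cannot have its requests satisfied on any node stays Pending, which is exactly what this demo provokes.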
The relevant Kubernetes documentation is found here
These steps are to be executed from your local machine!
$ cd /[LOCATION YOU CLONED THIS REPO]/GKE-hands-on-training
$ kubectl apply -f examples/resource-quotas/service.yaml
$ kubectl apply -f examples/resource-quotas/deployment.yaml
$ kubectl get pods -o wide
You should now see:
NAME READY STATUS RESTARTS AGE IP NODE
resource-quota-demo-4067652524-01jh5 1/1 Running 0 41s 172.16.235.216 worker1
resource-quota-demo-4067652524-72xxh 1/1 Running 0 41s 172.16.235.217 worker1
resource-quota-demo-4067652524-wnhgn 1/1 Running 0 41s 172.16.235.218 worker1
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
resource-quota-demo 3 3 3 3 1m
Create 100 replicas of this crappy app!
$ kubectl scale deployment resource-quota-demo --replicas=100
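Why won't 100 replicas fit? A quick back-of-the-envelope check makes it obvious. The per-pod requests and node sizes below are assumptions for illustration; substitute the real values from your deployment.yaml and from `kubectl describe nodes`:

```shell
# Hypothetical numbers -- adjust to match your manifest and cluster.
CPU_REQUEST_MILLI=200   # assumed per-pod CPU request (200m)
MEM_REQUEST_MI=128      # assumed per-pod memory request (128Mi)
REPLICAS=100
NODE_CPU_MILLI=2000     # assumed allocatable CPU per node (2 cores)
NODES=3

TOTAL_CPU=$((CPU_REQUEST_MILLI * REPLICAS))
TOTAL_MEM=$((MEM_REQUEST_MI * REPLICAS))
CLUSTER_CPU=$((NODE_CPU_MILLI * NODES))

echo "Requested: ${TOTAL_CPU}m CPU, ${TOTAL_MEM}Mi memory"
echo "Cluster allocatable CPU: ${CLUSTER_CPU}m"
```

With these assumed numbers the deployment asks for 20000m of CPU against only 6000m allocatable, so most replicas can never be scheduled.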
If you closed the Kubernetes Dashboard, follow the instructions to open the Dashboard here
Now click on Pods from the left-hand navigation menu.
You should see some healthy pods (with green ticks next to them). However, many pods will display the following error:
pod (resource-quota-demo-4067652524-jxlrw) failed to fit in any node fit failure summary on nodes : Insufficient cpu (1), Insufficient memory (1)
We can also see this information by describing the deployment:
$ kubectl describe deployment resource-quota-demo
You will see a Conditions section containing:
Type Status Reason
Available False MinimumReplicasUnavailable
Scale the deployment back down to three replicas:
$ kubectl scale deployment resource-quota-demo --replicas=3
Finally, execute the following commands to tidy away the demo:
$ kubectl delete -f examples/resource-quotas/service.yaml
$ kubectl delete -f examples/resource-quotas/deployment.yaml