A simplified way to provide a Grafana monitoring setup inside a k8s cluster (preferably local), with 2 replicas running on different nodes. Nginx acts as a load balancer, distributing traffic among the pods through the cluster's NodePort.
- Having a local (or cloud provider) Kubernetes infrastructure with at least two worker nodes;
- Set your IPs (the NFS storage server and the two worker nodes, used later in pv.yaml and nginx.conf);
- Make sure the Kubernetes and load balancer ports are released in your network;
- Make sure you have Docker installed.
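A quick way to confirm the prerequisites before starting (illustrative commands; adjust to your environment):

docker --version
kubectl get nodes -o wide   # expect at least two worker nodes in Ready state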
- Clone the repo
git clone https://github.com/gui-sousa/grafana-k8s-lb.git
- Run the nginx load balancer image
docker run -d -p 80:80 --name lb-grafana gsousa/grafana-bwg:latest
- Deploy the k8s manifests (a quick verification sketch follows these steps)
kubectl apply -f pv.yaml && \
kubectl apply -f pvc.yaml && \
kubectl apply -f service.yaml && \
kubectl apply -f deployment.yaml
- Access your Grafana initial setup page at http://localhost
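Once everything is applied, a few illustrative checks (the commands are standard; the expected output is an assumption based on this setup) to confirm the pieces are in place:

docker ps --filter name=lb-grafana   # the nginx load balancer container should be running
kubectl get pods -o wide             # the two Grafana pods should be scheduled on different nodes
kubectl get svc                      # the Grafana service should expose NodePort 32009
curl -I http://localhost             # should answer through nginx with a response from Grafana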
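The deployment and service manifests themselves live in the repository; as a rough sketch of what they need to express for the setup described above (the names, labels, image tag, and anti-affinity rule here are assumptions, only the replica count and NodePort come from this README):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana                      # name assumed for illustration
spec:
  replicas: 2                        # two Grafana replicas, as described above
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      affinity:
        podAntiAffinity:             # assumed rule to keep the replicas on different nodes
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: grafana
              topologyKey: kubernetes.io/hostname
      containers:
        - name: grafana
          image: grafana/grafana:latest
          ports:
            - containerPort: 3000    # Grafana's default HTTP port
          volumeMounts:
            - name: grafana-storage
              mountPath: /var/lib/grafana
      volumes:
        - name: grafana-storage
          persistentVolumeClaim:
            claimName: grafana-pvc   # claim name assumed; must match pvc.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  type: NodePort
  selector:
    app: grafana
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 32009                # the NodePort referenced in nginx.conf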
In pv.yaml, change the NFS server IP address to match your environment:
nfs:
  server: <YOUR STORAGE IP>
  path: "/mnt/apps/grafana-vol"
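For context, the nfs block sits under spec in the PersistentVolume. A minimal sketch of what pv.yaml might look like (the name, capacity, access mode, and reclaim policy below are assumptions; only the nfs block comes from the repo):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: grafana-pv                   # name assumed for illustration
spec:
  capacity:
    storage: 5Gi                     # size assumed; should match the request in pvc.yaml
  accessModes:
    - ReadWriteMany                  # assumed so both replicas can mount the same NFS export
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: <YOUR STORAGE IP>
    path: "/mnt/apps/grafana-vol"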
Do the same in the upstream block in nginx.conf, replacing the node addresses:
upstream grafana {
    server <K3S-NODE-1>:32009 weight=2 max_fails=3 fail_timeout=10;
    server <K3S-NODE-2>:32009 backup;
}
In this file there is a load-balancing configuration between the two nodes running Grafana. All traffic is directed to NODE-1, while NODE-2 remains as a backup. In this scenario, if Grafana on NODE-1 experiences 3 connection failures within 10 seconds (max_fails=3 fail_timeout=10), all traffic is redirected to the Grafana hosted on NODE-2.
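For orientation, a minimal sketch of how this upstream could be wired into the rest of nginx.conf; only the upstream block comes from the repository, while the surrounding events/http/server blocks and proxy headers are assumptions:

events {}

http {
    upstream grafana {
        server <K3S-NODE-1>:32009 weight=2 max_fails=3 fail_timeout=10;
        server <K3S-NODE-2>:32009 backup;
    }

    server {
        listen 80;                        # matches the -p 80:80 mapping in the docker run step

        location / {
            proxy_pass http://grafana;    # forward requests to the upstream defined above
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}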
Gui Sousa - https://www.linkedin.com/in/guilherme-sousa-rodrigues/
Project Link: https://github.com/gui-sousa/grafana-k8s-lb