This repository has been archived by the owner on Apr 25, 2024. It is now read-only.

201 monitoring update #518

Open · wants to merge 5 commits into master
02-path-working-with-clusters/201-cluster-monitoring/readme.adoc (26 changes: 24 additions & 2 deletions)
@@ -258,7 +258,17 @@ Prometheus is now scraping metrics from the different scraping targets and we fo
 $ kubectl port-forward $(kubectl get po -l prometheus=prometheus -n monitoring -o jsonpath={.items[0].metadata.name}) 9090 -n monitoring
 Forwarding from 127.0.0.1:9090 -> 9090
 
-Now open the browser at http://localhost:9090/targets and all targets should be shown as `UP` (it might take a couple of minutes until data collectors are up and running for the first time). The browser displays the output as shown:
+Now open the browser at http://localhost:9090/targets.
+
+If you are running this in the Cloud9 IDE, run the following instead, forwarding local port 8080 so that the dashboard can be viewed through the Cloud9 preview:
+
+$ kubectl port-forward $(kubectl get po -l prometheus=prometheus -n monitoring -o jsonpath={.items[0].metadata.name}) 8080:9090 -n monitoring
+Forwarding from 127.0.0.1:8080 -> 9090
+Forwarding from [::1]:8080 -> 9090
+
+The dashboard will then be available at https://<ENV_ID>.vfs.cloud9.<REGION_ID>.amazonaws.com/targets.
+
+All targets should be shown as `UP` (it might take a couple of minutes until the data collectors are up and running for the first time). The browser displays the output as shown:
 
 image::monitoring-grafana-prometheus-dashboard-1.png[]
 image::monitoring-grafana-prometheus-dashboard-2.png[]
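
To check the targets without a browser, you can also query the Prometheus HTTP API through the forwarded port (a quick sketch, assuming `jq` is installed; use port 9090 for the plain local forward or 8080 for the Cloud9 variant above):

$ curl -s http://localhost:8080/api/v1/targets | jq -r '.data.activeTargets[] | .labels.job + ": " + .health'

Each target should report `up` once the collectors have finished starting.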
@@ -287,7 +297,17 @@ Lets forward the grafana dashboard to a local port:
 $ kubectl port-forward $(kubectl get pod -l app=grafana -o jsonpath={.items[0].metadata.name} -n monitoring) 3000 -n monitoring
 Forwarding from 127.0.0.1:3000 -> 3000
 
-Grafana dashboard is now accessible at http://localhost:3000/. The complete list of dashboards is available using the search button at the top:
+The Grafana dashboard is now accessible at http://localhost:3000/.
+
+If you are running this in the Cloud9 IDE, run the following instead, forwarding local port 8080 so that the dashboard can be viewed through the Cloud9 preview:
+
+$ kubectl port-forward $(kubectl get pod -l app=grafana -o jsonpath={.items[0].metadata.name} -n monitoring) 8080:3000 -n monitoring
+Forwarding from 127.0.0.1:8080 -> 3000
+Forwarding from [::1]:8080 -> 3000
+
+The dashboard will then be available at https://<ENV_ID>.vfs.cloud9.<REGION_ID>.amazonaws.com/.
+
+The complete list of dashboards is available using the search button at the top:
 
 image::monitoring-grafana-prometheus-dashboard-dashboard-home.png[]
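
Before opening the browser, you can confirm that Grafana is reachable through the forwarded port (a minimal check against Grafana's `/api/health` endpoint; use port 3000 for the plain local forward or 8080 for the Cloud9 variant above):

$ curl -s http://localhost:8080/api/health

A response containing `"database": "ok"` means Grafana is up and serving requests.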

@@ -316,6 +336,8 @@ Convenient link for other dashboards are listed below:
 * http://localhost:3000/dashboard/db/kubernetes-resource-requests?orgId=1
 * http://localhost:3000/dashboard/db/pods?orgId=1
 
+(For Cloud9 users, replace `http://localhost:3000/` with `https://<ENV_ID>.vfs.cloud9.<REGION_ID>.amazonaws.com/` in each of these links.)
+
 === Cleanup
 
 Remove all the installed components:
@@ -97,7 +97,7 @@ spec:
       - args:
         - --kubelet-service=kube-system/kubelet
        - --config-reloader-image=quay.io/coreos/configmap-reload:v0.0.1
-        image: quay.io/coreos/prometheus-operator:v0.14.1
+        image: quay.io/coreos/prometheus-operator:v0.21.0
         name: prometheus-operator
         ports:
         - containerPort: 8080
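
Once the updated manifest is applied, a quick way to confirm the operator is running the new image (a sketch assuming the Deployment is named `prometheus-operator` in the `monitoring` namespace):

$ kubectl -n monitoring get deployment prometheus-operator -o jsonpath='{.spec.template.spec.containers[0].image}'

This should print `quay.io/coreos/prometheus-operator:v0.21.0`.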
@@ -160,7 +160,7 @@ spec:
       serviceAccountName: kube-state-metrics
       containers:
       - name: kube-state-metrics
-        image: quay.io/coreos/kube-state-metrics:v1.0.1
+        image: quay.io/coreos/kube-state-metrics:v1.3.1
         ports:
         - name: metrics
           containerPort: 8080
@@ -171,7 +171,7 @@ spec:
           initialDelaySeconds: 5
           timeoutSeconds: 5
       - name: addon-resizer
-        image: k8s.gcr.io/addon-resizer:1.0
+        image: k8s.gcr.io/addon-resizer:1.7
         resources:
           limits:
             cpu: 100m
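
The same check works for the kube-state-metrics bumps (a sketch assuming the pods carry an `app=kube-state-metrics` label; adjust the selector to match your manifest):

$ kubectl -n monitoring get pods -l app=kube-state-metrics -o jsonpath='{.items[*].spec.containers[*].image}'

Both `quay.io/coreos/kube-state-metrics:v1.3.1` and `k8s.gcr.io/addon-resizer:1.7` should appear in the output.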
@@ -225,7 +225,7 @@ metadata:
 spec:
   replicas: 2
   version: v2.0.0-rc.1
-  serviceAccountName: prometheus-operator
+  serviceAccountName: prometheus
   serviceMonitorSelector:
     matchExpressions:
     - {key: k8s-app, operator: Exists}
@@ -246,6 +246,45 @@ spec:
       name: alertmanager-main
       port: web
 ---
+apiVersion: rbac.authorization.k8s.io/v1beta1
+kind: ClusterRole
+metadata:
+  name: prometheus
+  namespace: monitoring
+rules:
+- apiGroups: [""]
+  resources:
+  - nodes
+  - services
+  - endpoints
+  - pods
+  verbs: ["get", "list", "watch"]
+- apiGroups: [""]
+  resources:
+  - configmaps
+  verbs: ["get"]
+- nonResourceURLs: ["/metrics"]
+  verbs: ["get"]
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: prometheus
+  namespace: monitoring
+---
+apiVersion: rbac.authorization.k8s.io/v1beta1
+kind: ClusterRoleBinding
+metadata:
+  name: prometheus
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: prometheus
+subjects:
+- kind: ServiceAccount
+  name: prometheus
+  namespace: monitoring
+---
 apiVersion: monitoring.coreos.com/v1
 kind: ServiceMonitor
 metadata:
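
After the new ServiceAccount, ClusterRole, and ClusterRoleBinding are applied, you can spot-check that Prometheus has the discovery permissions it needs by impersonating the ServiceAccount with `kubectl auth can-i`:

$ kubectl auth can-i list pods --as=system:serviceaccount:monitoring:prometheus
yes

If this prints `no`, the ClusterRoleBinding was not applied or does not reference the `prometheus` ServiceAccount in the `monitoring` namespace.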