Preparation

Create a new namespace for monitoring with the following command:

kubectl create ns monitoring

Then install Helm as described in the Helm package manager post, and add the prometheus-community repository with the following commands:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
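After updating, it can be worth confirming that the chart is actually available from the new repository; a quick optional check:

```shell
# List the kube-prometheus-stack chart from the newly added repo
helm search repo prometheus-community/kube-prometheus-stack
```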

Create a secret containing the etcd client certificates

Run the following as the root user:

sudo su
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl -n monitoring create secret generic etcd-client-cert --from-file=/etc/kubernetes/pki/etcd/ca.crt --from-file=/etc/kubernetes/pki/etcd/healthcheck-client.crt --from-file=/etc/kubernetes/pki/etcd/healthcheck-client.key
exit
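Before moving on, you can verify the secret was created and holds the three expected files; a quick sanity check:

```shell
# The secret should list ca.crt, healthcheck-client.crt, and healthcheck-client.key
kubectl -n monitoring describe secret etcd-client-cert
```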

Create the kube-prometheus-stack Helm values file

Create the file and fill it with the following:
nano kube-prometheus-stack-helm-values.yaml

alertmanager:
  enabled: false

grafana:
  defaultDashboardsTimezone: Asia/Jakarta
  adminPassword: P@ssw0rd
  image:
    repository: grafana/grafana
    tag: "8.2.7"
  persistence:
    enabled: true
    storageClassName: "local-path"
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi

prometheus:
  prometheusSpec:
    secrets: ['etcd-client-cert']
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: "local-path"
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 10Gi

kubeEtcd:
  service:
    targetPort: 2379
  serviceMonitor:
    scheme: https
    insecureSkipVerify: false
    serverName: localhost
    caFile: /etc/prometheus/secrets/etcd-client-cert/ca.crt
    certFile: /etc/prometheus/secrets/etcd-client-cert/healthcheck-client.crt
    keyFile: /etc/prometheus/secrets/etcd-client-cert/healthcheck-client.key

kubeScheduler:
  service:
    targetPort: 10259
  serviceMonitor:
    https: true
    insecureSkipVerify: true

The local-path storageClassName comes from the installation covered in the post on adding the local-path provisioner.
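Before installing the chart, it does no harm to confirm that the StorageClass referenced in the values file actually exists:

```shell
# Verify the local-path StorageClass is registered in the cluster
kubectl get storageclass local-path
```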

Install kube-prometheus-stack with Helm

helm install monitoring prometheus-community/kube-prometheus-stack --namespace monitoring -f kube-prometheus-stack-helm-values.yaml
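Once the install command returns, the stack's pods take a little while to start; a simple way to watch progress, assuming the release name monitoring used above:

```shell
# Watch the stack come up; all pods should reach Running status
kubectl -n monitoring get pods -w

# Check the Helm release status
helm -n monitoring status monitoring
```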

Create the kube-prometheus-stack Ingresses

Create Ingresses for the subdomains grafana.syslog.my.id and prometheus.syslog.my.id with the following manifest file:
nano kube-prometheus-stack-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: monitoring
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: grafana.syslog.my.id
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: monitoring-grafana
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus
  namespace: monitoring
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: prometheus.syslog.my.id
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: monitoring-kube-prometheus-prometheus
            port:
              number: 9090

Apply the manifest with the following command:

kubectl apply -f kube-prometheus-stack-ingress.yaml

Then check the kube-prometheus-stack Services and Ingresses with
kubectl -n monitoring get svc and kubectl -n monitoring get ingress
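Assuming DNS (or /etc/hosts entries) already points both subdomains at the ingress controller, a quick smoke test of the two endpoints might look like this:

```shell
# Grafana's login page should answer with HTTP 200
curl -I http://grafana.syslog.my.id/login

# Prometheus exposes a health endpoint that should print "Healthy"
curl -s http://prometheus.syslog.my.id/-/healthy
```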

Fixing the issues that appear


Update the kube-proxy ConfigMap

kubectl -n kube-system edit cm kube-proxy

Set the metricsBindAddress key to 0.0.0.0:10249:

    metricsBindAddress: "0.0.0.0:10249"
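The kube-proxy pods only read this ConfigMap at startup, so restart them for the new metricsBindAddress to take effect (this assumes the standard kube-proxy DaemonSet created by kubeadm):

```shell
# Roll the kube-proxy DaemonSet so each pod picks up the updated ConfigMap
kubectl -n kube-system rollout restart daemonset kube-proxy

# Wait until all kube-proxy pods have been recreated
kubectl -n kube-system rollout status daemonset kube-proxy
```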

Change the listen address 127.0.0.1 to 0.0.0.0 in the static pod manifests

sudo nano /etc/kubernetes/manifests/etcd.yaml
    # Change this line:
    # - --listen-metrics-urls=http://127.0.0.1:2381
    # to:
    # - --listen-metrics-urls=http://0.0.0.0:2381
sudo nano /etc/kubernetes/manifests/kube-scheduler.yaml
    # Change this line:
    # - --bind-address=127.0.0.1
    # to:
    # - --bind-address=0.0.0.0
sudo nano /etc/kubernetes/manifests/kube-controller-manager.yaml
    # Change this line:
    # - --bind-address=127.0.0.1
    # to:
    # - --bind-address=0.0.0.0
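The kubelet restarts static pods automatically after their manifests change, so no manual restart is needed. To confirm the control-plane components came back and the etcd metrics endpoint is reachable, something like the following (run on the control-plane node) should work:

```shell
# Static control-plane pods should be Running again after the edits
kubectl -n kube-system get pods -l tier=control-plane

# The etcd metrics endpoint should now answer on port 2381
curl -s http://127.0.0.1:2381/metrics | head
```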
