Patching AWS EKS

Bryan Kroger
Jan 26, 2020 · 2 min read

I just started using AWS EKS and found a few little gotchas that were causing issues with the monitoring system. Here's the software stack I'm using:

  • AWS::EKS (1.14)
  • Prometheus Operator (prometheus-operator-8.5.14)
  • metrics-server (latest)

Installing the metrics server:
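
One way to do that is to apply the upstream release manifest directly. This assumes metrics-server v0.3.6, which was the latest release at the time; swap in whatever version you're actually running:

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml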

Once everything is up, I create a tunnel to the prom frontend:

kubectl port-forward --address 0.0.0.0 -n monitoring prometheus-prometheus-operator-prometheus-0 9090 &

When I load it, the kube-proxy targets are all in a down state:

monitoring/prometheus-operator-kube-proxy/0 (0/6 up)

By default, kube-proxy binds its metrics endpoint to 127.0.0.1, so Prometheus can't scrape it from outside the node. The fix is to patch the kube-proxy ConfigMap so it listens on all interfaces, then restart the pods:

kubectl get cm -n kube-system kube-proxy-config -o yaml | \
  sed "s/metricsBindAddress: 127.0.0.1:10249/metricsBindAddress: 0.0.0.0:10249/g" | \
  kubectl apply -f -

kubectl delete pod -n kube-system $(kubectl -n kube-system get pods | grep kube-proxy | awk '{print $1}')
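
A quick sanity check that the new bind address actually made it into the ConfigMap:

kubectl get cm -n kube-system kube-proxy-config -o yaml | grep metricsBindAddress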

This patches the ConfigMap and kicks the kube-proxy pods, which fixes the issue. The second problem I found was that the metrics server wasn't scraping data, so I wasn't able to use top:

# k top nodes
error: metrics not available yet

The fix is to apply this patch:
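
The usual shape of this patch on EKS is to add kubelet flags to the metrics-server container args. A minimal sketch, assuming the stock metrics-server deployment in kube-system and that you're fine resolving nodes by their InternalIP and skipping kubelet TLS verification:

kubectl patch deployment metrics-server -n kube-system --type=json -p='[
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-preferred-address-types=InternalIP"},
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}
]'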

When applied, this will kick the metrics-server pod and enable metrics to start flowing. It takes a minute to scrape the data, but you should start seeing this:

krogebry@cclab8-ht-esx-11 insight-deploy # k top nodes
NAME                             CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
ip-172-100-19-9.ec2.internal     54m          2%     424Mi           15%
ip-172-100-46-148.ec2.internal   166m         8%     802Mi           28%
ip-172-100-57-153.ec2.internal   45m          2%     443Mi           15%
ip-172-100-7-220.ec2.internal    42m          2%     503Mi           17%
ip-172-100-80-160.ec2.internal   43m          2%     450Mi           15%
ip-172-100-92-112.ec2.internal   32m          1%     421Mi           14%
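
If top still comes up empty after a few minutes, checking that the metrics APIService (the one metrics-server registers) is available usually narrows things down:

kubectl get apiservice v1beta1.metrics.k8s.io
kubectl describe apiservice v1beta1.metrics.k8s.io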

There seem to be other complications regarding the ServiceMonitor resources, but I haven't figured those out yet.

Originally published at https://medium.com on January 26, 2020.

