I’m having an issue with what I believe to be the k8s autoscaler.
After a recent deploy, the autoscaler launched a new node (I can see the instance in EC2, where our k8s deployment is hosted), but it doesn’t show up when I run kubectl get nodes:
$ kubectl get nodes
NAME                             STATUS    ROLES     AGE    VERSION
ip-172-20-110-212.ec2.internal   Ready     master    322d   v1.5.1
ip-172-20-129-59.ec2.internal    Ready     master    322d   v1.5.1
ip-172-20-153-170.ec2.internal   Ready     <none>    322d   v1.5.1
ip-172-20-160-119.ec2.internal   Ready     master    322d   v1.5.1
ip-172-20-162-94.ec2.internal    Ready     <none>    316d   v1.5.1
ip-172-20-166-194.ec2.internal   Ready     <none>    322d   v1.5.1
ip-172-20-79-1.ec2.internal      Ready     <none>    112d   v1.5.1
ip-172-20-92-163.ec2.internal    Ready     <none>    322d   v1.5.1
Further, a kube-proxy pod whose name matches this “missing” node’s IP does show up, but it is killed and relaunched every 30 seconds:
$ kubectl get pods
NAME                                        READY     STATUS    RESTARTS   AGE
kube-proxy-ip-172-20-181-122.ec2.internal   1/1       Running   0          17s