
The Kubernetes folks' solution to this is the addition of `kubectl debug` (added as `kubectl alpha debug` in Kube 1.18, graduated to `kubectl debug` in Kube 1.20) as an alternative to `kubectl exec`. It takes an existing Pod and lets you attach a new container with whichever image you like, so that your production images don't need debugging tools.
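A minimal sketch of what that looks like in practice (the pod name `myapp`, target container name `myapp-container`, and `busybox` image are placeholders; this needs a live cluster to run):

```shell
# Attach an ephemeral debug container running busybox to the existing
# pod "myapp", sharing the process namespace of "myapp-container" so
# you can inspect its processes, and open an interactive shell.
kubectl debug -it myapp --image=busybox --target=myapp-container
```

The pod's original containers keep running untouched; the ephemeral container is added alongside them.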


Also, before `kubectl debug`[0] existed, you could always edit a Deployment to add a sidecar container of `alpine` or `busybox` and enable process namespace sharing[1] to get some leverage for debugging.
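Roughly, the pod spec edit looks like this (names and images here are illustrative, not from the thread):

```yaml
# Hypothetical Deployment pod-template snippet: add a busybox sidecar
# and share the process namespace so the sidecar can see (and, with
# SYS_PTRACE, attach to) the app container's processes.
spec:
  template:
    spec:
      shareProcessNamespace: true
      containers:
      - name: myapp
        image: registry.example.com/myapp:latest
      - name: debug
        image: busybox
        command: ["sleep", "infinity"]
        securityContext:
          capabilities:
            add: ["SYS_PTRACE"]
```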

There are a bunch of other options in the docs as well.

[0]: https://kubernetes.io/docs/tasks/debug-application-cluster/d...

[1]: https://kubernetes.io/docs/tasks/configure-pod-container/sha...


That would trigger a restart, no? What if you want to live debug without restarting the scratch container?


Yup, it would trigger a restart -- that first link details some other options if restarting the workload is absolutely not an option: you can debug a copy of the pod, start a privileged pod, or jump onto the node and enter the namespace manually. At the end of the day, all these containers are just sandboxed+namespaced processes running on a machine somewhere, so if you can get onto that machine with the appropriate permissions, you can get to the process and debug it.
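Sketches of those two approaches (pod names, container IDs, and paths are illustrative; both require a cluster or node access):

```shell
# Debug a copy: clone the pod with an extra debug container attached,
# leaving the original pod running undisturbed.
kubectl debug myapp -it --image=busybox --copy-to=myapp-debug

# Or, on the node itself: look up the container's host PID via the CRI
# and enter its namespaces directly. <container-id> is a placeholder.
PID=$(crictl inspect --output go-template --template '{{.info.pid}}' <container-id>)
nsenter --target "$PID" --mount --uts --ipc --net --pid sh
```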

Of course, if you're in a tightly managed environment and can't get on the nodes themselves things get harder, but probably not completely impossible :)



