It's more like a ping-pong. Things start off simply, but over time as the layers of abstraction pile up, things become brittle and unworkable.
I view containers as more of a reworking of a key computational abstraction (VMs) than an evolution of them. We finally have operating systems with enough inter-process isolation, sufficiently capable filesystems (layering), etc. that we can throw out 80% of the other unnecessary junk of VMs like second kernels, duplicate schedulers, endless duplication of standard system libraries, etc.
So it's more like we've hacked/refactored virtualization into a more usable state, and gotten rid of a lot of useless garbage that it turns out we didn't actually need. It's a lot like how a big software system evolves, now that I think about it.
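To make the "layering" half of that concrete, here's a minimal sketch (my own illustration, not tied to any particular runtime; the paths are made up and it needs root on a reasonably recent Linux kernel, 3.18+): an overlayfs mount stacks a read-only base under a writable layer, which is how container images can share a common base instead of each shipping a whole disk image.

    /* Sketch only: filesystem layering via an overlayfs mount.
     * Paths are placeholders; run as root on Linux 3.18+. */
    #include <stdio.h>
    #include <sys/mount.h>

    int main(void) {
        /* /base stays read-only, writes land in /upper, and /merged
         * shows the combined view. */
        const char *opts = "lowerdir=/base,upperdir=/upper,workdir=/work";
        if (mount("overlay", "/merged", "overlay", 0, opts) != 0) {
            perror("mount overlay");
            return 1;
        }
        printf("/merged = /base plus whatever changes land in /upper\n");
        return 0;
    }

Twenty containers built on the same base layer only pay for that base once; no second kernel or duplicate disk image involved.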
I'm genuinely curious, although a bit naive WRT containers. Outside of an aesthetic preference (for being able to remove 80% of the unnecessary cruft), what is the advantage of containers? I was under the impression that VM overhead was marginal in terms of today's computing.
I ask because I'm familiar with VMs, having worked with them extensively for a number of years. VMs work quite well for any application I've needed, so what would be the benefit of switching to containers? I've got lots to do, and lots to learn, but I can't see learning containers (and being out of sync with the rest of my coworkers) being a priority.
But I'm willing to change my mind if there's a concrete benefit. Right now, VMs work just fine, but maybe there's something I'm missing...
VM overhead isn't trivial. It remains a pretty big factor in terms of cost bloat for CPU-bound stuff. Also, VMs take a godawful long time to start up; if you care about, say, responding to load within ten seconds, VMs aren't a great choice.
They're fine for a lot of things, of course. I use them all the time. But I use containers for other things.
I recall several reliable testers confirming that the CPU overhead of virtualisation was negligible, somewhere around 2%. Unfortunately I could not quickly find those papers now, but I did find an old VMware whitepaper[1] showing they had ~7% overhead 5+ years ago, which sounds about right considering the kind of advancements they would have made in half a decade.
Sounds feasible, but CPU usage isn't really talked about as an advantage of containers.
I expect startup time and memory usage would be lower, but to my mind the advantages are mainly around flexibility, e.g. how long it takes to create or upload an image file; how long it takes to set up a minimal infrastructure with several components on a single EC2 instance; decoupling the operating system patch cycle from the app deployment image generation cycle; etc.
It's just MUCH more memory efficient to run containers, and VMs typically have worse I/O throughput as well.
CPU scores are fine though.
As an example, I am running around 20 containerized servers on my laptop in a 4GB VM, a workload that would typically be spread across 20 distinct VMs on one or more hypervisors. It's not very fast, but the density of servers you can put on your hardware is MUCH higher.
Ah, sorry! I didn't think you meant literally "10 seconds"; I was assuming you just meant quickly (a few minutes).
I can't really think of a use case, though, where someone would need more capacity in under 10 seconds. Maybe if you only intend to scale horizontally with a bunch of 500MB instances and have little to no room to set an appropriate scaling threshold? What would a couple of examples be? With the apps I've seen over the past several years, they generally have scaling thresholds at 'X' resource, and 3 minutes is more than enough to provision extra capacity for their needs.
Containers are just a way to launch processes without polluting your local namespace or system. It's a way to say "hey, this stuff shouldn't interfere with anything else".
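For anyone who hasn't looked under the hood, here's a minimal sketch of what that means on Linux (my own illustration, assuming root and a Linux box; not the code of any particular runtime): clone() with a couple of namespace flags launches a process whose hostname and PID table are invisible to the rest of the system, which is the primitive container runtimes build on.

    /* Sketch only: process isolation via namespaces. Needs root
     * (or CAP_SYS_ADMIN); Linux-specific. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static char stack[1024 * 1024];

    static int child(void *arg) {
        /* Inside new UTS + PID namespaces: this hostname and these
         * PIDs don't leak out to the host. */
        sethostname("sandbox", 7);
        printf("in the 'container': pid=%d\n", (int)getpid()); /* prints pid=1 */
        return 0;
    }

    int main(void) {
        /* Stack grows down, so pass the top of the buffer. */
        pid_t pid = clone(child, stack + sizeof(stack),
                          CLONE_NEWUTS | CLONE_NEWPID | SIGCHLD, NULL);
        if (pid == -1) {
            perror("clone");
            return 1;
        }
        waitpid(pid, NULL, 0);
        return 0;
    }

Same kernel, same scheduler, same libraries on disk; the only thing you've added is a boundary around what the process can see.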
Well, we've had various containers, such as BSD jails, for decades. The useless garbage was never necessary. Seems like the ping-pong happens whenever the "kids these days" don't know why the status quo is the way it is and then have to relearn the old lessons.