
As someone who's written commercial software in well over a dozen different languages for nearly 40 years, I completely disagree.

Go has its warts for sure. But saying the simplicity of Go is "just virtue signaling" is so far beyond ignorant that I can only conclude this opinion of yours is nothing more than the typical pseudo-religious biases that less experienced developers smugly cling to.

Go has one of the easiest toolchains to get started with. There's no tsconfig, virtualenv or other bullshit to deal with. You don't need a dozen `use` headers just to pin the runtime version, nor to trust your luck with a thousand dependencies that are impossible to realistically audit because nobody bothered to bundle a useful standard library. You don't have multi-page indecipherable template errors, 50 different ways to accomplish the same simple problem, nor arguments about what subset of the language is allowed when reviewing pull requests. There's no undefined behaviour, nor subtle incompatibilities between different runtime implementations fragmenting the language.
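To make that concrete, here's a minimal sketch of what "getting started" looks like with the Go toolchain. The module path `example.com/hello` is a placeholder; only the standard `go` tool is assumed:

```shell
# Assumes only the standard Go toolchain; "example.com/hello" is a placeholder.
mkdir hello && cd hello
go mod init example.com/hello   # go.mod is the project's only metadata file
cat > main.go <<'EOF'
package main

import "fmt"

func main() {
	fmt.Println("hello")
}
EOF
go run .    # compile and run in one step
go build    # or produce a single statically linked binary
```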

The problem with Go is that it is boring and that's boring for developers. But it's also the reason why it is simple.

So it's not virtue signaling at all. It's not flawless and it's definitely boring. But that doesn't mean it isn't also simple.

Edit: In case anyone accuses me of being a fanboy, I'm not. I much preferred the ALGOL lineage of languages to the B lineage. I definitely don't like a lot of the recent additions to Go, particularly around range iteration. But that's my personal preference.


You are comparing Go to Python, JS, and C++, arguably the three most complex languages to build. (JS isn't actually hard, but there are a lot of seemingly arbitrary decisions that have to be made before you can begin.) There are languages out there that are easy to build, have a reasonable std lib, and don't offload the complexity of the world onto the programmer.


> You are comparing Go to Python, JS, and C++, arguably the three most complex languages to build.

No, I'm comparing to more than a dozen different languages that I've used commercially. And there were direct references there to Perl, Java, Pascal, procedural SQL, and many, many others too.

> There are languages out there that are easy to build, have a reasonable std lib

Sure. And the existence of them doesn't mean Go isn't also simple.

> and don't offload the complexity of the world onto the programmer.

I disagree. Every language makes tradeoffs, and those tradeoffs always end up being complexities the programmer has to negotiate. This is something I've seen, without exception, in my 40 years of language agnosticism and part-time language design.


Not just using it daily, but I have scripts I've written literally a decade ago, and which haven't been modified at all, which still work just as well as the day I wrote them. Whereas there's a whole plethora of Python projects I've had to abandon in that time because they were never ported to Python 3.


I didn’t read the GP's comment as a justification.


Where I live we also call it “trump”. Which is why many of us never understood why people would claim that Donald Trump has a “powerful name”.

However it did amuse us that a man full of hot air would be named after the expelling of warm but noxious air.


I think the (originally German) name is derived from trump cards - which are also powerful, but in a somewhat different way...


That’s only a modern-day theory and feels like a somewhat retrofitted interpretation of the facts (from what I’ve seen when researching this myself). There are various spellings of the surname, since names often didn’t have a formal spelling in the early days of record keeping. And some of those spellings began with a ‘T’.

So the more likely scenario is the name was only picked because it sounded similar to their original German family name while being an English-styled spelling.

Given we are talking 30+ years ago though, the best we can do at this stage is speculate.

However even if the card game theory were true, it still doesn’t make Trump a powerful name in the U.K. because the fart connotation is still more prevalent.


Okay, can you please give me a hint as to where that is? As googling "trump fart" just leads to a lot of terrible Youtube videos.


I’m not the GP but in the UK “trumping” is what we call it (amongst other things, lots of fun words for body sounds).


I was with you right up until “quietly doing it without much fanfare”.

Personal opinions of Rust aside, it has gained so much fanfare that “rewrite it in Rust” is now practically a meme.


The other side of that is the back end of what Rust folks are doing. There is a vocal segment that are doing a lot of surface level things. But there are also people quietly building the language and toolchain up to be something that you can do true low level embedded work with while maintaining (most of) the guarantees of Rust. That is what I was alluding to. I don't think "rewrite it in Rust" is always smart or even productive.

edit: It is also worth exploring why and how a systems programming language has generated this much excitement in folks that are "rewriting it in Rust". These people are also in their way making it that much easier for everyone else to transition to Rust, proving these projects work just fine in Rust. I do agree it is a meme, but it is a good one for us all. As an infosec practitioner nothing could make me happier than seeing people excited about a language that eradicates one of the worst and most pernicious classes of C/C++ bugs.


The sad part is that this was inflicted by the industry themselves,

https://www.schneier.com/blog/archives/2007/09/the_multics_o...

> The combination of BASED and REFER leaves the compiler to do the error prone pointer arithmetic while having the same innate efficiency as the clumsy equivalent in C. Add to this that PL/1 (like most contemporary languages) included bounds checking and the result is significantly superior to C.


Thanks for the link!

One more data point backing my theory that we're in the middle(?) of a "computing dark age", where the biggest crap and nonsense dominates everything (and people don't even know how crappy everything is).

Do you think we will ever leave the dark age?

I mean before our AI overlords get in charge and kill and replace all the nonsense we've built, of course.


Not before our lifetimes.

This is a matter of quality, and like everything in computing, quality only matters when money or law is involved. So in a way, returning digital goods for a refund is one way to make companies take quality more seriously; others are stricter liability laws for when exploits occur in the wild.


It's not fair to judge an entire ecosystem full of extremely talented people by the vocal (and insufferable) 1%. Every group has them. What has become a meme has zero relationship to the quality of the thing.


I wasn’t judging anyone. I was just saying Rust isn’t exactly flying under the radar.


> Rather, I mean that nobody's fabbing GB game carts, printing glossy color manuals for them, putting them in boxes, and shipping them to Kickstarter backers.

Maybe not Kickstarter specifically, but they are doing the other stuff.

> Nobody's translating these games into 10 languages.

That's a somewhat unreasonable ask. Given how little money these games will bring in, you can't expect that.

> Nobody's QA-testing the heck out of them to ensure they don't have any obscure edge-case bugs. Etc. (This was why I brought up Planet X3: it's an example of a modern retro-game that is polished in exactly this sense.)

There is QA testing happening, but as above, you can't expect smaller operations to pay people to QA. So either "QA" ends up being demo builds, in which case most people end up with a copy and you can't do the premium carts you're also asking for; or you accept that QA is a smaller operation but still get the beautiful hardware copy.

Even really well-polished games on other platforms with much bigger audiences, like Xeno Crisis for the Mega Drive / Genesis, have had their share of bugs too. Even in the original days of these consoles, when games came from big studios with big budgets to spend on testing, we'd still see new releases with bugs galore.

> But a desire to go through all the effort to create polished modern GB games, is the sort of thing that requires market demand — a willingness to spend real money to buy new GB games (whether digitally or physically.) No hobbyist game developer is going to spend a year on "things that aren't programming" to make a nice shippable product out of their game, if that isn't going to result in at least breaking even on their effort.

This doesn't even happen with new games on current generation consoles. So I don't understand why you're expecting it to happen with retro-consoles.

> And a prerequisite for that demand, is public awareness of the fact that modern GB games are being created — and, more importantly, a public "brand" awareness for the people creating them!

I don't think awareness is the problem. I think most people just don't care. If you're a retro gamer you're probably already aware, and if you're not then you're never going to play these games anyway. And I honestly can't blame people for not caring. There's so much content out there these days that GB games (with all the warts that the GB hardware had, by modern-day standards) simply aren't going to appeal to most people.

I think you're setting your expectations far too high.

> Indie game jams are great and all; but they don't create brand awareness

Again, there are plenty of releases that aren't the results of game jams.


> This doesn't even happen with new games on current generation consoles. So I don't understand why you're expecting it to happen with retro-consoles.

It absolutely does. What do you think https://www.nicalis.com/ as a studio does? They take hobbyists' indie games, acquire distribution rights, and polish them so they can see console release.

Also consider: ports. What is the re-release of Shantae for Switch, if not someone spending a year polishing an (admittedly already polished by 2002 standards) GBC game into a product non-retro-gamers are willing to pay for?

> If you're a retro gamer you are probably already aware, and if you're not then you're never going to play these games anyway.

...why not? I think you're setting your expectations too low.

People pay for (polished) Steam and console releases of RPG Maker games; and those often have far less effort or thought put into the gameplay, replayability, etc. than these games do.

Though the real point of comparison that should be made, is the market for indie commercial homebrew releases for non-portable consoles; which is thriving even among non-retro-gamers, in a way that the market for portables, isn't.

• Consider: there are more people out there commercializing just the ROMhacks of Super Mario World (by doing full engine rewrites of games that were already full asset replacements, to get away from Nintendo's IP, and then selling the results on console stores!) than there are GB/GBC/GBA game authors attempting to commercialize their games.

• Consider: there are tons of game streamers, speedrunners, genre-specific content creators (e.g. horror-game Lets Play-ers), etc, who play new console retro-games if and only if they come packaged in some accessible format. So they'll play Steam or console-store releases of these retro games; they'll play PC-accessible downloadables like Mario Multiverse or PokeWilds; but they won't touch a raw emulator. And, by-and-large, the developers of these new old-home-console titles are aware of that, and produce/port/polish their releases so that they can be consumed in this way, and so can generate virality. I don't see anything like that happening in the new old-portable games space.


> It absolutely does. What do you think https://www.nicalis.com/ as a studio does? They take hobbyists' indie games, acquire distribution rights, and polish them so they can see console release.

They're not the game studio doing it themselves, so it literally proves my point. These people could just as easily do it for GB games too, but they know there's no money in it.

> ...why not? I think you're setting your expectations too low. People pay for (polished) Steam and console releases of RPG Maker games; and those often have far less effort or thought put into the gameplay, replayability, etc. than these games do.

Because those games are cheap as chips and don't require an original Gameboy to play them. Given your point was about polished cart releases, the requirement to own a Gameboy is a pretty big hurdle for people who are only casually interested in retro games.

Then there are the other points I've already outlined: Gameboy graphics, much as I loved them at the time, haven't aged well. Even modern pixel art is very different from the four shades of grey on a 160x144 matrix.

> Though the real point of comparison that should be made, is the market for indie commercial homebrew releases for non-portable consoles; which is thriving even among non-retro-gamers, in a way that the market for portables, isn't.

Thriving is overstating the market. There are a lot of resellers driving prices up, but the market of actual retro gamers is a lot smaller than the bubble suggests. A lot of non-retro gamers bought their NES or PlayStation "minis" but went back to their current-gen PlayStation/Xbox a few weeks later, leaving the retro system to collect dust. Even the emulators bundled with Nintendo's Switch Online membership mostly get played by people who are already retro gamers, while the rest of the Switch's demographic prefer either Nintendo's first-party games or the range of indie offerings.

And even if we take your comment at face value, you're still ignoring my point that the Gameboy has aged probably the worst of any console, except maybe the Atari 2600. Don't get me wrong, I do love my Gameboy. But serious concessions were made in its hardware design to facilitate its long battery life. This made the device a fantastic handheld in the 90s but a terrible platform for modern gamers who aren't already bought into retro gaming. Sure, you might get the odd non-retro gamer picking it up for nostalgia purposes (like with the NES Classic) but that's not going to be a sustainable source of income.

> Consider: there are more people out there commercializing just the ROMhacks of Super Mario World (by doing full engine rewrites of games that were already full asset replacements, to get away from Nintendo's IP, and then selling the results on console stores!) than there are GB/GBC/GBA game authors attempting to commercialize their games.

Are there? Do you actually have some data to back up that claim? Have you done any in-depth analysis here, or are you just pulling guesstimates out of your arse? And even if you are correct (which I doubt), what difference does it make? The two aren't related. They're not even mutually exclusive.

> Consider: there are tons of game streamers, speedrunners, genre-specific content creators (e.g. horror-game Lets Play-ers), etc, who play new console retro-games if and only if they come packaged in some accessible format. So they'll play Steam or console-store releases of these retro games; they'll play PC-accessible downloadables like Mario Multiverse or PokeWilds; but they won't touch a raw emulator. And, by-and-large, the developers of these new old-home-console titles are aware of that, and produce/port/polish their releases so that they can be consumed in this way, and so can generate virality. I don't see anything like that happening in the new old-portable games space.

There are absolutely shit loads of retro gaming streamers out there too. In fact, I'm doing a stream myself tonight.

Disclaimer: as well as being a retro gamer and streamer myself, I know a number of relatively high profile people in the gaming and retro-gaming circles. Some who are resellers, some who are games researchers and some who are games journalists too. A couple of which are also really big fans of the GB, GBC and GBA so frequently get sent new games for review (albeit those games don't usually get published in magazines because, and I quote "not enough people are interested in the Gameboy". But they will publish reviews online).


Same here.

I find I need to listen to most of his albums a couple of times before I fall for them whereas Syro clicked first time.


Daft Punk's first two albums weren't like that. They were a little darker and had a rawer production style, with very simplistic (read: not very diverse) sounds running through them.

To be clear, this isn't a dig at Daft Punk. I love their early stuff. I'm just making a point that you cannot distill an artist or genre down to a single sentence and use that as an explanation for why you like something.


Why do some people prefer Daft Punk to Led Zeppelin? Or Tchaikovsky, Frank Sinatra, Taylor Swift, Bolt Thrower, Sex Pistols, Tupac or any other artist?

People like what they like and are interested in what they're interested in.

A thousand people could post intellectual insights into their personal preferences but a thousand more could cite the exact same arguments as reasons why they don't like the same music. It's just part of the colourful tapestry of human culture.


People's individual preference is a thing, but it does not explain anything.

But one can seek to understand the aesthetics of something nonetheless.


You cannot explain it though.

I could say I'm drawn to Aphex Twin because it all sounds really diverse but someone else would say they disagree and prefer rock music because Aphex Twin sounds really samey.

The problem with conversations about personal preference is it's not a rational decision. You either like something or you don't. Someone could try to intellectualize why they like something but ultimately that reasoning only applies to them.


Then why talk about it at all? I disagree with this argument. It has observable qualities, and those qualities can be discussed. Preference is one thing, but there's more to discussions about art than what one prefers.


I agree however the question asked was specifically about preference.


try to think of it as an artist with complete and total mastery of rhythm and electronic music production at play. play being the key word here-- he's having fun


Making containers was easy long before Docker came along:

- FreeBSD Jails

- Solaris Zones

- Proxmox (which was an abstraction over OpenVZ, back before LXC came along)

In fact because of all of the above, I was a latecomer to Docker and didn't understand the appeal.

What Docker changed was that it made containers "sexy", likely due to the git-like functionality. It took containers out of the sysadmin world and into the developer's world. But it certainly didn't make containers any easier in the process.


It did make it easier, at least the barrier to entry. I remember reading about jails years ago, when I had a lot less sysadmin knowledge, and I couldn't wrap my head around it.

With docker, many people still can't wrap their head around how it works and will do stupid things if they need to run them in a serious environment, but they can still run a bunch of containers to run some hard to install software easily on their local machine!

Sure, jails were easy in some ways, but boiling Docker's success down to sexiness, instead of usefulness, sounds a bit like yet another "Dropbox is just rsync". Docker wasn't solving the isolation issue (which had been obviously solved for years) but mostly the distribution issue.


So you mean you could take your FreeBSD Jails configuration, upload it to a well-known public website like Docker Hub, then get someone on Windows or Mac to transparently install the image and run it with a few command lines?

Because Docker containers are called containers for this reason: the name comes from the shipping-container analogy.


Many years before Linux containers, FreeBSD jails were easily packaged up via tar, deployed via scp, and started with a minimal script. There wasn't much hype; it just worked. It was an excellent software packaging and distribution tool.
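The workflow being described can be sketched roughly like this; the host names, paths, jail name and IP address are all placeholders, and exact jail(8) parameters varied between FreeBSD versions:

```shell
# On the build host: package the jail's root filesystem.
# "myjail", "/jails/myjail" and "deploy@target" are placeholders.
tar -czf myjail.tar.gz -C /jails/myjail .

# Ship it to the target host.
scp myjail.tar.gz deploy@target:/jails/

# On the target: unpack and start the jail with a minimal command.
ssh deploy@target '
  mkdir -p /jails/myjail &&
  tar -xzf /jails/myjail.tar.gz -C /jails/myjail &&
  jail -c name=myjail path=/jails/myjail \
       host.hostname=myjail ip4.addr=192.0.2.10 \
       command=/bin/sh /etc/rc
'
```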

There wasn't a hub that I recall, nor was there tooling to use VMs so they could run on Windows/Mac. However, the main challenge, being able to distribute an "image" without requiring VM overhead, was solved elegantly. It just wasn't Linux, so it didn't make news.


Running Docker on non-Linux platforms requires a Linux VM to run in the background. It's not as cross platform as people make out. The other container technologies can be managed via code too, that code can be shared to public sites. And you can run those containers in a VM too, if you'd want.

What's more, with ZFS you could not only ship the container as code, but even the container as a binary snapshot, very much like Docker's push/pull but predating Docker. Even on Linux, for a long time before Docker, you could ship container snapshots as tarballs.
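Sketched in shell, that snapshot workflow might look like this. The pool/dataset name and target host are placeholders, and it requires ZFS on both ends, so treat it as an illustration rather than a recipe:

```shell
# Snapshot the container's dataset. "tank/jails/myjail" is a placeholder.
zfs snapshot tank/jails/myjail@v1

# Ship the snapshot to another host as a binary stream,
# much like a docker push/pull of an image.
zfs send tank/jails/myjail@v1 | ssh target zfs recv tank/jails/myjail

# Later releases only need the delta between snapshots,
# analogous to pulling just the changed image layers.
zfs snapshot tank/jails/myjail@v2
zfs send -i @v1 tank/jails/myjail@v2 | ssh target zfs recv tank/jails/myjail
```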

Also worth mentioning is that early versions of Proxmox even had (and likely still does) a friendly GUI to select which image you wanted to download, thus making even the task of image selection user friendly. This was more than 10 years ago, long before Docker's first release and something Docker Desktop still doesn't even have to this day.

> Because docker container are called container for this reason. That comes from the boat container analogy.

The term computer "container" predates Docker by a great many years. Containers were in widespread use on other UNIXes, with Linux being late to the game. It was one of the reasons I preferred to run FreeBSD or Solaris in production in the 00s despite Linux being my desktop OS of choice. Even when Linux finally caught up with containerisation, Docker was still a latecomer to Linux.

Furthermore, for a long time Docker wasn't even containers (it still isn't strictly that now, but it at least offers more in the way of process segregation than the original implementations did). Albeit this was a limitation of the mainline Linux kernel, so I don't blame Docker for that. Whereas FreeBSD Jails and Solaris Zones offered much more separation, even at the network level.

If we are being picky about the term "container" (not something I normally like to do) then Docker is the least "container"-like of all containerisation technologies available. But honestly, I don't like to get hung up on jargon because it helps no-one. I only raise this because you credited the term to Docker.

---

Now to be clear, I don't hate Docker. It may have its flaws, but there are aspects of it I really like; and thus I do use it regularly on my Linux hosts these days despite my original reluctance to move away from Jails (Jails are still much nicer if you need to do anything complicated with networking, but Docker is "good enough" for most cases). However, what I really dislike is this rewriting of history where people seem to think Docker stood out as a better-designed technology, either from a UX or an engineering perspective.

I personally think what made Docker successful was being in the right place at the right time. Linux was already a popular platform, containers were beginning to become widely known outside of sysadmin circles, but Linux (at that time) still sucked for containerisation. So Docker got enough hype early on to generate the snowball effect that saw it become dominant. But let's also not forget just how unstable it was: for a long time it was frequently plagued with regression bugs from one release to another, which caused a great many sysadmins to groan whenever a new release landed.

(sorry for the edits, the original post was a flow of thoughts without much consideration to readability. Hopefully I've tidied it up)


If anything, this is a testament to the failure of previous solutions to popularize it.

Docker invented absolutely zero on the OS side and reused what LXC did, but the invention here is not "putting things in containers" but "making it easy to put things in containers" and "making it easy to run those containers". Every solution before that required a bunch more knowledge.

> Running Docker on non-Linux platforms requires a Linux VM to run in the background. It's not as cross platform as people make out.

Which people? I've never seen anyone say Docker makes it easy to run cross-platform stuff, and it was always one of its pain points.


> If anything this is testament to the failure of previous solutions to popularize it.

Maybe. But I'd rather not argue about popularity in a conversation about technical merit. The two don't necessarily go together, and popularity is a subjective quality. Nothing good ever comes from conversations about popularity and preference.

> Every solution before that required a bunch more knowledge.

I'm not sure I fully agree with that. Docker has a lot of bespoke knowledge, whereas the previous solutions built on top of existing knowledge. Where they differed was that Docker had an easier learning curve for people with zero existing systems knowledge. Which is something I didn't really appreciate until reading these responses, possibly because I'm an old-timer developer: I want to understand how my code works at a systems level, so I made it my job to understand the OS and even the hardware too (though that's gotten harder as tech has progressed; it was expected of developers when I started out).

> Which people ? I never seen anyone saying Docker makes it easy to run cross platform stuff, and it was always one of it's pain points.

The comment I replied to said: "get someone on Windows or Mac transparently install the image and run it with a few cmdlines"


>> If anything this is testament to the failure of previous solutions to popularize it.

>Maybe. But I'd rather not argue about popularity in a conversation about technical merit. The two aren't mutually inclusive and popularity is a subjective quality. Nothing good ever comes from conversations about popularity and preference.

The popularity is directly related to the technical merit of it being very easy to start with, both to run and to create containers. It isn't "just" popular; it got popular because it was a solution that a developer with near-zero infrastructure skill could apply. We have countless examples of solutions winning almost purely on having a low barrier to entry, and Docker was just that for containers.

The previous solutions ignored that and assumed the target audience was a mildly competent sysadmin, not a developer who has no idea what a UID is, let alone the rest of the ops stuff.

And it got buy-in on the other side of the fence too, as a sysadmin, instead of installing a spider's web of PHP or Ruby deps, now just had to install Docker and deploy a container.

>> Every solution before that required a bunch more knowledge.

> I'm not sure I fully agree with that. Docker has a lot of bespoke knowledge whereas the previous solutions built on top of existing knowledge. Where they differed was that Docker was an easier learning curve for people with previously zero existing systems knowledge.

Well, you got the point. By the time you need that knowledge (and I'd argue debugging a Docker container is in every way harder than just having a process on the system) you've already bought into the ecosystem. There is no initial hurdle to go through like there was with previous systems trying to do the same thing, even if you end up with a harder-to-debug end result.

Just like with other things: PHP got popular because it was easier than anything CGI-related ("just write code inside your HTML"), Ruby got popular off Rails and the 15-minute blog engine demo, and Python is just all-around easy to learn.


> The previous solutions ignored that, and assumed the target audience is mildly competent sysadmin, not a developer that has no idea what UID is, let alone the rest of ops stuff.

But that doesn't mean that the previous solutions weren't popular for others outside of the developer community. The comments here are heavily developer orientated but that's only part of the story in terms of the wider container ecosystem.

> Just like with other things, PHP got popular because it was easier than anything CGI related, "just write code inside your HTML", Ruby got popular off Rails and 15 minute blog engine demo, Python being just all around easy to learn.

The point I was making wasn't that "Docker doesn't deserve popularity" nor any confusion as to why it's popular. It was saying that the stuff that came before it was also easy.

Your example about languages is apt, because PHP is an easier language for people from a zero-coding background. But if your background is in C, then PHP is going to be much harder to use compared to learning Nim, Zig or Rust.

Saying the containerisation solutions that came before were garbage, as people have done, isn't accurate. I'm not being critical of Docker; I'm defending the elegance of Jails. It's just that elegance is exposed in a different way and for a different audience to who Docker targets.


i think a lot of previous solutions focused on the "5x" engineer that was willing to comb through manpages and dig through the source (or at least the Makefile) if something unexpected happened.

many, many, MANY engineers are not like that.

many just want to build and push their features, and that's fine.

Docker knew that LXC was onto something and focused on the latter audience; that combined with their VC funding after they hit a critical mass is why they are heralded as having "invented" containers (even though they didn't).


and what I really dislike is dissing a technology because piece of it had existed in other forms before.

yes, zones on solaris offered a lot of the modern SDN networking stuff. was it popular? no.

yes, with zfs, in theory, you could ship a binary file and the other side can load it nicely (if you're thinking send/recv). was it popular to ship things like that in the open, public, in an easy to use fashion? no.

just admit docker popularized a lot of these and let's move along. while the tech might have existed, the previous ecosystems sucked and docker changed this for good.


> and what I really dislike is dissing a technology because piece of it had existed in other forms before.

How am I "dissing" Docker here? The comments before me are saying other technologies weren't easy or as feature rich. I'm saying they were. That's not a criticism of Docker. It's just a fact about other technologies.

> yes, zones on solaris offered a lot of the modern SDN networking stuff. was it popular? no.

Popularity means jack shit about the quality of a product. Reddit, Twitter and Facebook are popular but the UX is appalling on each of them. Just as plenty of really well built technologies never gain traction.

I suspect the reason Zones wasn't popular was because Solaris wasn't popular. Had Zones or Jails existed in the Linux mainline kernel at the same time as they had in Solaris/FreeBSD then we might not have seen a need for Docker. Or maybe it might still be around and popular...who knows? It's pointless to speculate over why something is popular because it's unscientific and unprovable. But we can discuss the UX and capabilities.

> yes, with zfs, in theory, you could ship a binary file and the other side can load it nicely (if you're thinking send/recv). was it popular to ship things like that in the open, public, in an easy to use fashion? no.

I can't speak for others did or did not, but I certainly did. (also see my comment above regarding popularity).

> just admit docker popularized a lot of these and let's move along.

I wasn't arguing that Docker didn't popularise these things. I was arguing against the point that the other tools were sub-par.

> while the tech might have existed, the previous ecosystems sucked and docker changed this for good.

And here's the crux of the problem: you're conflating popularity with technical excellence. They're two unrelated metrics.


Just for clarification, because zoltan isn't making very compelling arguments.

Docker made containers easy and effective. Solaris Zones were not as good as Docker is, and BSD Jails are/were not as good or as easy to use as Docker is. Popularity has nothing to do with it, except that Docker's popularity is an indication that it was revolutionary in the way it made these technologies accessible to a very large professional audience.

Docker was not created in isolation; it was inspired by jails and zones and all the fancy new features that were added to the Linux kernel at the time.

Using just the keywords FROM, ADD and CMD, you can make a container definition that effectively isolates a runtime for just about any application in 3 lines. Beyond those couple of simple keywords, all you need is absolutely basic Linux knowledge, the level you can teach any developer in an afternoon.
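A minimal sketch of the kind of Dockerfile being described (the base image and script name are made up for illustration):

```dockerfile
# Base image supplies the entire runtime environment
FROM python:3.12-slim
# Copy the application into the image
ADD app.py /app.py
# Command run when the container starts
CMD ["python", "/app.py"]
```

Build and run it with `docker build -t myapp .` and `docker run myapp`; nothing beyond that is required of the developer.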

There's no need to pollute that developer's mind with any other system administration garbage. Nothing about networking, policies, filesystems, whatever. Just basic bash and a couple of keywords.

Then when you want to go to production, you just hand the shit your developers wrote over to a professional system administrator and they'll make it run perfectly at any scale. It's magic. Before Docker the world was darkness and bullshit, and after Docker the world was drenched in light and all that is good.

The fact that it's 2022 and there's still people that are going "hur-dur Solaris zones, BSD jails amiright" as if any of those technologies have any relevance is ridiculous.

Docker is technologically excellent.


I was with you right up until this part:

> The fact that it's 2022 and there's still people that are going "hur-dur Solaris zones, BSD jails amiright" as if any of those technologies have any relevance is ridiculous.

Having diversity in the computing ecosystem is a good thing, not bad.

I'll take your point that Docker brought containers to the developers (frankly, I made that point myself) but that doesn't mean that Jails doesn't solve some problems that Docker (currently) struggles with. Nor does Docker's success mean that a little competition isn't healthy for the wider industry.

Dismissing the stuff that went before it as "system administration garbage" because it was targeted at a different audience to yourself is a really poor attitude, in my opinion. Especially when there are countless examples of audiences other than developers also needing to make use of software. Frankly, I thought by now we were past the sysadmin vs developer flamewars. But clearly not.

Aside from that minor rant, I do want to thank you for your post. It was an informative read.


My apologies, I meant relevance to the problem that Docker solves, which is enabling developers to neatly specify and package their dependencies. I am not trying to diminish jails' and zones' usefulness to system administrators. I'm just saying that if you put Docker in a comparison list with other technologies, jails and zones wouldn't even be on that list.

The annoyance comes from system administrators looking at the set of technologies inside Docker and saying "we already have that", and then just assuming Docker must be some sort of marketing scheme. I deployed docker in my organisation within a week of its first (beta?) release, when all of its "marketing" was a single blog post.

Docker solved an enormous real problem in the software industry, even if from a system administrators perspective it's just a new way of packaging applications, as there have been many in the past and probably will be many in the future.


Oh I never meant any of my comments to undermine Docker. While I do have some specific frustrations with Docker, the same is true with any technology stack: Jails and Zones included.

I'd never describe Docker as being a marketing gimmick. It was definitely a "right time, right place" tool. But that speaks more about how the market (and particularly Linux) was yearning for something better.

Thanks for the interesting conversation :)


Docker is the best because it did something its predecessors could not: it made containers accessible, easy to run and easy to share, on Linux. Now the technology has industry standardization and so much inertia that it's surviving the monetization drive by Docker (the company). The existing software out there was sysadmin stuff because it was mostly DIY, or required learning to administer and build tooling for another OS.


It's worth noting that Docker was primarily a godsend for people working in scripting languages like Ruby or Python, which have very messy packaging systems, depend on tons of native Linux libraries, and so on.

For people working on the JVM the world was in some sense already 'drenched in light'. You could just send the sysadmin a fat jar you developed on Windows or macOS and tell them to deploy it, done. Or maybe you'd use an app server, so you'd send a less fat jar and they'd deploy it via a GUI and it already gets high level services like db connections, backups, message queues etc.

Also, Docker doesn't really solve the common case of an app that depends on a DB, maybe email etc. Those are services that need administration, you can't just start up a random server and expect things to go well. At least you need backups, proper access control and so on.
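To make that concrete, here's a hypothetical docker-compose sketch (the image names are illustrative): starting a database next to an app is one stanza, but nothing in it addresses backups or access control.

```yaml
# docker-compose.yml -- hypothetical sketch, not a production setup
services:
  app:
    image: example/app:latest   # illustrative application image
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # secret management not solved here
    # volumes, backups and access control are still left to an administrator
```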

So deployment difficulty was very much dependent on what ecosystem you were in.


That's certainly true and describes my circumstances at the time. But it's also good to note that it is great for C/C++ dependencies as well: our GIS applications require PROJ, and our machine learning projects require various Nvidia CUDA libraries.

Also, just last week I needed new smbd features, and the only way to deploy a recent smbd version whilst retaining sanity on Ubuntu seems to be to just use Docker. Normally there's a PPA, but there wasn't in this case for some reason.


That's _before_ we consider that most folks aren't running BSD or Solaris, in any capacity, and that jails were never truly ported over to Linux.


Popularity has won every single time in history over technical excellence. I know this is HN and a techbro echo chamber, but technical excellence is not even in the top three things that people care about.


Exactly my point. People in here are conflating popularity with it being better.


> Running Docker on non-Linux platforms requires a Linux VM to run in the background. It's not as cross platform as people make out. The other container technologies can be managed via code too, that code can be shared to public sites. And you can run those containers in a VM too, if you'd want.

Right, and this is something else that Docker made incredibly easy to do as well. It's almost transparent now; so much so that you need to use nsenter to connect to the underlying VM on Mac and Windows.


Jails and Zones are kernel mechanisms; they aren't any easier than cgroups/namespaces are (to be fair, yes, they're easier than Linux's tools by default, albeit less flexible). What Docker changed was absolutely to make things "easy": a Dockerfile is really no more than a shell script, the docker command line is straightforward, and the Docker registry is filled with good stuff ready for use.


HP-UX vaults, introduced around 1999.


Yes: "What Docker changed was that it made containers 'sexy', likely due to the git-like functionality."

Post-cloud era, there were all those new markets to tap into. Jails etc. are like a hammer for nails; why, when or how to build a city is something else entirely.

-but, probably git-like sexy functionality indeed. yeeha

