At my job we do native C and C++, some Java, some C#, scripting in Shell, Python, and Perl. When the left-pad incident happened, someone mentioned it to the room; we all looked it up and spent a good 15 minutes mind-boggled, laughing and being grateful we weren't web devs. "Wait, you're telling me these people need NPM and GitHub to deploy? Seriously?"
I'm not the poster you're replying to, but I think I understand it.
npm is not just their package management tool... the way most people use it, deploying to your own servers depends on someone else's package registry/repository being up.
And GitHub is someone else's source code management tool/server.
As a matter of policy, if I can't have something on my own server (or one my org controls), I don't get to rely on it to deploy/run my application.
So I think I get the parent's comment... it's a really foreign situation, to me, to depend on the availability of stuff like this on servers I (or my org) don't control in order to deploy my application.
I'm sure the people who depend on these things look at me and say "Wait. You have to set up your own package repository and source control before you can deploy instead of using all this nice stuff that's available in the cloud? Seriously?"
Yeah. I've been on both sides of this coin. If I'm deploying cloud software (which I am, these days), then I have no problem relying on cloud software to make that deployment smoother. But if I ever go back to writing native applications, I sure as hell won't be reliant on the internet in order to manage intranet deployments. These are two different paradigms, and what works well in one doesn't make any sense in the other.
A public package manager and a public source code management tool, both of which are outside of your control. You should be able to deploy from a local [verified and audited] cache of your dependencies.
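The "verified and audited" part doesn't have to be fancy, either. A minimal sketch in Python, assuming you keep your own list of pinned SHA-256 hashes next to the cache (the deps-cache/ directory and the hashes.txt format here are placeholders, not anything npm or pip produces for you):

    # Minimal sketch: refuse to deploy unless every artifact in a local dependency
    # cache matches the SHA-256 hash recorded when it was first audited.
    # "deps-cache/" and "hashes.txt" ("<sha256>  <filename>" per line) are assumed
    # conventions for this example, not a real package-manager format.
    import hashlib
    from pathlib import Path

    CACHE_DIR = Path("deps-cache")

    def sha256(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    failures = []
    for line in Path("hashes.txt").read_text().splitlines():
        if not line.strip():
            continue
        digest, name = line.split(None, 1)
        artifact = CACHE_DIR / name.strip()
        if not artifact.is_file():
            failures.append("missing: " + name.strip())
        elif sha256(artifact) != digest:
            failures.append("hash mismatch: " + name.strip())

    if failures:
        raise SystemExit("refusing to deploy:\n" + "\n".join(failures))
    print("cache verified")

Anything more serious would also check signatures and record who approved each upgrade, but even this catches a swapped-out tarball.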
That's a good goal to strive for, but it isn't necessary or practical for everyone. Maintaining local/hosted artifact caches, verifying them, and auditing them is a big hassle, and unless you make something (e.g. fintech, healthtech) that might actually need such an audit or an emergency release, it might not be worth the trouble.
Itty bitty company making a social website on a shoestring budget/runway with very few developers? Might just be worth postponing a release a day or two if NPM or GitHub are having issues.
How does virtualenv make maintaining, auditing, and using a local mirror of dependencies trivial? Seems to me I can download a poisoned package into a venv cache just as easily as I can download it with wget, and unless I take the time to check, I’m none the wiser either way.
I was referring specifically to not being able to deploy due to a package manager being down. Of course there are still issues that can crop up with using virtualenv.
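For the specific "registry is down on deploy day" problem, pip can already install entirely from a directory you control. A rough sketch, assuming wheelhouse/ was populated ahead of time (e.g. with pip download -r requirements.txt -d wheelhouse/ while the index was reachable) and that requirements.txt pins exact versions:

    # Sketch: install strictly from a local wheel cache, never touching PyPI.
    # Assumes wheelhouse/ was filled earlier and requirements.txt pins versions.
    import subprocess
    import sys

    subprocess.run(
        [sys.executable, "-m", "pip", "install",
         "--no-index",                   # never fall back to pypi.org
         "--find-links", "wheelhouse/",  # resolve everything from the local directory
         "-r", "requirements.txt"],
        check=True,
    )

At that point PyPI being unreachable only blocks adding new dependencies, not shipping what you already have.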
I haven’t dug too much, but I believe at my work we run a server that hosts all our jars and is the source of truth for all our builds. Nothing that’s been checked in pulls straight from the Internet (fetching new dependencies from outside only happens in uncommitted code). And we’re only ~30 devs.
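That kind of setup is less exotic than it might sound. A Maven-style repository is, at bottom, a directory of artifacts in a conventional layout served over HTTP; real orgs use something like Nexus or Artifactory, but a toy stand-in (paths made up for illustration) fits in a few lines of Python:

    # Toy stand-in for an internal artifact server: serve ./repo/ (jars laid out in
    # the usual group/artifact/version structure) over HTTP on port 8081. Builds are
    # then pointed at http://this-host:8081/ instead of the public Internet.
    import functools
    from http.server import HTTPServer, SimpleHTTPRequestHandler

    handler = functools.partial(SimpleHTTPRequestHandler, directory="repo")
    HTTPServer(("0.0.0.0", 8081), handler).serve_forever()

New jars only land in repo/ after someone has vetted them, so builds never depend on the outside world being up.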
And you should also be aware of what it takes to rebuild your stack, and have something in place in case any of those external pieces disappears. If you think it's OK to rely on external tools like that to build your system, you deserve all the fallout you get when they fail.