The pricing isn’t due to AWS. Even if you used standard S3 and paid for data retrieval for your entire backup every single month, tarsnap is over 3x the price of just using S3 yourself. The markup on tarsnap is wild.
Using something like restic or borgbackup+rclone is pretty much the same experience as tarsnap but at a fraction of the price.
Yeah that pricing is crazy for something without any of the security that comes with using a BigCo. I've bounced off it in the past as soon as I got to their cutesy pricing model, but I just played with the calculator linked here to model my needs -- three thousand USD a year for 1 TB of cold storage??
I appreciate you using the calculator! It's at [1] for anyone who wants to futz around with it.
$3000 per TB-year is accurate to my knowledge, and yes, it is at least one, and probably two, orders of magnitude more than what you can get with more general-purpose systems. Backblaze B2 is $72 per TB-year; AWS Glacier is $12 per TB-year, I believe; purchasing two 20 TB Seagate drives for $300 apiece, mirroring them, and replacing them every 3 years gives you about $10 per TB-year (potentially - most of us don't have 20 TB to back up in our personal lives). Those are the best prices I've been able to find with some looking [2].
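If anyone wants to sanity-check those numbers, here's a rough back-of-the-envelope sketch in Python using only the prices quoted above (the mirrored-pair, replace-every-3-years assumption is the same one from the comment, not a real TCO model):

    # Rough per-TB-year cost comparison using the prices quoted above.
    def drive_cost_per_tb_year(price_per_drive, capacity_tb, lifetime_years, copies=2):
        # Mirrored pair: you buy `copies` drives but only get one drive's capacity.
        return (price_per_drive * copies) / (capacity_tb * lifetime_years)

    costs = {
        "tarsnap":      0.25 * 1000 * 12,                    # $0.25/GB-month -> $/TB-year
        "backblaze_b2": 72,                                  # quoted above
        "aws_glacier":  12,                                  # quoted above, from memory
        "diy_mirrored": drive_cost_per_tb_year(300, 20, 3),  # two 20 TB drives @ $300 each
    }

    for name, dollars in sorted(costs.items(), key=lambda kv: kv[1]):
        print(f"{name:>12}: ${dollars:,.2f} per TB-year")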
To me, when I was building out the digital resiliency audit, the pricing and model just seemed to tell me that tarsnap was for very specific kinds of critical data backups and was not a great fit for general-purpose stuff. Like a lot of other people here I also have a general-purpose restic-based 3-2-1 backup going for the ~150 GB in /home I back up. [3] My use of tarsnap is partly a cheap hedge, for the handful of bytes of data I genuinely cannot afford to lose, against issues with restic, Backblaze B2, systemd, etc.
Tarsnap has always been expensive. More than a decade ago (April 2014, to be precise), @patio11 suggested that tarsnap should increase its pricing. [1] Here’s the HN thread on that post. [2]
All the granular calculations (picodollars) on storage used plus time are fine. But tarsnap was always very expensive for larger amounts of data, especially data that cannot be well deduplicated.
Do they charge for actual bandwidth as well? Seems like it. From tarsnap.com:
> Tarsnap uses a prepaid model based on actual usage:
> Storage: 250 picodollars / byte-month of encoded data ($0.25 / GB-month)
> Bandwidth: 250 picodollars / byte of encoded data ($0.25 / GB)
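The picodollar rates convert to familiar units easily enough. A quick sketch (the 100 GB stored / 20 GB uploaded figures are made-up example numbers, not anyone's actual usage):

    # Convert tarsnap's picodollar-per-byte rates into $/GB and estimate a
    # hypothetical monthly bill (example figures only).
    PICO = 1e-12
    STORAGE_RATE = 250 * PICO    # dollars per byte-month of encoded data
    BANDWIDTH_RATE = 250 * PICO  # dollars per byte of encoded data transferred
    GB = 1e9                     # decimal gigabytes, matching the quoted $0.25/GB

    print(f"storage:   ${STORAGE_RATE * GB:.2f} per GB-month")  # $0.25
    print(f"bandwidth: ${BANDWIDTH_RATE * GB:.2f} per GB")      # $0.25

    # Hypothetical month: 100 GB of encoded data stored, 20 GB uploaded.
    stored_gb, uploaded_gb = 100, 20
    bill = (STORAGE_RATE * stored_gb + BANDWIDTH_RATE * uploaded_gb) * GB
    print(f"example bill: ${bill:.2f}")                         # $30.00

Note that both rates apply to encoded data, i.e. after tarsnap's deduplication and compression, so data that deduplicates well is billed for less than its raw size.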
Is there anything similar ("central point of SSH access/keys management") that is not Cloudflare? I know about Tailscale and its SSH, but recently it introduced so much latency (even though they say it's P2P between A and B) that it's unusable.
Ideally something self-hosted, but that's not a hard requirement.
Please note that Hetzner does this only in a couple of cases: a) fake account data; b) previous strikes with them (unpaid invoices, abuse, etc.); c) in some cases the customer is from a country they do not do business with. I bet a gazillion the OP above falls under a or b.
Their ToS also classifies Crypto-Mining, farming or plotting (whatever that is) as grounds for cancellation.
Anything forbidden by German law also results in a cancellation. So if you had, for example, posted something before 2017 about a state leader that might seem like an insult, your server might be gone. ( https://www.loc.gov/item/global-legal-monitor/2017-07-26/ger... )
Make sure you avoid p2p stuff that scans for other servers on the Hetzner network, too, even if it's legitimate and not infringing on copyright. They won't ban you right away, but their systems will detect it and give you 24h to confirm you're not infected with malware before shutting down the server. For IPFS or BitTorrent, you should be able to use them for legitimate purposes after blocking the local IPs.
I'm awaiting downvotes, but honestly, if they don't give you a reason, anything can be one, and refusing a service on arbitrary grounds seems illegal to me :D
Are you sure about that? I know of a precedent in Poland, where a company refused to provide a service (printing some kind of invitation cards) to some LGBT people, and was punished for it.
I imagine the laws in Germany are similar to the laws in Poland in that regard, both countries being in the European Union.
In the USA, I really doubt any company could refuse to serve black people. Or women.
I wonder if the downvoters of my previous message share your (I'd say quite wrong) opinion or there's another reason for the downvotes - I'm up for a discussion!
I don't know about the downvotes, and I'm not a lawyer so take this with a grain of salt, but there's a difference between a service to the public (e.g. a café or a restaurant) and a service such as Hetzner.
Found a similar thing the other day [0], but the thing is, if this isn't an app, it's not really usable. People tend to listen to this while resting (in bed, for example), so it makes no sense to have it only in the browser. For example, [0] stops playing when the screen is off/locked.
Yeah, it's actually surprisingly tricky to get sound to persist with the screen off, especially on iOS, but I managed it in the end.
I'm testing a PWA version at the moment too, so it'll be installable to your home screen - the test version at https://test.ambiph.one is PWA-enabled if anyone would like to try it out.
You should be able to set "smtpd_data_restrictions = reject_unauth_pipelining" in your main.cf
This option is available in "older" postfix versions and even works with postfix 2.10.
Don't know if it is as good a measure as the 'smtpd_forbid_unauth_pipelining' that is recommended for newer versions.
Does everything really need to be Docker these days? Especially "network stuff". I mean, it really makes me want to go and grow potatoes instead of doing any "IT".
It makes life so much easier. Time is non renewable, and if you want to pull a project apart for whatever reason, you still can.
“docker pull”, deploy, and one can move on to the next whatever. You can deploy this to a Synology NAS, a Raspberry Pi, or Heroku with a few clicks (or even an appropriately configured router that supports containers if you’re not running something providing this functionality natively).
(DevOps/infra monkey before moving to infosec, embrace the container concept)
Let's not overstate things here. It may well look like "docker pull", deploy, nothing, ok, how do I configure this thing, oh goodie here's the uncommented yaml, deploy again, strange error, headscratch, oh it's dependent on using the .68.x network which I've already used elsewhere, let's rename those docker networks, deploy again, what?, oh it must have initialized a temporary password to the database when it didn't come up, let's wipe it all clean and pull again because I have no idea what kind of state is in those persistent volumes, deploy, rats! forgot the network renumbering, wipe clean, configure again, deploy again, yay!
Provided you already turned off everything that can interfere with this stuff, including IPv6, any security like SELinux, grsecurity and friends, and you let it administer your netfilter firewall for you. Don't forget to check if you accidentally exposed some redis instance to the public Internet.
(And yes, I have embraced the concept and work daily with similar things, albeit in a larger scale. Let's just not kid ourselves it's easier than it is though. Just because an out of the box deploy goes sideways doesn't mean you are dumb.)
Almost none of what you just mentioned has anything to do with Docker, and you can easily have that much trouble just running a binary. (In fact, I've found that many projects have better documentation for their Docker image than for running it natively.) Yes, there are some Docker-specific things you sometimes have to debug (especially with networking), but I've had far more trouble getting software running natively on my machine due to mismatches in local configuration, installed library versions, directory conventions, etc vs what's expected. It's also far easier to blow away all the containers and volumes and start over with Docker; no need to hunt down that config file in an obscure place that's still messing with the deployment.
This is a strange argument to me. It’s essentially that the additional complexity of docker compose is acceptable because other things are unnecessarily complex. The problem is complexity. There are many great projects that are just “build the binary, edit config file, and run it,” and why should things be more complex than that? It’s wild to me what people will put up with.
> It’s essentially that the additional complexity of docker compose is acceptable because other things are unnecessarily complex.
Not quite. My point was that the complexity of Docker is, in many cases, worth it because it hides a lot of the complexity of running software. Yes, you trade one problem for another, but the nice thing about Docker is, if you really go all in on it, the knowledge of how to use it transfers to pretty much any software you want to run.
For example, I wanted to run a JVM-based service the other day. Having never done this before, spinning it up with Docker took two minutes—I didn't have to figure out JDK vs runtime, which version I needed to install, etc. And yet, if I want to configure it past the defaults in the future, the image exposes several environment variables that make it easy.
> none of what you just mentioned has anything to do with Docker
[...]
> there are some Docker-specific things you sometimes have to debug
Not sure what to make of this. Networking was specifically called out as an example.
But there are stories to share about the storage layer too. Lots of driver specific things that leak through that abstraction.
One may use Docker for a lot of things, but ease of operations is not one of them. There's a reason both Red Hat and Ubuntu had to make up their own formats, neither of which is trivial to use, but there was just no way they could have done it with Docker instead. It's unlikely they're both wrong here.
I upgraded my PiHole running on an Allwinner H3 SBC last year. It wouldn't start; it turned out some indirect dependency wasn't compiled for the ARMv7 platform.
No worries, just specify the previous version in my launch script, literally changing a couple of digits, and I'm back up and running in seconds.
I'm sure I could get it done using apt, but it was literally changing some numbers in a script and rerunning it.
As someone who just wants things to work, Docker has made things significantly better.
To add to this, for me it's not specifically about the ease of setup, which isn't that much easier (although it's nice that it's standardized). It's more about the teardown if it's not something for you. Services can leave a lot of residuals in the system: files in different places, unwanted dependencies, changes in system configuration. Removing a docker container is very clean, with the remaining stuff easily identifiable.
If the Linux ecosystem could get its act together, standardize, and consolidate all the totally needless and pointless distribution fragmentation, we could challenge this.
Docker took off because there is no Linux. There are 50 different slightly incompatible OSes. So the best way to distribute software is to basically tar up the entire filesystem and distribute that. Dependency management has failed because there’s just too much sprawl.
One illustrative example: OpenSSL has divergent naming and versioning schemes across different versions of distributions that use the same Debian package manager. So you either build your packages at least four or five times, Dockerize, or statically link OpenSSL. That’s just for dpkg based distros too! Then there is RPM, APK, and several others I can’t recall right now.
BTW Windows has a bit of the same disease and, being from one company, has a lot less of an excuse. OS standardization and dependency standardization is very hard to get right, especially at scale.
Apple macOS is the only OS you can ship software for without statically linking or bundling everything and be reasonably sure it will work… as long as you are not going back more than two or three versions.
There are several issues here which tend to get mixed up a lot.
Yes, a dpkg is built for a distribution, and not only that but a specific version of a distribution. So they tend to get re-built a lot. But this is something buildhosts do. What you upload is the package source.
If you want to distribute a package to work on "Linux" in general, then you can't build it for a specific distribution. Then you bundle all the shared libraries and other dependencies. (Or make a static build, but for various reasons this is less common.) Do not try to rely on the naming scheme of openssl, or anything else really. This is what most games do, and the firefox tarball, and most other commercial software for Linux.
There are of course downsides to this. You have to build a new package if your openssl has a security issue, for example. But that's how most software is distributed on most other operating systems, including Windows. This is also how Docker images are built.
The alternative is to build packages for a specific distribution and release, and as stated above, that takes a bit of integration work.
There are issues with both alternatives, but they should not be confused.
> Docker took off because there is no Linux. There are 50 different slightly incompatible OSes. So the best way to distribute software is to basically tar up the entire filesystem and distribute that. Dependency management has failed because there’s just too much sprawl.
That's not an accurate description of the main motivation for Docker. It's a nice secondary benefit, sure.
To some degree "there can be a ton of different versions of things" only applies to core OS packages. You mention Mac, but what version of python ships with macOS? What if I need a version other than what ships by default?
At a certain point you need to start defining the environment regardless of OS, and docker works as a tool that handles environment definition for literally any program (same thing works for ruby, java, python, etc). It handles more complex environment definition than packages, but is lighter than a VM. It's a middle ground, which is a great compromise for some cases and not for others.
Varying use cases and lots of flexibility are also the reason why linux is never going to just standardize the ecosystem and say "ok, there is only 1 openSSL package now." Some people see the ability to have a version of linux that is completely bonkers in versioning as a strength, akin to how some places still run old Windows 95 computers because newer versions don't work properly. On linux, you could have old 1995 packages for a specific app, but keep the rest of the system on modern, secure packages.
It used to be completely free hosting, that's one thing that was great about it. Same thing made Sourceforge so completely dominant that it took many years for projects to move off it even after more suitable alternatives were made available.
But the main use case was probably convenience. It's a very quick way for Mac and Windows users to get a small Linux VM up and running, and utilize the copious amount of software written for it.
These days it's mostly standard, for better or worse. There are a handful of vendor-independent ways to distribute software, but this works with most cloud vendors. Is it good? Probably not, but few industry standards are.
> If the Linux ecosystem could get its act together, standardize, and consolidate all the totally needless and pointless distribution fragmentation we could challenge this.
Maybe, but that will never happen because the ecosystem got here by being open enough that people could be dissatisfied with existing stuff and make their own thing, and to a remarkable degree things are intercompatible. It's always been like this; just because there are 20 people working on distro A and 20 people working on distro B doesn't mean combining them would get 40 people working on distro AB. (In practice, attempting it would probably result in the creation of distros C-F as dissidents forked off.)
> Docker took off because there is no Linux. There are 50 different slightly incompatible OSes. So the best way to distribute software is to basically tar up the entire filesystem and distribute that. Dependency management has failed because there’s just too much sprawl.
I think I agree with you; part of the problem is that people treat "Linux" as an OS, when it's a piece that's used by many OSs that appear similar in some ways.
> Apple macOS is the only OS you can ship software for without statically linking or bundling everything and be reasonably sure it will work… as long as you are not going back more than two or three versions.
...but then by the same exact logic as the previous point, I think this falls apart; macOS isn't the only OS you can target as a stable system. In fact, I would argue that there are a lot of OSs where you can target version N and have your software work on N+1, N+2, and likely even more extreme removes. Last I looked, for example, Google's GCP SDK shipped a .deb that was built against Ubuntu 16.04 specifically because that let them build a single package that worked on everything from that version forward. I have personally transplanted programs from RHEL 5 to (CentOS) 7 and they just worked. Within a single OS, this is perfectly doable.
I have a feeling the whole Docker (or application container) thing took off when "non Linux people" (read: developers) tried to be sysadmins too and failed.
The best thing since sliced bread is apps/software packed into a single Go binary. It runs everywhere; you only need to rsync/scp it to a million other places and it (usually) acts like a normal Linux program/daemon.
That’s true, but IMHO that’s an indictment of Linux, not them. It’s 2023 and there is no reason system administration should be this hard unless you are doing very unusual things.
The Go approach is just static linking. Rust often does the same, though it’s not always the default as it is in Go, and you can do the same with C and C++ for all but libc with a bit of makefile hacking.
Statically linking the world is the alternative approach to containers.
One problem with SysAdmin stuff is that, like crypto, we keep telling folk it's too hard and to just outsource. While I think "don't roll your own crypto" makes sense, we've done a disservice to the trade by discouraging self-hosting and other ways to practice the craft. Don't run your own infra, use AWS. Don't host your own email, it's too hard, just use a provider. Etc. Then a decade later... hey, how come nobody is good at SysAdmin?
Most of the "don't do X it's too hard" is just $corp who wants to sell their preferred solution trying to convince you to buy their SaaS equivalent of a Bash script.
There is a myriad of amazing tooling out there that the everyday person could greatly benefit from in their day-to-day life. A lot of it has a very high barrier to entry in terms of technical knowledge. By simplifying this setup down to a simple Docker compose file, I believe I have allowed the layperson to play and experiment in the freedom of their own home with technology they may have otherwise only been eyeing.
I completely agree and want to add that the readme file does a good job of letting me know what this thing is and why I should use it. I really appreciate when developers take the time to be inclusive by writing for a less technical audience. I will at least try it out and see what it is all about. I have been looking to add more services to my pihole.
no, not everything has to be docker. for example, none of wireguard, pihole, or unbound have to be docker. you are welcome to install all those things yourself.
but the whole project here is to wrap up a bunch of other projects in a way that makes them easy to install and configure with minimal fuss. docker is perfect for that. if you want to be fussy and complain about the tools other people choose, then projects like this probably aren't of much interest to you.
Can you easily debug stuff? Can you tail -f /var/fing/log and see why X or Y does not work (without introducing another container/whatever just for this)? I know I am in the minority... but the whole concept of "this runs X and this runs Y, but the storage/data is over there, having nothing to do with either X or Y" is F'd up.
Yeah, you can easily pull and run things, but you have no idea how they work or what they do, and when things break the whole idea is to pull it again and rerun.
I have nothing against containers... real system ones (LXC, for example).
It seems there's a bit of a misunderstanding about how containers work. Firstly, debugging in containers is not inherently more difficult than on a traditional system. You can indeed `tail -f /var/log/...` within a container just as you would on the host system. Tools like Docker provide commands like `docker exec` to run commands within a running container, making debugging straightforward.
The concept of separating runtime (X or Y) from data storage is not unique to containers; it's a best practice in software design called separation of concerns. This separation makes applications more modular, easier to scale, and allows for better resource optimization.
The "pull it again and run" mentality is a simplification. While containers do promote immutability, where if something goes wrong you can restart from a known good state, it's not the only way to troubleshoot issues. The idea is to have a consistent environment, but it doesn't prevent you from debugging or understanding the internals.
Lastly, while there are differences between application containers (like Docker) and system containers (like LXC), they both leverage Linux kernel features to provide isolation. It's more about the use case and preference than one being "real" and the other not.
I'm not the original poster, but with the default config logs are worse with docker. Running `docker exec` to check /var/log in a container is pointless; the application writes to stdout. So you do `docker logs`.
And by default logs are stored in JSON format in a single file per container; grepping `docker logs` feels slower than grepping a file. And the option to read logs for the last n hours is incredibly slow -- I think it reads the file from the beginning until it reaches the desired timestamp.
you can tail -f the container logs, which are in /var/lib/docker I think
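Specifically, with the default json-file log driver they end up under /var/lib/docker/containers/<container-id>/<container-id>-json.log, one JSON object per line. Here's a rough sketch of reading them directly and filtering by time; the path layout and field names assume that default driver (other log drivers store things elsewhere), "abc123" is a made-up container ID prefix, and you'll generally need root to read these files:

    # Sketch: read a container's json-file logs directly, keeping only recent lines.
    # Each line looks like:
    #   {"log": "...", "stream": "stdout", "time": "2023-01-01T00:00:00.123456789Z"}
    import glob
    import json
    from datetime import datetime, timedelta, timezone

    def recent_log_lines(container_id_prefix, hours=1):
        cutoff = datetime.now(timezone.utc) - timedelta(hours=hours)
        pattern = f"/var/lib/docker/containers/{container_id_prefix}*/*-json.log"
        for path in glob.glob(pattern):
            with open(path) as f:
                for line in f:
                    entry = json.loads(line)
                    # Trim nanoseconds so fromisoformat() can parse the timestamp.
                    ts = datetime.fromisoformat(entry["time"][:26] + "+00:00")
                    if ts >= cutoff:
                        yield entry["stream"], entry["log"].rstrip("\n")

    for stream, message in recent_log_lines("abc123", hours=2):
        print(f"[{stream}] {message}")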
I recently came across a talk about running openstack in kubernetes, which sounded like a crazy idea; openstack needs to do all kinds of things not allowed by default for containers, e.g. create network interfaces and insert kernel modules. But people still did it for some reason -- one of them was that it's easier to find someone with k8s experience than with openstack experience. And they liked the self-healing properties of k8s.
My personal biggest peeve is how Docker still doesn't play well with a VPN running on the host. It's incredibly annoying and an issue I frequently run into on my home setup.
It's crazy to me that people push it so much given this issue, aren't VPNs even more common in corporate settings, especially with remote work nowadays?
I find it easier to just spin up a full VM than deal with docker's sensitivities, and it feels a bit ridiculous to run a VM and then setup docker within it instead of just having appropriate VM images.
I think that has more to do with not understanding routing and firewalls. VPNs usually have something called a kill switch that force-tunnels all traffic to avoid leaks.
While I can see it does at times make it more difficult to do certain things, with the proper permissions, know-how, and setup there is nothing it cannot do.
So we're back to where we started, just tinker "a little" with the setup to try to make it work, exactly the issue Docker claimed to be aimed at solving.
I tried running a docker based setup for a year on my homeserver, thinking that using it for some time would help me get over my instinctive revulsion towards software that makes Docker the only way to use it, the way that forcing myself to use Python had helped me get over disdain for it back during the early days of the transition from 2 to 3. Didn't help at all, it was still a pita to rely on. Went back to proper installs, couldn't be happier.
What Nix provides in reproducibility and ease of deployment, it certainly makes up for with poor documentation and opaque error messages. I've been trying to learn it for the past few weeks in my spare time for a personal project, and I still struggle with basic things. I love the idea but they really need to invest in better docs, tutorials, and error messages.
Docker is great, with docker volumes I can move things between different machines with ease. Do pretty much everything with docker compose these days. Also it doesn’t clutter up my base install, and it’s a lot lighter weight than a virtual machine.
Looks like much of this, for both Colin and us, could be solved by moving it away from AWS.