Which is as it should be. Wikipedia isn't a news source, and especially for something like this it should be careful about allowing edits to stand until they can cite sources.
We did this at OpsLevel a few years back. Went from AWS managed NAT gateway to fck-nat (Option 1 in the article).
It’s a (small) moving part we now have to maintain. But it’s very much worth the massive cost savings in NATGateway-Bytes.
A big part of OpsLevel is that we receive all kinds of event and payload data from prod systems, so as we grew, so did our network costs. fck-nat turned that growing variable cost into an adorably small fixed one.
I looked at using fck-nat, but decided it was honestly easier to build my own Debian Trixie packer images. See my comment below[1]. How has your experience been with fck-nat?
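For anyone weighing the same tradeoff: whether you go with fck-nat or a homemade AMI, the essence of a NAT instance is tiny. A rough sketch (the interface name, CIDR, and instance ID are placeholders for your own VPC setup):

    # Enable IPv4 forwarding (bake into /etc/sysctl.d/ in the image)
    sysctl -w net.ipv4.ip_forward=1

    # Masquerade traffic from the private subnets out the instance's interface
    iptables -t nat -A POSTROUTING -o ens5 -s 10.0.0.0/16 -j MASQUERADE

    # AWS also needs source/dest check disabled on the instance
    aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check

Everything beyond that (redundancy, failover, monitoring) is the part a tool like fck-nat is packaging up for you.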
When mosh came out back in 2012, it solved a pretty real problem: ssh crapping out when you changed networks (like moving from the office to home). It solves it at the application layer, uses UDP, and is designed to work in high-loss / high-latency environments. Very cool.
At the same time, in recent years I've found that ssh running on top of Wireguard / Tailscale is way more usable than it was back then. Those tools address the roaming-IP issue directly at the network layer.
So while there are still issues with ssh / TCP if you're on a really crappy network (heavy packet loss, satellite link, etc), those have been less common in my experience compared to IP changes.
The “killer use case” for Mosh feels a lot less killer now.
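(For anyone who hasn't tried it: usage is basically a drop-in for ssh; the main operational difference is the server-side UDP port you need to open. The port numbers below are just examples.)

    # drop-in replacement for ssh
    mosh user@host

    # pin the server-side UDP port (handy for firewall rules) and
    # pass options through to the underlying ssh connection
    mosh -p 60001 --ssh="ssh -p 2222" user@host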
The killer use case was roaming IPs, but I'd say the killer use case today is battling latency. A lot more people are computing remotely now, even on their phones. Even with 5G UW, I still get bursts of crappy latency. And now some people are using 5G as their home internet.
It definitely solves problems when traveling and dealing with crappy airport/hotel/AirBnB/conference wifi that is slow or overloaded.
I used to use mosh when riding Amtrak and using the free wifi. Without it, I rarely could even stay connected long enough to run more than a command or two, but using mosh completely solved it. I had no idea people considered handling changes in the IP to be the primary use case.
Even my home wifi sometimes has enough packet loss to kill SSH connections. And if my computer sleeps for even a quarter-second, yeah, connection dead.
Mosh means a lot less, "Sigh..." up-arrow, enter. A small thing, but why live with it when you can just not?
I feel a bit silly for not noticing this before. Over the last year or so I've often wondered when ssh added protocol-level support for session resume. I'd open my laptop on a new network and everything would be ready to go. But of course, it's nothing to do with ssh, it's just that I started using tailscale.
And really, they didn't even do anything special. This was a killer reason we loved Wireguard at our company and pitched heavily to keep it around to the company that acquired us and wanted us to switch to their VPN appliance instead.
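For context, the roaming falls out of WireGuard's design: peers are identified by public key, and a peer's endpoint is simply the source address of the last authenticated packet, so it updates automatically when you change networks. A minimal client config (keys, addresses, and endpoint here are placeholders) looks roughly like:

    [Interface]
    PrivateKey = <client-private-key>
    Address = 10.10.0.2/32

    [Peer]
    PublicKey = <server-public-key>
    Endpoint = vpn.example.com:51820
    AllowedIPs = 10.10.0.0/24
    # keeps the NAT mapping warm so the tunnel survives sleeps and network changes
    PersistentKeepalive = 25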
The main thing a big company IT admin wants is control over the users. At a previous company, we would ship really crappy software, by our own admission, to "enterprise" customers, and all we had to do to keep them happy was give them a fancy control panel that made them feel like a king.
Yes, flattery works and pandering to ego works. Too bad you can only push it so far... at some point the CTO/CEO notices.
Agreed. In this company the IT team was being spread thin without their budget being increased so Tailscale was the obvious solution here, but a non-starter for them. "We already pay for a VPN. Let's just use that."
We managed to survive with our solution for a while thanks to it being super simple and "free" besides the instance running wireguard. Last I heard (I left), they shut that all down a few years ago.
Whether an SSH session survives mostly comes down to:
1. whether your IP is persistent (i.e. you can reuse the same socket)
2. your SSH keepalive settings
3. how quickly your OS can wake up its network stack
If the socket persists, then it should be possible for SSH to survive longer periods of network inactivity given the right keepalive settings.
When I used to work with on-prem systems, I'd run non-standard ssh keepalive settings so I could bounce network switches without losing access to the servers sitting in between.
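Concretely, the client-side knobs live in ~/.ssh/config; the values below are illustrative rather than recommendations (sshd has the matching ClientAliveInterval / ClientAliveCountMax on the server side):

    # ~/.ssh/config -- example values, tune to taste
    Host *
        # send an application-level probe through the encrypted channel every 15s
        ServerAliveInterval 15
        # tolerate ~20 missed probes (~5 minutes) before declaring the connection dead
        ServerAliveCountMax 20
        # OS-level TCP keepalives too (this is already the OpenSSH default)
        TCPKeepAlive yes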
You're not wrong to think Tailscale is primarily a software company, and yes, salaries are a big part of any software company's costs. But it's definitely more complex than just payroll.
A few other things:
1. Go-to-market costs
Even with Tailscale's amazing product-led growth, you eventually hit a ceiling. Scaling into enterprise means real sales and marketing spend—think field sales, events, paid acquisition, content, partnerships, etc. These aren't trivial line items.
2. Enterprise sales motion
Selling to large orgs is a different beast. Longer cycles, custom security reviews, procurement bureaucracy... it all requires dedicated teams. Those teams cost money and take time to ramp.
3. Product and infra
Though Tailscale uses a control-plane-only model (which helps with infra cost), there's still significant R&D investment. As the product footprint grows (ACLs, policy routing, audit logging, device management), you need more engineers, PMs, designers, QA, support. Growth adds complexity.
4. Strategic bets
Companies at this stage often use capital to fund moonshots (like rethinking what secure networking looks like when identity is the core primitive instead of IP addresses). I don't know how they're thinking about it, but it may mean building new standards on top of the duct-taped 1980s-era networking stack the modern Internet still runs on. It's not just product evolution, it's protocol-level reinvention. That kind of standardization and stewardship takes a lot of time and a lot of dollars.
$160M is a big number. But scaling a category-defining infrastructure company isn't cheap and it's about more than just paying engineers.
> but it may mean building new standards on top of the duct-taped 1980s-era networking stack the modern Internet still runs on.
That’s a path directly into a money-burning machine that goes nowhere. This has been tried so many times by far larger companies, academics, and research labs, but it never works (see all the proposals for things like content-addressable networking, etc.). You either get zero adoption, or you just run it on top of IPv4/6 anyway and give up on most of the problems you set out to solve.
IPv6 is still struggling to kill IPv4, 20 years after support first appeared in operating systems and routers. That’s a protocol with a clear upside, somewhat socket-compatible, and backed by the IETF and hundreds of networking companies.
But even today it’s struggling and no company got rich on IPv6.
IPv6 has struggled in adoption not because it’s bad, but because it requires a full-stack cutover, from edge devices all the way to ISP infra. That’s a non-starter unless you’re doing greenfield deployments.
Tailscale, on the other hand, doesn’t need to wait for the Internet to upgrade. Their model sits on top of the existing stack, works through NATs, and focuses on "identity-first networking". They can evolve at the transport or application layer rather than ripping and replacing at the network layer. That gives them way more flexibility to innovate without requiring global consensus.
Again, I don’t know what their specific plans are, but if they’re chasing something at that layer, it’s not crazy to think of it more like building a new abstraction on top of TCP/IP vs. trying to replace it.
I’m the CTO at OpsLevel, where we’ve been running a Rails monolith for ~6 years. We started on Rails 5, upgraded to 7, and are currently moving to 8. Before this, I worked on Rails at PagerDuty (including splitting a monolith into microservices) and on Shopify’s “majestic” monolith.
The best thing about Rails is its strong, opinionated defaults for building web applications. It handles HTTP request routing, data marshalling, SQL interactions, authentication/authorization, database migrations, job processing, and more. That means you can focus on business logic instead of wiring up the basics.
Rails isn’t as fast or lightweight as Go, but they solve different problems. For most web apps, the bottleneck is I/O, not CPU. Rails optimizes for developer productivity, not raw performance, and that tradeoff is often worth it, especially when speed of iteration matters more than squeezing out every last cycle.
>For most web apps, the bottleneck is I/O, not CPU.
We just had a blog post submission on HN that suggests otherwise. At least for RoR.
Luckily we have YJIT, and we are finally understanding that maybe we are actually CPU bound, which means we can look into that rather than always assuming it's an I/O or DB problem.
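For reference, turning YJIT on (Ruby 3.2+) is just a flag or an environment variable, so it's cheap to test whether it moves your numbers; the commands below are generic examples, not specific to that post:

    # enable YJIT for a single run and confirm it's active
    ruby --yjit -e 'puts RubyVM::YJIT.enabled?'   # => true

    # or enable it process-wide, e.g. for a Rails server
    RUBY_YJIT_ENABLE=1 bundle exec rails server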
Fair enough, but the system isn’t set up to optimize for the happiness of founders and employees. It’s set up to maximize returns, which, agreed, end up concentrated in a very few rich outcomes.
When you control the devices you're deploying to, there is little reason not to deploy as often as you can. Keeping your changesets small helps a great deal in isolating bugs, and you can do that either by slowing down product iterations (and getting poorer feedback from each) or by releasing more often. This is ubiquitous in web development.
Weekly releases (or slower) are appropriate when you rely on users to update their software or firmware. Most mobile app development does this.