> Rust was designed to facilitate incremental rewrites of an existing C++ library
Do you have a source for this claim? Rust is a fine language (though its advocates can be a bit belligerent sometimes). But, as a matter of fact, Rust was not designed for easy interoperability with C++ or to make gradual rewrites easy.
One design constraint of Rust was to be able to be incrementally included in a large C++ codebase: Firefox.
It turns out that this kind of direct interop with C++ is extremely difficult, and so it isn't super smooth right now. Interop with C was prioritized, and there you get zero overhead. And various projects ease Rust <-> C++ interop by going through the C ABI.
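To illustrate the mechanism, here is a minimal sketch of what the C-ABI side of such a bridge can look like. The `Counter` class and the `counter_*` function names are hypothetical, and projects such as cxx.rs automate parts of this:

```cpp
// counter_ffi.cpp -- hypothetical sketch of wrapping a C++ class behind a C ABI
// so that Rust (or any other language) can call it through plain extern "C" functions.
#include <cstdint>

class Counter {
public:
    void add(std::int32_t n) { value_ += n; }
    std::int32_t value() const { return value_; }
private:
    std::int32_t value_ = 0;
};

extern "C" {
    // The foreign side only ever sees an opaque pointer, never the C++ type itself.
    Counter* counter_new() { return new Counter(); }
    void counter_add(Counter* c, std::int32_t n) { c->add(n); }
    std::int32_t counter_value(const Counter* c) { return c->value(); }
    void counter_free(Counter* c) { delete c; }
}
```

On the Rust side these four functions would be declared in an extern "C" block (by hand or via a generator) and called with no extra overhead; anything that doesn't fit through a C signature (templates, exceptions, std containers) has to stay behind the wrapper.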
> My observation is most people who suggest Rust alternative don't use Rust.
This is a very condescending statement. In fact, for most software, when given the option of a full rewrite from C/C++ to another language, Rust is usually the least reasonable option. Fully automatic memory-managed languages should be considered first.
> People who actually use Rust known it is worth to rewrite C/C++ software in Rust, either the whole or part by part.
A full rewrite is not a feasible option for many large projects, and Rust does not make it easy to rewrite C++ code piece by piece, as the linked article clearly explains.
It's not true that developers don't like full rewrites. Most often, it's the path most developers would choose if they had enough time and funding. But in reality, you don't get either of those two.
And even if you are willing to do a full rewrite, your project probably has non-trivial dependencies on large, mature C++ libraries. You are not going to rewrite those. This is why more new projects are started in C++ every day than in Rust.
If the software was originally written in C/C++ for performance reasons (avoiding GC, controlling boxing, controlling when vtables are used, etc.), then what would the more reasonable options be?
> Fully automatic memory-managed languages should be considered first.
Those languages have existed for 20+ years so if they were ruled out as part of the original decision making then they probably still aren't applicable.
The problem was that 20 years ago some of those languages didn't have AOT compilation options, or if they did, they were commercial and priced in ways most developers would rather not bother with.
Plenty of software has been written in C or C++, only because they were the only compiled languages known to the authors.
Having said this, Go, D, Swift, OCaml, Haskell, Common Lisp, C#, F#, Java (with GraalVM, OpenJ9).
All of them offer ways to do AOT compilation and use value types, though Java is too verbose when using Panama for that, and one is better off choosing one of the others.
I've been using F# and there are actually several roadblocks for AOT F# [0]. However, a self-contained .NET JIT executable is still surprisingly small (18 MB for an ASP.NET minimal API written in F#), easy to build, and easy to deploy.
And even if the speed and memory penalties are exactly the same as they were 20 years ago, you no longer need to support a Pentium 3 with 64 MB of RAM. If you write code that would have performed well 20 years ago, then bloat it by 300%, it'll still run just fine. I'd rather have that than just about any Electron app.
> Those languages have existed for 20+ years so if they were ruled out as part of the original decision making then they probably still aren't applicable.
There are huge C++ code bases that are 15+ years old and are still actively maintained because the cost of a rewrite is too high for something that still solves the problem well enough.
Most of the large C++ projects I've worked on were written in C++ because it was the most common and mainstream language given the CPU and memory constraints of that time. We have significantly more powerful CPUs now, especially considering multicore computing, and easily 5-10x more RAM than 15 years ago. Java/Kotlin, C#, Go, TypeScript, Swift etc., are perfectly applicable to many more problem domains where C++ once dominated. I can easily agree that many C++ projects would be better off transitioning to a fully garbage-collected language than to Rust.
> Those languages have existed for 20+ years so if they were ruled out as part of the original decision making then they probably still aren't applicable.
They were ruled out when the RAM/CPU budget per $SERVER could have been an expensive 4GB/2-core for a complex server processing transactions in real-time.
Today, that same complex server can cheaply run on a 48GB/6-CPU machine. The performance constraints for exactly the same $FUNCTIONALITY are such a low hurdle, dollar-wise, that ruling those languages out no longer makes sense for most applications.
> Fully automatic memory-managed languages should be considered first.
None of which has solved concurrency. Some, like Java, prevent UB, but garbage data will still be produced if you don't diligently protect access to the shared data.
Loss of sequential consistency means that, realistically, humans cannot understand the behaviour of the software, so "garbage" seems like a reasonable characterisation of the results. It might take a team of experts considerable time to explain why your software did whatever it did, or you can just say "that's garbage" and try to fix the problem without that understanding of the results.
The hope when Java developed this memory model was that loss of SC was something humans could cope with; it was not, because the behaviour is too strange.
Even with sequential consistency, interleaving multiple threads produces results that go firmly into the garbage category. Even something as simple as incrementing a shared variable can fail.
To solve interleaving, add mutexes. And once you have mutexes, you'll also be protected from weak memory models.
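A minimal C++ sketch of both points (the file and variable names are hypothetical): without synchronization the final count usually comes out wrong because `counter += 1` is a non-atomic read-modify-write (and an unsynchronized concurrent write is undefined behaviour in C++), while the mutex-protected counter is always exact.

```cpp
// race_demo.cpp -- hypothetical sketch: an unsynchronized counter vs. a mutex-protected one.
// Build with e.g.: g++ -std=c++17 -pthread race_demo.cpp
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    constexpr int kThreads = 4;
    constexpr int kIters = 100000;

    // 1) Data race: the increment is a read-modify-write, so updates get lost
    //    (and in C++ an unsynchronized concurrent write is undefined behaviour anyway).
    long long racy_counter = 0;
    {
        std::vector<std::thread> threads;
        for (int t = 0; t < kThreads; ++t)
            threads.emplace_back([&] {
                for (int i = 0; i < kIters; ++i) racy_counter += 1;  // racy
            });
        for (auto& th : threads) th.join();
    }

    // 2) Fix: protect the shared variable with a mutex; the result is deterministic.
    long long safe_counter = 0;
    std::mutex m;
    {
        std::vector<std::thread> threads;
        for (int t = 0; t < kThreads; ++t)
            threads.emplace_back([&] {
                for (int i = 0; i < kIters; ++i) {
                    std::lock_guard<std::mutex> lock(m);
                    safe_counter += 1;
                }
            });
        for (auto& th : threads) th.join();
    }

    std::cout << "racy:  " << racy_counter << '\n'   // usually less than 400000
              << "mutex: " << safe_counter << '\n';  // always 400000
    return 0;
}
```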
> If it was written in C++, there's a good chance it was so for performance reasons.
I agree. Games like Doom or Quake would have been unthinkable had they not been written entirely in C/C++. Now, however, we have 3D game engines like Unity that expose a C# API for game logic scripting, and it seems to work just fine. Performance is becoming less of a concern in more and more problem domains.
True, but I would guess that projects more susceptible to rewrites would be low-level core libraries where state-of-the-art performance is always desirable.
Correct. And if those projects exposed a C++ API, then rewriting them in Rust would be highly problematic, because Rust did not prioritize C++/Rust interoperability (for well-understood reasons). So, you can either have a C API and move to Rust or have a C++ API and create a Rust version of the library for Rust developers, but your original users of the C++ library will stick with the C++ version.
You can use C++ libraries from Rust. That's a very normal thing to do. People start new projects in C++ either because they themselves don't want to learn a new thing or because they don't want their employees spending time learning a new thing.
Apart from cxx.rs, I think most bindings between C++ and Rust are in fact C-to-Rust bindings. So you need a C API to your C++ library (no templates, no std, etc.). So, for me, you can't really use a C++ library from Rust.
cxx only supports a subset of the C++ STL containers. It doesn't support templates or custom container implementations. So no generics, no variadic numbers of arguments, no automatic type deduction, etc.
BTW, there are also other interesting, low-power RISC architectures that are used in millions of devices, but most people have never heard of them. For example:
* SuperH [0], 32-bit only, now basically dead, but microcontrollers are still available
* AVR32 [1], 32-bit only, also quite dead
* ARC [2], 32/64-bit, still quite popular in automotive applications
> What's the appeal of this compared to a cheaper and more powerful N150 NUC, or a used mini PC
This is a very good question. The Pi 500+ is a beautiful product, but when compared in terms of price/value to the NUC and various other mini PCs, its value proposition is questionable.
Perhaps the target group is enthusiasts who had 8/16-bit "all-in-one" computers like the Commodore 64, Amiga, Atari, ZX Spectrum, Acorn, etc., in their younger years and now want to buy something similar (non-x86) for themselves or force it on their kids. :)
The reason governments no longer fight huge corporations or even clear monopolies is also due to heavy globalization. If one government destroys a monopoly (a global mega-corporation) in its country, it may strengthen the monopoly (and the global mega-corporation) in another country. So the line of thinking is, "We don't like this nasty monopoly, but at least it's our monopoly."
I don't really buy this. The government still has the ability to just ban or tax the foreign monopoly. And seemingly the EU has the ability to fine foreign businesses for being monopolies too.
China being a good example. Google being a monopoly in the rest of the world doesn't really impact them much since they just block the foreign products.
> the EU has the ability to fine foreign businesses for being monopolies too.
Specifically, the EU has no real ability to fight foreign monopolies. It does, though, have the ability to fine them and extort some pocket money from them. However, this hasn't had a tangible effect on creating more competition in those markets.
Then people accuse you of being "protectionist" or "mercantilist". Your companies aren't internationally competitive. This cripples your exports unless you can convince other countries to also block the goods that are undercutting you.
We must be working from different definitions of efficient.
Yes, the CCP can say jump and expect their corporations to do so, but when everyone in a modern economy jumps at the same time, massive oversupply is the result. More market-based economies are also prone to similar overproduction when everyone gets caught up in the same mania (see AI datacenters), but investors will eventually stop lighting their money on fire when it becomes clear that the returns aren't there. Chinese companies, on the other hand, will just keep jumping until the CCP decides that they are done jumping.
Our feedback loop is geared towards only doing things that provide a return on investment. Their feedback loop has things like social stability and global competitiveness as competing goals to actually doing productive work.
Yes, they are able to accomplish a tremendous amount when they set their minds to it, but doing a tremendous amount more of something than there is actual demand is waste, the opposite of efficiency.
When I say they prioritize social stability, I mean that they won't stop producing cars, regardless of how little economic sense it makes, because they need to keep people employed to stave off massive civil unrest. And global competitiveness counts for little when the countries they want to export to implement anti-dumping policies to protect their own industries from government-subsidized Chinese exports.
China is efficient, but largely because its companies don't actually have to obey the State. They are capitalist; they compete in the global market and follow market signals.
The CCP does put a heavy thumb on some scales, but so does every country. Perfect efficiency is not optimal when circumstances change, so states always enforce some redundancy.
There are many differences, of course, but just don't get the idea that China consists of monopolies in a command economy. They call it "capitalism with Chinese characteristics."
Any source for this? My hunch is that there is so much money sloshing around that government interests are easily swayed and conflicts of interest are relatively common now.
What would be an acceptable source for you? Which was the last US mega-corporation that the US government broke up? It certainly wasn't Microsoft or Google. Allowing huge companies to grow even bigger gives them more competitive power in the global market. This wasn't as important before we had super-globalized economies.
I don't think the question is whether monopolies are being allowed to exist, my question is what is the source as to WHY you think it is happening. A source would be any kind of proof that having a monopoly in one country is a strategic advantage over other countries. Data, publications, etc...
I cannot give you proof for the line of political thinking. :)
> ...having a monopoly in one country is a strategic advantage over other countries.
Having a large, unified domestic market is a strategic advantage because it enables companies to grow to a size that makes them formidable global competitors [0]. The United States and China are examples of this phenomenon. The point isn't whether it's advantageous to allow such companies to become monopolies. Once these companies reach a certain size, politicians are reluctant to break them up because they don't want other global companies to take their place.
My argument is that the point is whether it's advantageous to allow companies to continue to grow, because whether a company has a monopoly or an anticompetitive edge is the central argument behind breaking them up. By allowing companies to become monopolies or near-monopolies, you disturb the very unified domestic market that you initially mentioned, which hamstrings growth in the future.
I believe companies aren't broken up because they are now so big that it is a logistical nightmare to do so, and those companies lobby the crap out of politicians to kick the can down the road. NVIDIA is nearly 3x the market cap of the next non-US firm... is that really the global competition you're looking for?
Nice article, but it was probably heavily machine-translated with little human intervention. There is a message that says "Code Language: JavaScript" all over the place, but the code examples are actually in Rust, and the last (unnecessary) one is in C++.
> Apple is quietly but surely increasing its control on macOS.
This is certainly happening. However, as long as you can still install your preferred browser with its own rendering engine or a different PDF reader, the situation isn't so bad.
I'm getting great mileage out of LibreWolf on macOS (currently running Sequoia). I don't know who at Apple thought it was a great idea to permanently kill off ad blockers in Safari, but it was a terrible idea that made the world a worse place.
Of course, I say macOS is getting along fine for me, but I'm posting this comment from my workstation PC running Ubuntu 24.04. I'm pleasantly surprised by how much better my Linux experience is now than it was in 2013. From my personal experience so far, a free desktop OS that can run a web browser and play games better than the paid alternatives seems like a solved problem. I find this machine much less frustrating than my M4 MacBook Air; many of the "security" behaviors are just annoyances.
> *Without an active OS development its only 1/2 of the puzzle.*
And this is the unfortunate state of general-purpose ARM64 computing. This board, with 16 GB of RAM and an M.2 slot, would make a perfect Linux desktop machine. However, you only receive one or two major distribution updates from the hardware vendor, and then you're stuck with it.
From the article: Anthropic has been suffering from pretty terrible reliability problems.
In the past, factories used to shut down when there was a shortage of coal for steam engines or when the electricity supply failed. In the future, programmers will have factory holidays when their AI-coding language model is down.
I would argue that dependency on GitHub and Slack is not the same as dependency on AI coding agents. GitHub/Slack are just straightforward tools. You can run them locally or have similar emergency backup tools ready to run locally. But depending on AI agents is like relying on external brains that have knowledge you suddenly don't have if they disappear. Moreover, how many companies could afford to run these models locally? Some of those models aren't even open.
There are plenty of open weight agentic coding models out there. Small ones you can run on a Macbook, big heavy ones you can run on some rented cloud instance. Also, if Anthropic is down, there is still Google, OpenAI, Mistral, Deepseek and so on. This seems like not much of an issue, honestly.
The small ones that you can run on a MacBook are quite useless for programming. Once you have access to a state-of-the-art model, it's difficult to accept any downgrade. That's why I think AI-driven programming will always rely on data centers and the best models.
> if Anthropic is down, there is still Google, OpenAI, Mistral, Deepseek and so on
No company is going to pay for subscriptions to all of them. Either way, we'll see a new layer of fragility caused by overdependence on AI. Surely, though, we will adapt by learning from every major outage related to this.
> The small ones that you can run on a MacBook are quite useless for programming.
That really depends on your MacBook. :) If you throw enough RAM at it, something like qwen3-coder will be pretty good. It won't stack up to Claude, Gemini, or GPT, of course, but it's certainly better than nothing and better than useless.
> No company is going to pay for subscriptions to all of them.
They don't have to, every lab offers API based pricing. If Anthropic is down, I can hop straight into Codex and use GPT-5 via API, or Gemini via Vertex, or just hop onto AWS Bedrock and continue to use Claude etc.
I don't think this is an issue in practice, honestly.
How exactly can you run GitHub or Slack locally? Their entire purpose is being a place where people can communicate, they need to be centrally available on a network to have any function at all.
> or have similar emergency backup tools ready to run locally
Developers used to share code through version control before there were websites to serve the "upstream", and they used to communicate without bespoke messenger apps.
Their former ways of doing so still work just fine.
There are still people who dictate their emails to a secretary.
Technology changes, people often don't.
Programmers will be around for longer than anyone realises, because most people don't understand how the magic box works, let alone the arcane magics that run on it.
Yes, it is an easy sell. But when that happens, this sentence will also be viable: "We can remove the need for your company, valued at multiple millions a month"... because, after all, PMs and CEOs aren't harder to replace than programmers at that point.
Argumentum ad populum. What kind of experience do those engineers have? That matters. Another possible conclusion is that the parent was talking about use cases that are not simple web apps or marketing pages but real issues in large software.
Again, you're taking your own circumstances, or even patent inability, and extending them to the entire technology.
You've set yourself up such that all I need to do is go "I'm developing complex veterinary software including integrations with laboratory equipment" and you're completely falsified. Why expose yourself like this instead of being intellectually humble?
The problem is that AI can generate answers and code that look relevant and as if they were written by someone very competent. Since AI can generate a huge amount of code in a short time, it's difficult for the human brain to analyze it all and determine whether it's useful or just BS.
And the worst case is when AI generates great code with a tiny, hard-to-discover catch that takes hours to spot and understand.