> That's a strange expression given that the percentage of programs written in languages that rely primarily on a GC for memory management has been rising steadily for about 30 years
I wish I knew what you mean by programs relying primarily on GC. Does that include Rust?
Regardless, extrapolating current PL trends that far is a fool's errand. I'm not looking at current social/market trends but at the limits of physics and hardware.
> There's nothing related here. We were talking about how Zig's design could assist in code reviews and testing
No, let me remind you:
> > [snip] Rust defends you from common mistakes, but overall for similar codebases you see fewer bugs.
> I understand that's something you believe, but it's not supported empirically
we were talking how not having to worry about UB allows for easier defect catching.
> Compared to C++.
Overall, I think using C++ with all of its modern features should be in the same ballpark as Zig in terms of safety and speed, with Zig having better compile times. Even if it isn't a 1-to-1 comparison with Zig, we have other examples like Bun vs Deno, where Bun incurs more segfaults (per issue).
Also, I don't see how much of Zig's design could really assist code reviews and testing.
No. Most memory management in Rust is not through its GC, even though most Rust programs do use the GC to some extent.
> I'm not looking at current social/market trends but limits of physics and hardware.
The laws of physics absolutely do not predict that the relative cost of CPU to RAM will decrease substantially. Unforeseen economic events may always happen, but they are unforeseen. It's always possible that current trends would reverse, but that's a different matter from assuming they are likely to reverse.
> Overall, I think using C++ with all of its modern features should be in the ballpark of safe/fast as Zig, with Zig having a better compile time.
I don't know how reasonable it is to think that. If Rust's value comes from eliminating spatial and temporal memory safety issues, surely there's value in eliminating the more dangerous of the two, which Zig does as well as Rust (but C++ doesn't).
But even if you think that's reasonable for some reason, I think it's at least as reasonable to think the opposite, given that in almost 30 years of programming in C++, by far my biggest issue with the language has been its complexity and implicitness, and Zig fixes both. Given how radically different Zig is from C++, my preference for Zig stems precisely from it solving what is, to me, the biggest issue with C++.
> Also, I don't see how much of Zig's design could really assist code reviews and testing.
Because it's both explicit and simple. There are no hidden operations performed by a routine that do not appear in that routine's code. In C++ (or Rust), to know whether there's some hidden call to a destructor/trait, you have to examine all the types involved (to make matters worse, some of them may be inferred).
> No. Most memory management in Rust is not through its GC, even though most Rust programs do use the GC to some extent.
Most? You still haven't proved that. So most Rust programs mostly use GC, yet it's not a GC language; those are some very mind-contorting definitions.
> The laws of physics absolutely do not predict that the relative cost of CPU to RAM will decrease substantially.
The laws of physics absolutely do tell you that more computation means more heat. Trying to approach the size of atoms is another no-go. That's why current chip densities have stalled and have been kept on life support via chip stacking and gate redesigns. The 2nm process is mostly a marketing term (https://en.wikipedia.org/wiki/2_nm_process); the actual gate is around 45×20 nm.
Not to mention that at scales small enough for the random nature of atoms to matter, small irregularities mean low yields.
These put a soft cap on any exponential curve, and a hard cap in the form of a literal singularity.
> I don't know how reasonable it is to think that.
Why not? With modern collections (std::vector, std::span, and std::string) and modern pointers (std::unique_ptr, std::shared_ptr) you get decent memory safety.
> Because it's both explicit and simple.
Being a simple language doesn't guarantee lack of complexity in implementation (see Brainfuck). The question is how much language complexity buys implementation simplicity. C++ of course has neither, because it started with a backwards-compatibility goal (one it did eventually abandon).
By Zig's explicitness, you mean everything is public? I've seen that stuff backfire spectacularly, because you don't get any encapsulation, which means maximum coupling.
> So most Rust programs mostly use GC, yet it's not a GC language; those are some very mind-contorting definitions.
I don't think these definitions are very meaningful. In memory management literature, any technique that reclaims heap memory after a heap object is no longer reachable is called "garbage collection". Call it a "GC language" or not, Rust collects heap memory after objects become unreachable using a technique from the GC literature: reference counting, with a special construct for the single-reference case.
There isn't too much that you can learn just by saying "GC", because the memory/CPU tradeoffs can be more different between two GCs than between one GC and some memory management style in C. So the debate on terminology is less substantial and more about how different people colloquially refer to things with different terms. But Rc/Arc are a very common, established, and traditional GC implementation (and a simple one if you ignore the large and rather elaborate implementation of malloc/free we have these days in the runtime, which is necessary for decent performance on multicore machines).
> They put a soft cap on any exponential curve. And hard cap by placing a literal singularity.
How does any of this predict that processing will become cheaper relative to memory? Note that the trend over the past 4 decades has been the opposite.
> Being a simple language doesn't guarantee lack of complexity in implementation (see Brainfuck).
I didn't claim that every simple language is easy to understand, but I find Zig simple and easy to understand.
> By Zig's explicitness, you mean everything is public?
What I meant was that there are no calls/operations performed in a subroutine that aren't visible in the text of the subroutine. This is very important to me in low-level programming. Of course, different people from different domains may have different preferences. Much of my low-level programming was in safety-critical hard realtime, and Zig just appeals more to how I like to think about control over the hardware and about correctness. It's not universal, and I'm sure Rust appeals more to other low-level programmers.