This can't possibly be guaranteed to work just by disabling the checker, can it? If Rust optimizes based on borrow-checker assumptions (which I understand it can and does) then wouldn't violating them be UB, unless you also mess with the compiler to disable those optimizations?
> This can't possibly be guaranteed to work just by disabling the checker, can it?
It works in the sense that the borrow checker stops bothering you and the compiler will compile your code. It will even work fine as long as you don't write code which invokes UB (which does include code which would not pass the borrow checker, as the borrow checker necessarily rejects valid programs in order to forbid all invalid programs).
> It will even work fine as long as you don't write code which invokes UB (which does include code which would not pass the borrow checker, as the borrow checker necessarily rejects valid programs in order to forbid all invalid programs).
To be clear, by "this" I meant "[allowing] code that would normally violate Rust's borrowing rules to compile and run successfully," which both of us seem to believe to be UB.
Not quite, there is code which fails borrow checking but is safe and sound.
That is part of why a number of people have been waiting for Polonius and/or the tree borrows model; the most classic cases are relatively trivial "check then update" patterns which fail to borrow check but are obviously non-problematic, e.g.
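A sketch of that kind of case (illustrative names; roughly the get-or-insert example the Polonius discussions use):

    use std::collections::HashMap;

    // Rejected by the current (NLL) borrow checker even though it is sound:
    // the mutable borrow taken for the `Some` arm is treated as still live in
    // the `None` arm, because it may be returned from the function.
    fn get_default(map: &mut HashMap<u32, String>, key: u32) -> &mut String {
        match map.get_mut(&key) {
            Some(value) => value,
            None => {
                // error[E0499]: cannot borrow `*map` as mutable more than once
                map.insert(key, String::new());
                map.get_mut(&key).unwrap()
            }
        }
    }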
Though ultimately even if either or both efforts bear fruit they will still reject programs which are well formed: that is the halting problem, a compiler can either reject all invalid programs or accept all valid programs, but it cannot do both, and the former is generally considered more valuable, so in order to reject all invalid programs compilers will necessarily reject some valid programs.
I don't feel you're quite following what I'm saying, unfortunately. In your specific example: couldn't the optimizer just optimize out all the mutations you've written, under the assumption that such a program would not have passed borrow-checking and thus isn't a case it needs to handle? Wouldn't this make it so that if you disabled borrow checking, you would get incorrect codegen vs. what you intended? This seems like an entirely legal and sane optimization; I'm not sure how you're assuming something like this is outside the realm of possibility.
You seem to be operating on some (generous) underlying assumptions about what a borrow-checker-violating Rust program really means and what optimizations the compiler has the liberty to make even when borrow-checker assumptions are violated. But are you sure that assumption is well-founded? What is it formally based on?
> I don't feel you're quite following what I'm saying unfortunately.
Disagreeing with your plainly incorrect assertion is not "not following" what you're saying.
> In your specific example: couldn't the optimizer just optimize out all the mutations you've written, under the assumption that such a program would not have passed borrow-checking and thus isn't a case it needs to handle?
No. The borrow checker ensures specific rules are followed, but the borrow checker is not the rules themselves, and the optimisations are based on the underlying rules not on the borrow checker.
The program above abides by the underlying rules, it's literally the sort of example used by people working on the next-generation borrow checker, but the current borrow checker is not able to understand that.
> This seems like an entirely legal and sane optimization
It's neither of those things.
> You seem to be operating on some (generous) underlying assumptions about what a borrow-checker-violating Rust program really means and what optimizations the compiler has the liberty to make even when borrow-checker assumptions are violated. But are you sure that assumption is well-founded? What is it formally based on?
They are not assumptions, or generous. They are an understanding of the gap between the capabilities of the NLL borrow checker and "Behavior considered undefined".
They are what anyone working on the borrow checker sees as limitations of the borrow checker (some fixable, others intrinsic).
> Disagreeing with your plainly incorrect assertion is not "not following" what you're saying.
I'm sorry, that particular comment wasn't disagreeing so much as missing my point entirely. It gave an example that would've still suffered from the same problem I was talking about if the optimizer relied on the same borrowing assumptions (hence my subsequent comment clarifying this), and it also diverted the discussion toward explaining the basics of incompleteness and the halting problem to me. Neither of those indicated a following of my point at all, and both indicated a misunderstanding of where my (mis)understanding was.
But your new comment tracks it now (thanks).
>> This seems like an entirely legal and sane optimization
> It's neither of those things.
Thanks for clarifying.
> They are what anyone working on the borrow checker sees as limitations of the borrow checker (some fixable, others intrinsic).
I understand this, but it (again) doesn't contradict my point. Just because something is a known limitation of an earlier stage like the borrow checker, that doesn't mean that relaxing it would never require a change to later stages of the compiler, which never had to consider other possibilities before. Just like how limitations of the type checker don't imply that relaxing them would automatically cause the backend to generate correct code. It depends how the compiler is written and what the underlying rules and assumptions are for the later stages, and what's actually tested in practice, hence this entire question.
Heck, weren't there literal bugs discovered in LLVM during Rust development (was it noalias?) simply because those patterns weren't seen or tested much prior to Rust, despite being intended to work? It feels quite... optimistic to just change the constraints in one stage and assume they will work correctly for the later stages of a compiler with zero additional work. It makes sense if there's already active work to ensure this in the later stages, and I don't know if that's the case here or not, but if there isn't, then it feels risky.
> the optimisations are based on the underlying rules not on the borrow checker
Is there a link I can follow to these underlying rules so I can see what they are?
(Note that I am an author of that paper, and also that this is just a proposal of the rules and not yet adopted as normative.)
What you seem to be forgetting in this discussion is that unsafe code exists. The example above does not pass the borrow checker, but with a small amount of unsafe code (casting a reference to a pointer and back to erase the lifetime constraints) you can make it compile. But of course with unsafe code it is possible to write programs that have undefined behavior. The question is whether this specific program has undefined behavior, and the answer is no.
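One way to spell that trick for the get-or-insert sketch above (illustrative, not necessarily the exact code the comment had in mind):

    use std::collections::HashMap;

    // Go through a raw pointer and back to a reference to erase the lifetime.
    // The borrow checker now accepts it; arguing that it is sound (the two
    // mutable borrows never actually overlap) is entirely on the programmer.
    fn get_default(map: &mut HashMap<u32, String>, key: u32) -> &mut String {
        let map_ptr: *mut HashMap<u32, String> = &mut *map;        // reference -> pointer
        if let Some(value) = unsafe { (*map_ptr).get_mut(&key) } { // pointer -> reference
            return value;
        }
        map.insert(key, String::new());
        map.get_mut(&key).unwrap()
    }

    fn main() {
        let mut m = HashMap::new();
        get_default(&mut m, 1).push_str("hi");
        assert_eq!(m[&1], "hi");
    }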
Since it does not have undefined behavior, the rest of the compiler already has to preserve its semantics. So one could also tweak the borrow checker to accept this program.
TL;DR unsafe code exists and so you can't just say all programs not passing the borrow checker are UB.
Thanks for the link. Like you said, that's not normative, so it doesn't really dictate anything about what the compiler would currently do if you violated borrow checking, right?
> What you seem to be forgetting in this discussion is that unsafe code exists. (...) unsafe code exists and so you can't just say all programs not passing the borrow checker are UB.
Unsafe code does not turn off the borrow-checker though? So I don't see how its existence implies the opposite of what I wrote.
Moreover, my entire concern here is about violating assumptions in earlier stages of the compiler that later stages don't already see violated (and thus might be unprepared for). Unsafe is already supported in the language, so it doesn't fall in that category to begin with.
Imo if you run into the halting problem it's because you are trying to do too much. In particular I think what you actually want is to check soundness based on the "shape" of the code rather than reason about which variables can have which values and what that means for soundness.
Correct, I was reading a very interesting blog post [1] on how the Rust compiler changes the LLVM annotations, like emitting noalias for mutable references. This changes the generated machine code a lot. Disabling the borrow checker won't enable those LLVM flags.
Yes. An analog would be uninitialized memory. The compiler is free to make optimizations that assume that uninitialized memory holds every value and no value simultaneously (because it is undefined behavior to ever read it).
In the following example, z is dereferenced one time and assigned to both x and y, but if z and x are aliased, then this is an invalid optimization.
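A sketch of that kind of example (using the x/y/z names from the sentence above; the original code may have differed):

    // Because `x` and `y` are `&mut`, the compiler may assume `z` aliases
    // neither of them, load `*z` once, and store that value to both. If `x`
    // and `z` actually aliased, the second store would use a stale value, so
    // the optimization would be invalid.
    fn assign(x: &mut i32, y: &mut i32, z: &i32) {
        *x = *z;
        *y = *z;
    }

    fn main() {
        let (mut a, mut b, c) = (0, 0, 7);
        assign(&mut a, &mut b, &c);
        assert_eq!((a, b), (7, 7));
    }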
> Yes. An analog would be uninitialized memory. The compiler is free to make optimizations that assume that uninitialized memory holds every value and no value simultaneously (because it is undefined behavior to ever read it).
Even casting a MaybeUninit<i32>::uninit() to i32 is UB, even though every bit pattern in that memory space is a valid i32.
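For instance (a minimal sketch, using assume_init rather than a literal cast):

    use std::mem::MaybeUninit;

    fn main() {
        // UB: the bytes are uninitialized, so materializing an `i32` from them
        // is undefined behaviour, even though any *initialized* bit pattern
        // would have been a valid `i32`.
        let x: i32 = unsafe { MaybeUninit::<i32>::uninit().assume_init() };
        println!("{x}");
    }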
What's interesting is your code example is solved in Rust. By preventing a reference and a mutable reference from coexisting, all of a sudden the code becomes easier to reason about. No need for special attributes: https://www.lysator.liu.se/c/restrict.html#comparison-with-n...
Not really. Unsafe blocks don't change the semantics of Rust code or disable Rust's normal checks, so if you have something that doesn't compile due to a borrow checker error adding an unsafe block around that code will do precisely nothing to get around that error.
If you write correct Rust code it'll work; the borrowck is just that, a check. If the teacher doesn't check your homework where you wrote that 10 + 5 = 15, it's still correct. If you write incorrect code where you break Rust's borrowing rules, it'll have unbounded Undefined Behaviour; unlike actual Rust, where that'd be an error, this thing will just give you broken garbage, exactly like a C++ compiler.
Evidently millions of people want broken garbage, Herb Sutter even wrote a piece celebrating how many more C++ programmers and projects there were last year, churning out yet more broken garbage, it's a metaphor for 2025 I guess.
KDE is a great desktop environment, but it's also notorious for being a buggy and unpolished DE [1]. It's good your experience wasn't like that, but it's certainly not how the software is generally perceived.
[1]: Of course, different versions have different levels of stability. Also, some of these bugs and problems wouldn't be prevented by using an alternative language such as Rust.
Well FWIW, the original poster's anti-C++ statements aside, removing the borrow checker does nothing except allow you to write thread-unsafe (or race condition-unsafe) code. Therefore, the only change this really makes is allowing you to write slightly more ergonomic code that could well break somewhere at some point in time unexpectedly.
Nope. Anything which wouldn't pass the borrowck is actually nonsense. This fantasy that magically it will just lose thread safety or have race conditions is just that, a fantasy.
The optimiser knows that Rust's mutable references have no aliases, so it needn't safeguard mutation, but without borrow checking this optimisation is incorrect and arbitrary undefined behaviour results.
People hate C because it's hard, people hate C++ because it truly is rubbish. Rubbish that deserved to be tried but that we've now learned was a mistake and should move on from.
I’m sure some people could tiptoe through minefields daily for years, until they fail. Nobody is perfect at real or metaphorical minefields, and hubris is probably the only reason to scoff at people suggesting alternatives.
Of course. My sense is there are a lot fewer out-of-bounds accesses and use-after-frees. Maybe a world-class programmer can go several decades without writing a memory error in C/C++, but they will probably eventually falter, meanwhile the other 99.9% of programmers fail more often. Why would you decline a compiler’s help eliminating certain types of bugs almost entirely?
herb sutter and the c++ community as a whole have put a lot of energy into improving the language and reducing UB; this has been a primary focus of C++26. they are not encouraging people to “churn out more broken garbage”, they are encouraging people to write better code in the language they have spent years developing libraries and expertise in.
Yes, many or even most domains where C++ sees a large market share are domains with no other serious alternative. But this is an indictment of C++ and not praise. What it tells us is that when there are other viable options, C++ is rarely chosen.
The number of such domains has gone down over time, and will probably continue to do so.
The number of domains where low-level languages are required, and that includes C, C++, Rust, and Zig, has gone down over time and continues to do so. All of these languages are rarely chosen when there are viable alternatives (and I say "rarely" taking into account total number of lines of code, not necessarily number of projects). Nevertheless, there are still some very important domains where such languages are needed, and Rust's adoption rate is low enough to suggest serious problems with it, too. When language X offers significant advantages over language Y, its adoption compared to Y is usually quite fast (which is why most languages get close to their peak adoption relatively quickly, i.e. within about a decade).
If we ignore external factors like experience and ecosystem size, Rust is a better language than C++, but not better enough to justify faster adoption, which is exactly what we're seeing. It's certainly gained some sort of foothold, but as it's already quite old, it's doubtful it will ever be as popular as C++ is now, let alone in its heyday. To get there, Rust's market share will need to grow by about a factor of 10 compared to what it is now, and while that's possible, if it does that it will have been the first language to ever do so at such an advanced age.
There's always resistance to change. It's a constant, and as our industry itself ages it gets a bit worse. If you use libc++ did you know your sort didn't have O(n log n) worst case performance until part way through the Biden administration? A suitable sorting algorithm was invented back in 1997, those big-O bounds were finally mandated for C++ in 2011, but it still took until a few years ago to actually implement it for Clang.
Except, as you say, all those factors always exist, so we can compare things against each other. No language to date has grown its market share by a factor of ten at such an advanced age [1]. Despite all the hurdles, successful languages have succeeded faster. Of course, it's possible that Rust will somehow manage to grow a lot, yet significantly slower than all other languages, but there's no reason to expect that as the likely outcome. Yes, it certainly has significant adoption, but that adoption is significantly lower than that of all the languages that ended up where C++ is or higher.
[1]: In a competitive field, with selection pressure, the speed at which technologies spread is related to their relative advantage, and while slow growth is possible, it's rare because competitive alternatives tend to come up.
This sounds like you're just repeating the same claim again. It reminds me a little bit of https://xkcd.com/1122/
We get it, if you squint hard at the numbers you can imagine you're seeing a pattern, and if you're wrong, well, just squint harder and a new pattern emerges, it's foolproof.
Observing a pattern with a causal explanation - in an environment with selective pressure things spread at a rate proportional to their relative competitive advantage (or relative "fitness") - is nothing at all like retroactively finding arbitrary and unexplained correlations. It's more along the lines of "no candidate has won the US presidential election with an approval of under 30% a month before the election". Of course, even that could still happen, but the causal relationship is clear enough so even though a candidate with 30% in the polls a month before the election could win, you'd hardly say that's the safer bet.
You're basically just re-stating my point. You mistakenly believe the pattern you've seen is predictive and so you've invented an explanation for why that pattern reflects some underlying truth, and that's what pundits do for these presidential patterns too. You can already watch Harry Enten on TV explaining that out-of-cycle races could somehow be predictive for 2026. Are they? Not really but eh, there's 24 hours per day to fill and people would like some of it not to be about Trump causing havoc for no good reason.
Notice that your pattern offers zero examples and yet has multiple entirely arbitrary requirements, much like one of those "No President has been re-elected with double digit unemployment" predictions. Why double digits? It is arbitrary, and likewise for your "about a decade" prediction, your explanation doesn't somehow justify ten years rather than five or twenty.
> You mistakenly believe the pattern you've seen is predictive
Why mistakenly? I think you're confusing the possibility of breaking a causal trend with the likelihood of doing that. Something is predictive even if it doesn't have a 100% success rate. It just needs to have a higher chance than other predictions. I'm not claiming Rust has a zero chance of achieving C++'s (diminished) popularity, just that it has a less than 50% chance. Not that it can't happen, just that it's not looking like the best bet given available information.
> Notice that your pattern offers zero examples
The "pattern" includes all examples. Name one programming language in the history of software that's grown its market share by a factor of ten after the age of 10-13. Rust is now older than Java was when JDK 6 came out and almost the same age Python was when Python 3 came out (and Python is the most notable example of a late bloomer that we have). Its design began when Java was younger than Rust is now. Look at how Fortran, C, C++, and Go were doing at that age. What you need to explain isn't why it's possible for Rust to achieve the same popularity as C++, but why it is more likely than not that its trend will be different from that of any other programming language in history.
> Why double digits? It is arbitrary, and likewise for your "about a decade" prediction
The precise number is arbitrary, but the rule is that any technology (or anything in a field with selective pressure) spreads at a rate proportional to its competitive advantage. You can ignore the numbers altogether, but the general rule about the rate of adoption of a technology or any ability that offers a competitive advantage in a competitive environment remains. The rate of Rust's adoption is lower than that of Fortran, Cobol, C, C++, VB, Java, Python, Ruby, C#, PHP, and Go and is more-or-less similar to that of Ada. You don't need numbers, just comparisons. Are the causal theory and historical precedent 100% accurate for any future technology? Probably not, as we're talking statistics, but at this point, it is the bet that a particular technology will buck the trend that needs justification.
I certainly accept that the possibility of Rust achieving the same popularity that C++ has today exists, but I'm looking for the justification that that is the most likely outcome. Yes, some places are adopting Rust, but the number of those saying nah (among C++ shops) is higher than that of all programming languages that have ever become very popular. The point isn't that bucking a trend with a causal explanation is impossible. Of course it's possible. The question is whether it is more or less likely than not breaking the causal trend.
even when there are alternatives, sometimes it makes sense to use a library like Qt in its native language with its native documentation rather than a binding - if you can do so safely
Did I write that I hated somebody? I don't think I wrote anything of the sort. I can't say my thoughts about Bjarne for example rise to hatred, nobody should have humoured him in the 1980s, but we're not talking about what happened when rich idiots humoured The Donald or something as serious as that - nobody died, we just got a lot of software written in a crap programming language, I've had worse Thursdays.
And although of course things could have been better they could also have been worse. C++ drinks too much OO kool aid, but hey it introduced lots of people to generic programming which is good.
Correct me if I'm wrong, but I don't think you think that C++ programmers actually want to write "broken garbage", so when you say "millions of people want broken garbage" the implication is that a) they do write broken garbage, b) they're so stupid they don't even know that is what they are doing. I can't really read it any other way than in the same vein as an apartheid-era white South African statement starting "all blacks ...", i.e., an insult to a large class of people simply for their membership in that class. Maybe that's not your intent, but that's how it reads to me, sorry.
I can't help how you feel about it, but what I see is people who supposedly "don't want" something to happen and yet take little or no concrete action to prevent it. When it comes to their memory safety problem WG21 talks about how they want to address the problem but won't take appropriate steps. Years of conference talks about safety, and C++ 26 is going to... encourage tool vendors to diagnose some common mistakes. Safe C++ was rejected, and indeed Herb had WG21 write a new "standing rule" which imagines into existence principles for the language that in effect forbid any such change.
Think Republican Senators offering thoughts and prayers after a school shooting, rather than Apartheid era white South Africans.
Are you seriously comparing discrimination based on factors no one can control to a group literally defined by a choice they made? And you think that's a good faith argument?
Considering how many people will defend C++ compilers bending over backwards to exploit some accidental undefined behaviour with "but it's fast though" then yeah, that's not an inaccurate assessment.
Rust isn't a one true language, no one necessarily needs to learn it, and I'm sure your preferred language is excellent. C and C++ are critical languages with legitimate advantages and use cases. Don't learn Rust if you aren't interested.
But Rust, its community, and language flame wars are separate concerns. When I talk shop with other Rust people, we talk about our projects, not about hating C++.
So don't use it. Rust is not intended to be used by everyone. If you are happy using your current set of tools and find yourself productive with them then by all means be happy with it.
You’re expressing the same attitude here, just in reverse. Some users not thinking highly of C++ doesn’t make Rust a worse or less interesting language.
>Haskell (and OCaml etc) give you both straightjackets..
Haskell's thing with purity and IO does not feel like that. In fact Haskell does it right (IO is reflected in the type). And Rust messed it up ("safety" does not show up in types).
You want a global mutable thing in Haskell? Just use something like an `IORef` and that is it. It does not involve any complicated type magic. But mutations to it will only happen in IO, and thus will be reflected in types. That is how you do it. That is how it does not feel like a straitjacket.
Haskell as a language is tiny. But Rust is really huge, with endless behaviors and expectations to keep in mind, for some idea of safety that only matters for a small fraction of programs.
And that is why I find that comment very funny. Always using Rust is like always wearing something that constrains you greatly for some idea of "safety" even when it does not really matter. That is insane.
It does in rust. An `unsafe fn()` is a different type than a (implicitly safe by the lack of keyword) `fn()`.
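A minimal illustration (function names made up):

    unsafe fn danger() {}
    fn safe() {}

    fn main() {
        let f: fn() = safe;
        // let g: fn() = danger; // error: expected `fn()`, found `unsafe fn()`
        let g: unsafe fn() = danger;
        f();
        unsafe { g() }; // calling through an `unsafe fn()` still needs `unsafe`
    }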
The difference is that unsafe fn's can be encapsulated in safe wrappers, whereas IO functions sort of fundamentally can't be encapsulated in non-IO wrappers. This makes the IO tagged type signatures viral throughout your program (and as a result annoying), while the safety tagged type signatures are things you only have to think about if you're touching the non-encapsulated unsafe code yourself.
>The difference is that unsafe fn's can be encapsulated in safe wrappers
This is the koolaid I am not willing to drink.
If you can add safety very carefully on top of unsafe stuff (without any help from compiler), why not just use `c` and add safety by just being very careful?
> IO tagged type signatures viral throughout your program (and as a result annoying)..
Well, that is what good type systems do. Carry information about the types "virally".
Anything short is a flawed system.
> If you can add safety very carefully on top of unsafe stuff (without any help from compiler), why not just use `c` and add safety by just being very careful?
Y'know people complain a lot about Rust zealots and how they come into discussions and irrationally talk about how Rust's safety is our lord and savior and can eliminate all bugs or whatever...
But your take (and every one like it) is one of the weakest I've heard as a retort.
At the end of the day "adding safety very carefully atop of unsafe stuff" is the entire point of abstractions in software. We're just flipping bits at the end of the day. Abstractions must do unsafe things in order to expose safe wrappers. In fact that's literally the whole point of abstractions in the first place: They allow you to solve one problem at a time, so you can ignore details when solving higher level problems.
"Hiding a raw pointer behind safe array-like semantics" is the whole point of a vector, for instance. You literally can't implement one without being able to do unsafe pointer dereferencing somewhere. What would satisfy your requirement for not doing unsafe stuff in the implementation? Even if you built a vector into the compiler, it's still ultimately emitting "unsafe" code in order to implement the safe boundary.
If you want user-defined types that expose things with safe interfaces, they have to be implemented somehow.
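A minimal sketch of that shape (a toy type, nothing like the real Vec, but with the same structure: the unsafe access is confined to one audited spot behind a safe API):

    struct Bytes {
        data: Box<[u8]>,
    }

    impl Bytes {
        fn new(data: Box<[u8]>) -> Self {
            Bytes { data }
        }

        // Safe interface: callers cannot trigger an out-of-bounds read.
        fn get(&self, i: usize) -> Option<u8> {
            if i < self.data.len() {
                // SAFETY: `i` was just checked against `len`.
                Some(unsafe { *self.data.get_unchecked(i) })
            } else {
                None
            }
        }
    }

    fn main() {
        let b = Bytes::new(vec![1, 2, 3].into_boxed_slice());
        assert_eq!(b.get(1), Some(2));
        assert_eq!(b.get(9), None);
    }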
As for why this is qualitatively different from "why not just use c", it's because unsafety is something you have to opt into in rust, and isn't something you can just do by accident. I've been developing in rust every day at $dayjob for ~2 years now and I've never needed to type the unsafe keyword outside of a toy project I made that FFI'd to GTK APIs. I've never "accidentally" done something unsafe (using Rust's definition of it.)
It's an enormous difference to something like C, where simply copying a string is so rife with danger you have a dozen different strcpy-like functions each of which have their own footguns and have caused countless overflow bugs: https://man.archlinux.org/man/string_copying.7.en
1. In `c` one has to remember a few, fairly intuitive things, and enforce them without fail.
2. In Rust, one has to learn and remember an ever-increasing number of things and constantly deal with non-intuitive borrow-checker shenanigans that can hit your project at any point of development, forcing you to re-architect your project, despite doing everything to ensure "safety". But the borrow-checker can't be convinced.
I have had enough of 2. I might use Rust if I want to build a critical system with careless programmers, but who would do such a thing? For open source dependencies, one will have to go by community vouching or audit them themselves. Can't count on something being "Safe" just because it is in Rust, right? So what is the point? I just don't see it. I mean, if you look a bit deeper, it just does not make any sense.
What is the point? If I share something, someone is going to come along and say: that is not how you are "supposed" to do it in Rust.
And that is exactly my point. You need to learn a zillion rust specific patterns for doing every little thing to work around the borrow-checker and would be kind of unable to come up with your own designs with trade-offs that you choose.
And that becomes very mechanical and hence boring. I get that it would be safe.
So yes, if I am doing brain surgery, I would use tools that prevent me from making quick arbitrary movements. But for everything else a glove would do.
To learn something is generally the point. Either me, or you. I’ve been developing in rust for half a decade now and genuinely do not know what you were talking about here. I haven’t experienced it.
So either there are pain points that I’m not familiar with (which I’m totally open to), or you might be mistaken about how rust works. Either way, one or both of us might learn something today.
All lessons are not equally valuable. Seemingly arbitrary reasoning for some borrow checker behavior is not interesting enough for me to learn.
In the past, I would come across something and would look up the reasoning for it, which often would be "What if another thread does blah blah blah", but my program is single threaded.
Borrow checker issues do not require multiple threads or async execution to be realized. For example, a common error in C++ is to take a reference/iterator into a vector, then append/push onto the end of that vector, then access the original reference. If that causes reallocation, the reference is no longer valid and this is UB. Rust catches this because append requires a mutable reference, and the borrow checker ensures there are no other outstanding references (read only or mutable) before taking the &mut self reference for appending.
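Roughly the Rust equivalent of that C++ bug, and what the compiler does with it:

    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0]; // shared borrow of `v`
        v.push(4);         // error[E0502]: cannot borrow `v` as mutable
                           // because it is also borrowed as immutable
        println!("{first}");
    }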
This is generally my experience with Rust: write something the way I would in C++, get frustrated at borrow checker errors, then look into it and learn my C++ code has hidden bugs all these years, and appreciate the rust compiler’s complaints.
> If you can add safety very carefully on top of unsafe stuff (without any help from compiler), why not just use `c` and add safety by just being very careful?
There is help from the compiler - the compiler lets the safe code expose an interface that creates strict requirements about how it is called and interacted with. The C language isn't expressive enough to define the same safe interface and have the compiler check it.
You can absolutely write the unsafe part in C. Rust is as good at encapsulating C into a safe rust interface as it is at encapsulating unsafe-rust into a safe rust interface. Just about every non-embedded rust program depends on C code encapsulated in this manner.
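A small sketch of that kind of encapsulation, using the C standard library's strlen as the foreign function (on edition 2024 the extern block must be written as an `unsafe extern "C"` block):

    use std::ffi::CString;
    use std::os::raw::c_char;

    extern "C" {
        // size_t strlen(const char *s);
        fn strlen(s: *const c_char) -> usize;
    }

    // Safe wrapper: `CString` guarantees a valid, NUL-terminated pointer, so
    // every requirement of the C function is met before the unsafe call.
    fn c_string_len(s: &str) -> usize {
        let c = CString::new(s).expect("no interior NUL bytes");
        unsafe { strlen(c.as_ptr()) }
    }

    fn main() {
        assert_eq!(c_string_len("hello"), 5);
    }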
> Well, that is what good type systems do. Carry information about the types "virally". Anything short is a flawed system.
Good type systems describe the interface, not every implementation detail. Virality is the consequence of implementation details showing up in the interface.
Good type systems minimize the amount of work needed to use them.
IO is arguably part of the interface, but without further description of what that IO is, it's a pretty useless detail of the interface. Meanwhile exposing a viral detail like this as part of the type system results in lots of work. It's a tradeoff that I think is generally not worth it.
>the compiler lets the safe code expose an interface that creates strict requirements about how it is called and interacted with..
The compiler does not and cannot check if these strict requirements are enough for the intended "safety". Right? It is the judgement of the programmer.
And what is stopping a `c` function with such requirements from being wrapped in some code that actually checks these requirements are met? The only thing that the rust compiler enables is to include a feature to mark a specific function as unsafe.
In both cases there is zero help from the compiler to actually verify that the checks that are done on top are sufficient.
And if you want to mark a `c` function as unsafe, just follow some naming convention...
>but without further description of what IO it's a pretty useless detail of the interface..
Take a look at effect-system libraries which can actually encode "What IO" at the type level and make it available everywhere. It is a pretty basic and widely used thing.
> The compiler does not and cannot check if these strict requirements are enough for the intended "safety". Right? It is the judgement of the programmer.
Yes*. It's up to the programmer to check that the safe abstraction they create around unsafe code guarantees all the requirements the unsafe code needs are upheld. The point is that that's done once, and then all the safe code using that safe abstraction can't possibly fail to meet those requirements - or in other words any safety-related bug is always in the relatively small amount of code that uses unsafe and builds those safe abstractions.
> And what is stopping a `c` function with such requirements from being wrapped in some code that [doesn't] actually check these requirements are met?
Assuming my edit to your comment is correct - nothing. It's merely the case that any such bug would be in the small amount of clearly labelled (with the unsafe keyword) binding code instead of "anywhere".
> The only thing that the rust compiler enables is to include a feature to mark a specific function as unsafe.
No, the rust compiler has a lot more features than just a way to mark specific functions as unsafe. The borrow checker, with its associated lifetime constraints and its enforcement that variables which have been moved out of (and aren't `Copy`) aren't used, is one obvious example.
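For instance (a trivial sketch):

    fn main() {
        let s = String::from("hi");
        let t = s;          // `String` is not `Copy`, so `s` is moved here
        // println!("{s}"); // error[E0382]: borrow of moved value: `s`
        println!("{t}");
    }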
Another example is marking how data can be used across threads with traits like `Send` and `Sync`. Another - when compared to C anyways - is simply having a visibility system so that you can create structs with fields that aren't directly accessible via other code (so you can control every single function that directly accesses them and maintain invariants in those functions).
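For example (a sketch; swapping `Rc` for `Arc` is what makes it compile):

    use std::rc::Rc;
    use std::sync::Arc;
    use std::thread;

    fn main() {
        let not_send = Rc::new(1);
        // Rejected at compile time: `Rc<i32>` is not `Send`, so the closure
        // cannot be moved to another thread.
        // thread::spawn(move || println!("{not_send}"));
        drop(not_send);

        // `Arc<i32>` is `Send + Sync`, so the equivalent code compiles.
        let shared = Arc::new(1);
        thread::spawn(move || println!("{shared}")).join().unwrap();
    }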
> In both cases there is zero help from the compiler to actually verify that the checks that are done on top are sufficient.
Yes and no, "unsafe" in rust is synonymous with "the compiler isn't able to verify this for you". Typically rust docs do a pretty good job of enumerating exactly what the programmer must verify. There are tools that try to help the programmer do this, from simple things like being able to enable a lint that checks every time you wrote unsafe you left a comment saying why it's ok, and that you actually wrote something the compiler couldn't verify in the first place. To complex things like having a (very slow) interpreter that carefully checks that in at least one specific execution every required invariant is maintained (with the exception of some FFI stuff that it fails on as it is unable to see across language boundaries sufficiently well).
The rust ecosystem is very interested in tools that make it easier to write correct unsafe code. It's just rather fundamentally a hard problem.
* Technically there are very experimental proof systems that can check some cases these days. But I wouldn't say they are ready for prime time use yet.
> You want a global mutable thing in Haskell? Just use something like an `IORef` and that is it. It does not involve any complicated type magic. But mutations to it will only happen in IO, and thus will be reflected in types. That is how you do it. That is how it does not feel like a straitjacket.
Haskell supports linear types now. They are pretty close in spirit to Rust's borrowing rules.
> Haskell as a language is tiny.
Not at all. Though much of what Haskell does can be hand-waved as sugar on top of a smaller core.
I think that is because when you start learning Haskell, you are not typically told about state monads, `IORef`s and the like that enable safe mutability.
It might be because monads involve a tad more advanced type machinery, but IORefs are straightforward; still, one typically does not come across them until a bit too late into their Haskell journey.
How are we still having the same trade-off discussion argued so black and white when reality has shown that both options are preferred by different groups?
Rust says that all incorrect programs (in terms of memory safety) are invalid but the trade is that some correct programs will also be marked as invalid because the compiler can't prove them correct.
C++ says that all correct programs are valid but the trade is that some incorrect programs are also valid.
You see the same trade being made with various type systems and people still debate about it but ultimately accept that they're both valid and not garbage.
>C++ says that all correct programs are valid but the trade is that some incorrect programs are also valid.
C++ does not say this; in fact no statically typed programming language says this: they all reject programs that could in principle be correct but get rejected because of some property of the type system.
You are trying to present a false dichotomy that simply does not exist and ignoring the many nuances and trade-offs that exist among these (and other) languages.
Nope. C++ really does deliberately require that compilers will in some cases emit a program which does... something even though what you wrote isn't a C++ program.
Yes, that's very stupid, but they did it with eyes open, it's not a mistake. In the C++ ISO document the words you're looking for are roughly (exact phrasing varies from one clause to another) Ill-formed No Diagnostic Required (abbreviated as IFNDR).
What this means is that these programs are Ill-formed (not C++ programs) but they compile anyway (No diagnostic is required - a diagnostic would be an error or warning).
Why do this? Well because of Rice's Theorem. They want a lot of tricky semantic requirements for their language but Rice showed (back in like 1950) that all the non-trivial semantic requirements are Undecidable. So it's impossible for the compiler to correctly diagnose these for all cases. Now, you could (and Rust does) choose to say if we're not sure we'll reject the program. But C++ chose the exact opposite path.
No one disputes that C++ accepts some invalid programs, I never claimed otherwise. I said that C++'s type system will reject some programs that are in principle correct, as opposed to what Spivak originally claimed about C++ accepting all correct programs as valid.
The fact that some people can only think in terms of all or nothing is really saying a lot about the quality of discourse on this topic. There is a huge middle ground here and difficult trade-offs that C++ and Rust make.
I knew I should have also put the (in terms of memory safety) on the C++ paragraph but I held off because I thought it would be obvious both talking about the borrow checker and in contrast to Rust with the borrow checker.
Yes, when it comes to types C++ will reject theoretically sound programs that don't type correctly. And different type system "strengths" tune themselves to how many correct programs they're willing to reject in order to accept fewer incorrect ones.
I don't mean to make it a dichotomy at all, every "checker", linter, static analysis tool—they all seek to invalidate some correct programs which hopefully isn't too much of a burden to the programmer but in trade invalidate a much much larger set of incorrect programs. So full agreement that there's a lot of nuance as well as a lot of opinions when it goes too far or not far enough.
For all its faults, and it has many (though Rust shares most of them), few programming languages have yielded more value than C++. Maybe only C and Java. Calling C++ software "garbage" is a bonkers exaggeration and a wildly distorted view of the state of software.