I don't feel you're quite following what I'm saying unfortunately. In your specific example: couldn't the optimizer just optimize out all the mutations you've written, under the assumption that such a program would not have passed borrow-checking and thus isn't a case it needs to handle? Wouldn't this make it so that if you disabled borrow checking, you would get incorrect codegen vs. what you intended? This seems like an entirely legal and sane optimization; I'm not sure how you're assuming something like this is outside the realm of possibility.
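(To make the worry concrete, here's my own construction, not your example: rustc already tells LLVM that distinct `&mut` parameters don't alias, and the optimizer is allowed to fold reads based on that assumption.)

```rust
// rustc marks `&mut` parameters as `noalias` for LLVM, so the optimizer
// may assume `x` and `y` never point to the same place...
fn f(x: &mut i32, y: &mut i32) -> i32 {
    *x = 1;
    *y = 2;
    *x // ...and fold this read to the constant 1
}

// If borrow checking were disabled, a call like `f(&mut a, &mut a)` would
// mean 2 at the source level, yet the optimized code could return 1.
```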
You seem to be operating on some (generous) underlying assumptions about what a borrow-checker-violating Rust program really means and what optimizations the compiler has the liberty to make even when borrow-checker assumptions are violated. But are you sure those assumptions are well-founded? What are they formally based on?
> I don't feel you're quite following what I'm saying unfortunately.
Disagreeing with your plainly incorrect assertion is not "not following" what you're saying.
> In your specific example: couldn't the optimizer just optimize out all the mutations you've written, under the assumption that such a program would not have passed borrow-checking and thus isn't a case it needs to handle?
No. The borrow checker ensures that specific rules are followed; it is not the rules themselves, and the optimisations are based on the underlying rules, not on the borrow checker.
The program above abides by the underlying rules; it's literally the sort of example used by people working on the next-generation borrow checker, but the current borrow checker is not able to understand that.
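To make that concrete, here's a sketch of that sort of example (not necessarily the exact program upthread): the well-known get_default pattern, "Problem Case #3" from the NLL RFC. Every execution abides by the aliasing rules, and the in-progress Polonius analysis accepts it, but today's NLL borrow checker rejects it:

```rust
use std::collections::HashMap;

fn get_default(map: &mut HashMap<usize, String>, key: usize) -> &mut String {
    if let Some(value) = map.get_mut(&key) {
        return value; // this borrow must live as long as the returned reference...
    }
    // ...so NLL considers `map` still mutably borrowed on this path and
    // rejects the function, even though the borrow is dead here and the
    // aliasing rules are never violated at runtime.
    map.insert(key, String::new());
    map.get_mut(&key).unwrap()
}
```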
> This seems like an entirely legal and sane optimization
It's neither of those things.
> You seem to be operating on some (generous) underlying assumptions about what a borrow-checker-violating Rust program really means and what optimizations the compiler has the liberty to make even when borrow-checker assumptions are violated. But are you sure those assumptions are well-founded? What are they formally based on?
They are not assumptions, or generous. They are an understanding of the gap between the capabilities of the NLL borrow checker and "Behavior considered undefined".
They are what anyone working on the borrow checker sees as limitations of the borrow checker (some fixable, others intrinsic).
> Disagreeing with your plainly incorrect assertion is not "not following" what you're saying.
I'm sorry, but that particular comment wasn't disagreeing so much as missing my point entirely. It gave an example that would still have suffered from the same problem I was describing if the optimizer relied on the same borrowing assumptions (hence my subsequent comment clarifying this), and it diverted the discussion toward explaining the basics of incompleteness and the halting problem to me. Neither of those indicated that my point had been followed, and both indicated a misunderstanding of where my (mis)understanding actually was.
But your new comment tracks it now (thanks).
>> This seems like an entirely legal and sane optimization
> It's neither of those things.
Thanks for clarifying.
> They are what anyone working on the borrow checker sees as limitations of the borrow checker (some fixable, others intrinsic).
I understand this, but it (again) doesn't contradict my point. Just because something is a known limitation of an earlier stage like the borrow checker, that doesn't mean relaxing it would never require changes to later stages of the compiler, which never had to consider the other possibilities before. Just as limitations of the type checker don't imply that relaxing them would automatically cause the backend to generate correct code. It depends on how the compiler is written, what the underlying rules and assumptions are for the later stages, and what's actually tested in practice; hence this entire question.
Heck, weren't there literal bugs discovered in LLVM during Rust's development (was it noalias?) simply because those patterns weren't exercised or tested much before Rust, despite being intended to work? It feels quite... optimistic to just change the constraints in one stage and assume the later stages of the compiler will handle it correctly with zero additional work. It makes sense if there's already active work to ensure this in the later stages, and I don't know whether that's the case here, but if there isn't, then it feels risky.
> the optimisations are based on the underlying rules not on the borrow checker
Is there a link I can follow to these underlying rules so I can see what they are?
(Note that I am an author of that paper, and also that this is just a proposal of the rules and not yet adopted as normative.)
What you seem to be forgetting in this discussion is that unsafe code exists. The example above does not pass the borrow checker, but with a small amount of unsafe code (casting a reference to a pointer and back to erase the lifetime constraints) you can make it compile. But of course with unsafe code it is possible to write programs that have undefined behavior. The question is whether this specific program has undefined behavior, and the answer is no.
Since it does not have undefined behavior, the rest of the compiler already has to preserve its semantics. So one could also tweak the borrow checker to accept this program.
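Concretely, assuming a get_default-shaped example like the sketch upthread, the unsafe version looks something like this:

```rust
use std::collections::HashMap;

fn get_default(map: &mut HashMap<usize, String>, key: usize) -> &mut String {
    if let Some(value) = map.get_mut(&key) {
        // Round-trip through a raw pointer to erase the lifetime, so the
        // borrow checker no longer ties this borrow to `map` for the
        // whole function body.
        let p: *mut String = value;
        // SAFETY: the pointer refers to a live String, and the signature
        // still guarantees the caller exclusive access to `map`.
        return unsafe { &mut *p };
    }
    map.insert(key, String::new());
    map.get_mut(&key).unwrap()
}
```

This compiles, and since no two conflicting borrows are ever live at the same time, it has defined behavior (at least under the proposed aliasing models).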
TL;DR unsafe code exists and so you can't just say all programs not passing the borrow checker are UB.
Thanks for the link. Like you said, that's not normative, so it doesn't really dictate anything about what the compiler would currently do if you violated borrow checking, right?
> What you seem to be forgetting in this discussion is that unsafe code exists. (...) unsafe code exists and so you can't just say all programs not passing the borrow checker are UB.
Unsafe code does not turn off the borrow checker, though? So I don't see how its existence implies the opposite of what I wrote.
Moreover, my entire concern here is about violating assumptions in earlier stages of the compiler that later stages don't already see violated (and thus might be unprepared for). Unsafe is already supported in the language, so it doesn't fall into that category to begin with.