Hacker News | uecker's comments

I split files to enforce encapsulation, defining interfaces in headers based on incomplete structure types. So it helps me keep module boundaries conceptually separated. Super fast compilation is another benefit.
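The pattern described can be sketched roughly like this (a hypothetical `point` module; in a real project the two halves would live in point.h and point.c):

```c
#include <stdlib.h>

/* Interface (would live in point.h): clients see only an incomplete
   type and a few functions -- the representation stays hidden. */
struct point;
struct point *point_create(int x, int y);
int point_x(const struct point *p);
void point_destroy(struct point *p);

/* Implementation (would live in point.c): only this translation
   unit knows the layout, which enforces encapsulation. */
struct point { int x, y; };

struct point *point_create(int x, int y)
{
    struct point *p = malloc(sizeof *p);
    if (p) { p->x = x; p->y = y; }
    return p;
}

int point_x(const struct point *p) { return p->x; }

void point_destroy(struct point *p) { free(p); }
```

Because clients only ever hold a `struct point *`, changing the representation never requires recompiling them, which is also where the fast incremental builds come from.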

My personal goal was to replace all of Microsoft in my life. I mostly achieved this 30 years ago with the help of a C program. Now I only have to use Microsoft products when other people I collaborate with insist on using them. So if Rust, with the help of AI, kills Microsoft via this project, that would make me very happy, and I would certainly look at this language much more favorably.

This is basically what many functional programming languages do. It always came with plausible-sounding claims that this allows so much better optimization that it would soon surpass imperative programs in performance, but this never materialized (and still has not - even though Rust fans have now adopted the claim, it still isn't quite true). Explicit control over memory layout also remains more important.

Gah, can't believe I forgot about functional programming languages here :(

> even though Rust fans now adopted this claim

Did they? Rust's references seem pretty pointer-like to me on the scale of "has pointers" to "pointers have been entirely removed from the language".

(Obviously Rust has actual pointers as well, but since usefully using them requires unsafe I assume they're out of scope here)
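A minimal illustration of the scale the parent describes (hypothetical function): references are safe and borrow-checked, while raw pointers exist but need `unsafe` to be dereferenced:

```rust
// Reads the same value through a reference and through a raw pointer.
fn read_both(x: &u32) -> (u32, u32) {
    // Reference: safe to create and to dereference, borrow-checked.
    let r: &u32 = x;

    // Raw pointer: creating one is safe...
    let p: *const u32 = x;

    // ...but dereferencing it requires an unsafe block.
    (*r, unsafe { *p })
}
```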


What I meant is that Rust has stricter aliasing rules which make some optimization possible without extra annotations, but this is balanced out by many other issues.

Sure, but I think the presence/absence of aliasing is different from what GP was wondering/asking about, which was the removal of pointers from the programmer-facing model.

Is it? You just add "restrict" where needed?

https://godbolt.org/z/jva4shbjs
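For readers who don't follow the godbolt link, the idea is roughly this (hypothetical function in the same spirit, not necessarily the linked code):

```c
#include <stddef.h>

/* With restrict, the compiler may assume dst and src do not overlap,
   which is what typically lets it vectorize this loop. */
void add_arrays(size_t n, float *restrict dst, const float *restrict src)
{
    for (size_t i = 0; i < n; i++)
        dst[i] += src[i];
}
```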


> Is it? You just add "restrict" where needed?

Yes. That is the main solution and it is not a good one.

1- `restrict` needs to be used carefully. Putting it everywhere in a large codebase can lead to pretty tricky bugs if aliasing does occur under the hood.

2- `restrict` is not an official keyword in C++. C++ has always refused to standardize it because it plays terribly with almost any object model.
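To illustrate point 1 with a sketch (hypothetical function): the danger is that `restrict` is a promise the compiler cannot check, so an aliased call compiles cleanly but is undefined behavior:

```c
#include <stddef.h>

/* The restrict qualifiers promise that dst and src never overlap;
   the optimizer is free to reorder loads and stores based on that. */
void scale_into(size_t n, double *restrict dst, const double *restrict src)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = 2.0 * src[i];
}

/* A caller that passes overlapping regions, e.g.
 *     scale_into(n - 1, buf + 1, buf);
 * violates that promise: undefined behavior, and it may even appear
 * to work until a compiler upgrade or a new optimization level. */
```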


Regarding "restrict", I don't think one puts it everywhere; using it just for certain numerical loops which otherwise would not be vectorized should be sufficient. FORTRAN seems even more dangerous to me. IMHO a better solution would be explicit notation for vectorized operations. Hopefully we will get this in C. Otherwise, I am very happy with C for numerics, especially with variably modified types.

For C++, yes, I agree.


There is no programming language better than C ;-) Just people not yet experienced enough to have learned this. (Just trolling you back)

50 years of widespread C usage has shown that just trying to write error-free C doesn't work. But surprisingly, some people still believe it's possible.

> 50 years of widespread C usage has shown that just trying to write error-free C doesn't work.

Millions upon millions of lines of C code, over decades, have controlled (and still control) things around you that could kill you or fail similarly catastrophically. Cars, microwaves, industrial machinery, munitions, aircraft systems ... with so few errors attributable to C that I can only think of one prominent example.

So sure, you can get bugs written in C. In practice, the development process is more important to fault-reduction than the language chosen. And yes, I speak from a place of experience, having spent considerable parts of my career in embedded systems.


Writing without errors using other languages also doesn't work. And if you go towards formal verification (which also does not completely avoid errors), C has good tools.

By using a better language you avoid the errors typical of C, which usually require debugging. Logic errors may still happen, but they are easy to identify without even running a debugger.

From your comments I get that you drank the Kool-Aid, but I see no argument.

I do not think it is weird. Every C bug was taken as clear evidence that we need to abandon C and switch to Rust. So the fact that there are also such bugs in Rust is - while obvious - also important to highlight. It is not weird hatred against Rust, but hatred against bullshit. And considering that most of the code is C, your 150 C vulnerabilities are a meaningless number, yet you still continue with this nonsense.

One C bug was not taken as clear evidence that we need to abandon C and switch to Rust. Hundreds of thousands of very similar bugs, over decades, in common code patterns were. I’ve never understood how other C or C++ developers could seriously question whether Rust solves any safety problems at all. Maybe the tradeoffs aren’t worth it for a particular use case, but how could you find it unimaginable that even just enforced bounds checks would catch lots of bugs that (normal, non-verified) C would miss? Do you doubt that Python code corrupts memory a lot less because you saw a CPython CVE once?

What language do you think Graydon Hoare was spending most of his time writing when he started working on Rust as a side project? Hint: it sure wasn't Java. Rust is not the product of some developer who has only used 2 scripting languages and had to read the definition of stack-smashing off of Wikipedia, showing those C developers how to live in the future. It's not old enough for many of the developers working on it to have only ever used Rust. It's mostly C and C++ developers trying to build a new option for solving their same problems.


> It’s mostly C and C++ developers trying to build a new option for solving their same problems.

I've observed that a lot of the folks I used to meet at ruby conferences have moved to Rust. No idea what led to this, but maybe it's just folks that were generally curious about new programming languages that moved to ruby when it became better known and that the same interest led to adopting Rust.


I worked on a Ruby codebase that moved to Rust - I think that part is mostly cargo-culting cool things in the news to be perfectly honest. There’s type safety advantages, but if Ruby’s performance envelope was even conceivably acceptable for your use-case there are likely better options. I strongly suspect a lot of the friction between Rust and only-C/C++ developers is the product of a bunch of people coming at Rust from higher level languages parroting lines about safety, and then looking at you blankly if you ask how it handles nested pointer indirection.

But I don’t think that applies to the people actually driving the language forward, just those talking a big game on HN/Reddit.


Given the size and age of the C ecosystem, the number of bugs is not really a valid argument. We will see increasing numbers with Rust as Rust is increasingly used. I also do not question that Rust solves some problems. It just solves them rather badly and at a high cost, while bringing new problems.

I looked at Firefox code a decade ago; it was a complete, complex nightmare mix of different languages. I can see that this motivated starting something new, but it was not a clean C code base (and not even C).


What number of CVEs is Rust kernel code allowed to have before we have good evidence it’s a categorical failure? Do you turn off KASLR for your Linux machines because there exist CVEs it doesn’t protect against?

As long as the kernel is being developed, there will be CVEs - even with Rust. So at what point is the number so high that we should drop Rust and move to formal verification? And even then, there will be CVEs... This whole argument is nonsense.

But I also do not agree that memory safety is of much higher importance than other issues. Memory safety is highly critical if you have a monopolistic walled-garden spyware ecosystem - such as Android. Not that I do not want memory safety, but the people I know who got hacked did not get hacked because of memory safety issues, but because of weak passwords or unpatched software. And at least the latter problem gets worse with Rust...


Your priorities do not match that of most kernel developers or most operators of network-connected Linux systems (even if we ignore Android). So I don’t think your problem is with Rust at all, you’ll need to fork Linux if you want the project to stop putting huge amounts of effort into memory safety (as it has for decades).

You are right, I do not have a problem with Rust as a language nor with the kernel improving memory safety. My issue is solely with exaggerated claims and aggressive marketing of Rust.

(And I have been operating network-connected Linux devices for 30 years myself. Memory safety is not the known issue; at the moment I worry more about limited security updates due to Rust.)


The number of memory related bugs is absolutely a valid issue with C when the same bugs are impossible in Rust. The C memory model is a disaster when every computer is connected to the Internet.

You are saying the Rust bug in the kernel was impossible? How did it happen then? Come on, guys.

> Every C bug was taken as clear evidence that we need to abandon C and switch to Rust.

I think more charitably it's every "simple" C bug that tends to provoke that reaction. Buffer overflows, use-after-frees, things for which mechanically-enforceable solutions have existed and been widespread for a while. I think more exotic bugs tend to produce more interesting discussions since the techniques for avoiding those bugs tend to be similarly exotic.

> So the fact that there are also such bugs in Rust

Similarly, I think you need to be careful about what exactly "such bugs" encompasses. This bug wasn't one of the above "simple" bugs IMHO, so I would guess an equivalent bug in C code would at least avoid the worst of the more strident calls you so dislike. Hard to say for sure, though, given our unfortunate lack of a time machine.


I agree with you that this is more nuanced and that I oversimplified this a bit in my comment.

Okay, fair: since the majority of the codebase is C, 150 vulnerabilities is probably negligible in comparison, since we're talking about ratios. But if we're to be THAT nuanced, then we also need to consider that the C code has been iterated upon for decades, so I think the point is moot.

The claim has never, ever, been that Rust is bug free. The objective was to help reduce bugs, which is an outcome I've seen first hand in projects I work on. You still seem to speak in aggro terms, so it still feels like an emotional response to Rust.


What features do you like the most in Rust? Are pattern matching and enums some of them?

Ah, credibility attacks. Nice.

Safe Rust eliminates some of the more common memory bugs in C. The bug under discussion was written in unsafe Rust—but even that doesn't negate the huge advantages Rust has over C. Even unsafe Rust, for instance, has far fewer UB gotchas than C. And with Rust, you can isolate the tricky bits in 'unsafe' blocks and write higher-level logic in safe Rust, giving your code an extra layer of protection. C is 100% unsafe—"unsafe at any speed", as I like to say.

IMHO, "C is 100% unsafe" is a misleading way to look at it and the kind of exaggeration I criticize. In C, only specific language features are unsafe, not all code, and you can screen for these features and also isolate critical code in helper functions. Saying these features could appear everywhere is no different from "unsafe" possibly appearing everywhere in Rust. I agree that "unsafe" is easier to find as a keyword, but I do not think this is a fundamental advantage, especially in projects where you have a lot of such "unsafe" blocks.
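A sketch of what "isolate critical code in helper functions" can mean in C (hypothetical helper): route all raw indexing through one small, reviewable function:

```c
#include <stddef.h>
#include <stdbool.h>

/* The only place in the module that performs array indexing;
   everything else goes through this bounds-checked accessor. */
static bool get_at(const int *buf, size_t len, size_t i, int *out)
{
    if (buf == NULL || i >= len)
        return false;      /* caller must handle the failure */
    *out = buf[i];
    return true;
}
```

Screening a codebase then reduces to checking that no other function indexes the buffer directly, which is a reviewable property even without a keyword like `unsafe`.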

> Saying these features could appear everywhere is no different from "unsafe" possibly appearing everywhere in Rust.

That's not true in practice: Unsafe code is clearly delineated and can be 100% correctly identified. In C, usage of dangerous features can occur at any point and is much harder to clearly separate.


First, if "unsafe" worked so well 100% of the time, why did we have this bug (and many others)? So this is already an obviously wrong statement.

Then yes, you can use dangerous features in C at any time, but obviously you can also use "unsafe" at any time. The only difference is that "unsafe" is easier to recognize. But how much this is worth is unclear. First, if you do not invalidly reduce the discussion to memory safety alone, you need to review all the other code anyway! But even then, it is not true that only the code marked "unsafe" is relevant. This is a major myth. The "unsafe" code can cause UB outside "unsafe", and logic bugs outside "unsafe" can cause bugs inside "unsafe". The two do not perfectly decouple, even if Rust fans repeat this nonsense over and over again.

Don't get me wrong, I think the unsafe keyword is a good idea. But the magical powers Rust fans attribute to it, and the "SAFETY" comments they put next to it, tell me they are a bit delusional.


> logic bugs outside "unsafe" can cause bugs inside "unsafe".

This is the wrong understanding of Rust's unsafety encapsulation. For example, no logic bug outside of `unsafe` can cause undefined behavior in Rust std's `Vec` abstraction, which is built using unsafe code underneath.

The claim that "because unsafe is used, the entire Rust program is also unsafe" is the real major myth. It's as absurd as saying "because the Java runtime is built using unsafe code underneath, Java is also unsafe".


Two questions:

Why was the fix to this unsafe memory safety bug [0] made only through changes to code outside of unsafe Rust blocks? [1][2]

Why does the Rustonomicon[3] say the following?

> This code is 100% Safe Rust but it is also completely unsound. Changing the capacity violates the invariants of Vec (that cap reflects the allocated space in the Vec). This is not something the rest of Vec can guard against. It has to trust the capacity field because there's no way to verify it.

> Because it relies on invariants of a struct field, this unsafe code does more than pollute a whole function: it pollutes a whole module. Generally, the only bullet-proof way to limit the scope of unsafe code is at the module boundary with privacy.

[0] https://social.kernel.org/notice/B1JLrtkxEBazCPQHDM

[1] https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux...

[2] https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux...

[3] https://doc.rust-lang.org/nomicon/working-with-unsafe.html
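A condensed sketch of the Rustonomicon's point (a toy type, not the real `Vec`): the method that breaks safety contains no `unsafe` at all, which is why the whole module, not just the block, has to be audited:

```rust
struct Buf {
    data: Vec<u8>,
    idx: usize, // invariant the unsafe block relies on: idx < data.len()
}

impl Buf {
    fn new(data: Vec<u8>) -> Buf {
        Buf { data, idx: 0 }
    }

    // 100% safe Rust, yet it can break the invariant above:
    // no bounds check means a plain logic bug poisons `current`.
    fn skip(&mut self, n: usize) {
        self.idx += n;
    }

    fn current(&self) -> u8 {
        // SAFETY (claimed): idx is in bounds -- only true if every
        // safe method in this module maintains the invariant.
        unsafe { *self.data.as_ptr().add(self.idx) }
    }
}
```

Calling `skip` past the end and then `current` would be undefined behavior, even though the caller never wrote `unsafe`.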


If a logic bug happens inside Vec but outside any unsafe blocks, and that logic bug violates the invariants and requirements of the unsafe blocks, that can cause memory unsafety.

That would be unsoundness of `Vec` itself; but if the abstraction of `Vec` is sound, there is no way to use `Vec` outside of `unsafe` that can cause memory unsafety.

The point comes back to abstraction and responsibility: in Rust, you can build an abstraction that is sound and guarantee memory safety from there. There can be soundness bugs inside your abstraction, but it is a massively smaller surface for the auditing and expertise required to write such an abstraction. Also, when a soundness bug appears, the responsibility lies solely with the abstraction writer, not the user.

Whereas in C, without such safe abstractions, the surface where you must do things right to avoid memory safety issues is your entire codebase, and the responsibility of "holding the knife correctly" is on the user.
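The abstraction boundary described here can be sketched as follows (hypothetical module): the field the `unsafe` block depends on is private, so no safe code outside the module can invalidate it:

```rust
mod cursor {
    pub struct Cursor {
        data: Vec<u8>,
        idx: usize, // private: only this module can touch it
    }

    impl Cursor {
        pub fn new(data: Vec<u8>) -> Cursor {
            Cursor { data, idx: 0 }
        }

        // Every mutation preserves the invariant idx <= data.len().
        pub fn advance(&mut self) {
            if self.idx < self.data.len() {
                self.idx += 1;
            }
        }

        pub fn peek(&self) -> Option<u8> {
            if self.idx < self.data.len() {
                // SAFETY: idx < data.len() was just checked, and privacy
                // guarantees no outside code could have corrupted idx.
                Some(unsafe { *self.data.as_ptr().add(self.idx) })
            } else {
                None
            }
        }
    }
}
```

Only the code inside `mod cursor` needs auditing for memory safety; any amount of safe code outside it cannot break the invariant.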


> There can be soundness bugs inside your abstraction, but it is a massively smaller surface for the auditing and expertise required to write such an abstraction.

If all of Vec has to be "audited", or checked and reviewed, including all the code that is not inside unsafe blocks, how would the surface be any smaller?

> The point comes back to abstraction and responsibility: in Rust, you can build an abstraction that is sound and guarantee memory safety from there.

Isn't it normal for programming languages to support building abstractions that help with not only memory safety but general correctness? C is a bit bare-bones, but lots of other programming languages, like C#, C++, Haskell and Scala, support building abstractions that are hard to misuse.


All of `Vec` is much smaller than all the places using `Vec`. IIRC, Vec is around 3k LoC. And even in low-level code like Oxide's and Android's core components, less than 4% of the code has been observed to be inside or related to unsafe; that's a massive improvement.

Yes, Rust is not new in terms of allowing hard-to-misuse abstractions; it just allows abstracting over memory safety without relying on GC or runtime checks. Rust achieves this by adding the capability to enforce shared-XOR-mutable with its borrow checker, which C++ couldn't.


[flagged]


Wow, so now no more discussion, just an accusation of being a bot? I'm flattered.

Also, you are doing tech; be specific: what is much shallower or hollow?


> in C only specific language features are unsafe and not all code

Using Rust's definition of unsafe, which is roughly "can cause undefined behaviour", it seems to me that isolating use of these features isn't possible. What is C without:

* Dereferencing pointers
* Array access
* Incrementing signed integers

You can do all of the above without invoking UB, but you can't separate the features in C that can cause UB from the ones that can't.


The first misunderstanding is treating safety as a property the language either has or doesn't. Rust marketing convinced many people of this, but C can be safe or unsafe. Fil-C shows that even all of C can be memory safe (at a substantial cost in performance). But even just with GCC and Clang, array accesses and signed integer overflow can be made safe with a compiler flag; a violation then traps, which is similar to a Rust panic. The cases which cannot be dealt with so easily are pointer arithmetic, unions, free, and concurrency-related issues. And it is very well possible to isolate and review all of these. This will not find all bugs, but neither does "unsafe" work perfectly in Rust, as this bug (and many others) nicely illustrates.

I guess that means you're using the colloquial meaning of the word safety/unsafe rather than the rust definition. It's worth being explicit about that (or choosing a different word) in these discussions to prevent confusion.

For Rust safety (meaning no UB) most definitely is a property of the language. If a module does not contain unsafe and the modules it uses that do contain unsafe are implemented soundly then there is no UB.

In C UB is a part of the language.


No, in the comment you replied to, I am using safe/unsafe in the Rust sense. E.g. signed overflow changed to trap avoids the UB.

Also, "if ... are implemented soundly" sounds harmless but simply means there is no safety guarantee (in contrast to Fil-C or formally verified C, for example). It relies on best-effort manual review. (Even without any use of "unsafe", there are various issues in Rust's type system which would still allow UB, but I agree this is not that critical.)

In C, UB is part of the ISO language specification, but not necessarily part of a specific implementation of ISO C. If you argue that the ISO spec matters so much, I like to point out that Rust does not even have one, so from this perspective it is completely UB.


> Also "If .. are implemented soundly" sounds harmless but simply means there is no safety guarantee (in contrast to Fil-C or formally verified C, for example).

Don't those also depend on implementations being sound? Fil-C has its own unsafe implementation, formal verification tools have their trusted kernels, it's turtles all the way down.


The implementation itself being sound, yes. And yes, in Rust, if you only use sound libraries (in combination), never use unsafe yourself, and ignore the known defects in Rust, then it is also guaranteed to be safe. But in systems programming, you usually have to use "unsafe" in your own code, and then there is no guarantee and you must make sure the code has no UB yourself, just like in C.

Sure. My point is mostly that the problem is less that your safety guarantees rely on correct implementations (since that applies to all "safe" systems as long as we're running on unsafe hardware) and more that the trusted codebase tends to be quite a bit larger for (current?) Rust compared to Fil-C/formal verification tools. There are efforts to improve the situation there, but it'll take time.

Does make me wonder how easy porting Fil-C to Rust/Zig/etc. would be. IIRC most of the work is done via LLVM pass(es?) and changes to frontends were relatively minor, so it might be an interesting alternative to MIRI/ASan.


The safest computer is a rock.

The point is that when you start using rust in the real world to get real work done a lot of the promises that were made about safety have to be dropped because you need to boot your computer before the heat death of the universe. The result will be that we end up with something about as safe as C is currently - because CPUs are fundamentally unsafe and we need them to work somehow.

Rust is from the universe in which micro kernels weren't a dead end and we could avoid all the drivers being written in C.


> when you start using rust in the real world to get real work done a lot of the promises that were made about safety have to be dropped because you need to boot your computer before the heat death of the universe.

Safe Rust isn't slow like Python, Go or Fil-C. It gets compiled to normal native code just like C and C++. It generally runs just as fast as C - at least, almost all the time. Arrays have runtime bounds checks. And ... that's about it.
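The bounds-check point in concrete terms (hypothetical function): plain indexing is checked at runtime and panics on violation, while `get` is the non-panicking form:

```rust
// Demonstrates both forms of slice access: `v[i]` is bounds-checked
// (it would panic if i were out of range), and `get` returns Option.
fn peek_twice(v: &[i32]) -> (i32, Option<&i32>, Option<&i32>) {
    (v[0], v.get(1), v.get(100))
}
```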

> The result will be that we end up with something about as safe as C is currently - because CPUs are fundamentally unsafe and we need them to work somehow.

Nah. Most Rust is safe Rust. Even in the kernel, not much code actually interacts directly with raw hardware. The argument in favour of moving to Rust isn't that it will remove 100% of memory safety bugs, just that it'll hopefully remove most of them.


A million memory-error bugs is used as a valid argument to stop using C and switch to Rust, where such bugs are impossible.

That it is undefined behavior does not mean it is exploitable. But I also have not seen an argument why a data race should not be exploitable in this context.

I am very wary of going that route. If there is undefined behavior, the compiler is in principle allowed to do anything and everything, unless it promises something beyond what the language promises.

One could then argue that a specific version of a specific compiler with specific settings in a specific case, after investigation of the generated assembly or inspection of what guarantees the compiler provides beyond the language, is not exploitable. But other settings of the compiler and other versions of the compiler and other compilers may have different guarantees and generation of assembly.

The Linux kernel, as I understand it, uses a GCC flag for its C code that disables strict aliasing. That basically means strict aliasing violations are no longer undefined behavior, as long as that flag is used. Basically a dialect of C.
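A sketch of the dialect difference (the flag in question is GCC's `-fno-strict-aliasing`): this classic type pun is undefined behavior under ISO C's aliasing rules, but well defined in the kernel's dialect. It additionally assumes 32-bit IEEE 754 floats, as on all common platforms:

```c
#include <stdint.h>

/* Reads a float's bit pattern through a uint32_t pointer. Under ISO C
   this violates strict aliasing; with -fno-strict-aliasing (as the
   Linux kernel builds with) the access must behave naively. */
static uint32_t float_bits(float f)
{
    return *(uint32_t *)&f;
}

/* The portable, always-defined alternative is memcpy into a uint32_t. */
```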


It is very common for a C implementation to define undefined behavior, and also common for C programs to rely on this. For this reason, I think it is very misleading to say that undefined behavior is automatically exploitable or even a bug.

And the kernel is infamous for being picky about the C compilers it can be successfully built with - Clang couldn't build a working kernel for a long time, as the kernel mostly relied on subtle GCCisms.

I think defining terminology here might help.

An attempt:

Language-UB (L-UB): UB according to the guarantees of the language.

Project-compiler-UB (PC-UB): The project picks compilers and compiler settings to create a stronger set of guarantees that turns some language-UB into defined behavior. Examples include turning off the strict aliasing requirement in the compilers used, or a compiler defining some language-UB as defined behavior by default.

I do not know if such terms might catch on, though. Do they seem reasonable to you?


It is entertaining to observe how - after the bullshit and propaganda phase - Rust now slowly enters reality, and the excuses for problems that did not magically disappear are now exactly the same as what we saw before from C programmers, and which Rust proponents would have dismissed as completely unacceptable in the past ("this CVE is not exploitable", "all programmers make mistakes", "unwrap should never be used in production", "this really is an example of how fantastic Rust is").

You have a wild amount of confirmation bias going on here, though.

Of course, this bug was in an `unsafe` block, which is exactly what you would expect given Rust's promises.

The promise of Rust was never that it is magical. The promise is that it is significantly easier to manage these types of problems.


There were certainly a lot of people running around claiming that "Rust eliminates the whole class of memory safety bugs." Of course, not everybody made such claims, but some did.

Whether it is "significantly easier" to manage these types of problems and at what cost remains to be seen.

I do not understand your comment about "confirmation bias", as I did not make a quantitative prediction that could be biased.


> There were certainly a lot of people running around claiming that "Rust eliminates the whole class of memory safety bugs."

Safe Rust does do this. Dropping into unsafe Rust is the prerogative of the programmer who wants to take on the burden of preventing bugs themselves. Part of the technique of Rust programming is minimising the unsafe part so memory errors are eliminated as much as possible.

If the kernel could be written in 100% safe Rust, then any memory error would be a compiler bug.


Yes, but this is the marketing bullshit I am calling out. "Safe Rust" != "Rust", and it is not "Safe Rust" which is competing with C; it is "Rust".

> it is not "Safe Rust" which is competing with C it is "Rust".

It is intended that Safe Rust be the main competitor to C. You are not meant to write your whole program in unsafe Rust using raw pointers - that would indicate a significant failure of Rust’s expressive power.

It's true that many Rust programs involve some element of unsafe Rust, but that unsafety is meant to be contained and abstracted, not pervasive throughout the program. That's a significant difference from how C's unsafety works.


But there are more than 2000 uses of "unsafe" even in the tiny amount of Rust in the Linux kernel. And you would need to compare it to C code where an equal amount of effort was put into developing safe abstractions. So essentially this is part of the fallacy Rust marketing exploits: comparing an idealized "Safe Rust" scenario to real-world, resource-constrained usage of C by overworked maintainers.

The C code comparison exists because people have written DRM drivers in Rust that were of exceedingly high quality and safety compared to the C equivalents.

This is just so obtuse. Be serious.

Even if you somehow manage to ignore the very obvious theoretical argument for why it works, the amount of quantitative evidence at this point is staggering: Rust, unsafe warts and all, substantially improves the ability of any competent team to deliver working software. By a huge margin.

This is the programming equivalent of vaccine denialism.


There is a lot of science showing vaccines work. For Rust, science showing that it is better is still lacking. And no, Google's blog posts are not science.

So kernel devs claiming Rust works isn't good enough? Cloudflare? Mozilla? You're raising the bar to a place where no software will ever be good enough for you.

Safe Rust absolutely eliminates entire categories of bugs

> Of course, this bug was in an `unsafe` block, which is exactly what you would expect given Rust's promises.

The fix was outside of any Rust unsafe blocks. Which confused a lot of Rust developers on Reddit and elsewhere. Since fans of Rust have often repeated that only unsafe blocks have to be checked. Despite the Rustonomicon clearly spelling out that much more than the unsafe blocks might need to be checked in order to avoid UB.


The unsafe code relied on an assumption that was not true; the chosen fix was to make that assumption be true. Makes perfect sense to me.

Rust fanboys on Reddit are not contributing to the Linux kernel. What matters here is that Rust helps serious people deliver great code.

Is it any more or less amusing, or perhaps tedious, watching the first Rust Linux kernel CVE be pounced on as evidence that "problems .. did not magically disappear"?

Does anyone involved in any of this work believe that a CVE in an unsafe block could not happen?


Was it? It seems more like a fantastic demonstration of how the same type of errors can also occur in Rust code.

In C this kind of issue is so common it wouldn't raise to the status of "CVE". People would just shrug and say "git gud".

This is certainly not true. But arguments about what is "common" are also completely misleading as long as there are many orders of magnitude more C code than Rust code.

Maybe you haven't been paying much attention to this space. Google found empirically that error density in _unsafe_ Rust is still much lower than in C/C++. And only a small portion of the code is unsafe. So per LoC, Rust has orders of magnitude fewer errors than C/C++ in real-world Android development. And these are not small sample sizes. By now, more code is being written in Rust than in C++ at Google:

https://security.googleblog.com/2025/11/rust-in-android-move...

But don't take my word for it, you can hear about the benefits of Rust directly from GKH:

www.youtube.com/watch?v=HX0GH-YJbGw

There really isn't a good-faith argument here. You can make mistakes in Rust? No one denies that. There is more C code, so of course there are more mistakes in C code than in Rust code? Complete red herring.


Hey, it was my point that the number of CVEs is a red herring.

And no, I do not care about or even believe what Google says. There are too many confounding factors.


I would expect that the largest factor is cultural, and of course it's possible to inculcate safety culture in a team working on a C or C++ codebase, but it seems to me that we've shown it's actually easier to import the culture with a language which supports it.

Essentially Weak Sapir–Whorf but for programming languages rather than natural languages. Which is such a common idea that it's the subject of a Turing Award speech. Because the code you read and write in Rust usually has these desirable safety properties, that's how you tend to end up thinking about the problems expressed in that code. You could think this way in C, or C++ but the provided tooling and standard libraries don't support that way of using them so well.


I also think that the largest factor is cultural. But my conclusion from this is not that one should import it with a new language while pretending that achieving similar results is not possible otherwise. This just gives an excuse for not caring about the existing code anymore, which I suspect is one reason some parts of the industry like Rust ("nobody can expect us to care about the legacy code; nothing can be done until it is rewritten in Rust").

Of course highly correct C code is possible [1]. But Ada makes it easier. Rust makes it easier. You can write anything in any language; that is _not_ the argument. How could you plausibly advocate for a culture that invests a lot of effort [1] into making code correct, and not also advocate for tools and languages that make it easier to check important aspects of correctness? A craftsman is responsible for his tools. Using subpar tools with the argument that with sufficient knowledge, skill and an appropriate culture you can overcome their shortcomings is absurd.

Rust is also often not the right tool. I looked at it fairly deeply some years ago for my team to transition away from Python/C hybrids, but settled on a fully garbage-collected language in the end. That was definitely the right choice for us.

[1] e.g. MISRA C, or https://en.wikipedia.org/wiki/The_Power_of_10:_Rules_for_Dev...


The thing is: there always was a strong theoretical case that Rust should improve software quality (and not just because of the lifetime system). The only reasonable counterpoint was that this is theory, and large-scale experience is missing. Maybe in high-quality code bases the mental overhead of using Rust would outweigh the theoretical guarantees, and the types of mistakes prevented are already caught by C/C++ tooling anyway?

The (in recent years) rapid adoption of Rust in industry clearly shows that this is not the case.


[flagged]


What about qmail? No one runs qmail and no one is writing new C with that kind of insanely hyperconservative style using only world-class security experts.

And it still wasn't enough. qmail has seen RCEs [0, 1] because DJB didn't consider integer and buffer overflows in-scope for the application.

[0] https://www.guninski.com/where_do_you_want_billg_to_go_today...

[1] https://lwn.net/Articles/820969/


> Why don't they use qmail as an example?

Perhaps because qmail is an anomaly, not Android? To remain relatively bug-free, a sizeable C project seems to require a small team and iron discipline. Unix MTAs are actually pretty good examples. With qmail, for a long time, it was just DJB. Postfix has also fared well, and (AFAIK) has a very small team. Both have been architected to religiously check error conditions and avoid the standard library for structure manipulation.

Android is probably more representative of large C (or C++) projects one may encounter in the wild.


What does bias have to do with empirical evidence? Disprove that instead of driveling about non-tech stuff.

[flagged]


So you can't, and if a "dumbass" like me can understand the importance of empirical evidence but you can't, maybe read up on rational thinking instead of lashing out emotionally.

Github says 0.3% of the kernel code is Rust. But even normalized to lines of code, I think counting CVEs would not measure anything meaningful.

> Github says 0.3% of the kernel code is Rust. But even normalized to lines of code, I think counting CVEs would not measure anything meaningful.

Your sense seems more than a little unrigorous. 1/160 = 0.00625. So, several orders of magnitude fewer CVEs per line of code.

And remember, this is also the first Rust kernel CVE, and any fair metric would count both any new C kernel code CVEs and those which have already accrued against the same C code, if comparing raw lines of code.

But taking a one week snapshot and saying Rust doesn't compare favorably to C, when Rust CVEs are 1/160, and C CVEs are 159/160 is mostly nuts.


I'm more interested in the % of Rust code that is marked unsafe. If you can write a kernel with only 1% unsafe, that sounds pretty great. If the nature of dealing with hardware (AFAIK most of a kernel is device drivers) means something higher, maybe 10%, then maybe safety becomes difficult, especially because unsafety propagates in an unclear way, since safe code becomes unsafe to some degree when it calls into it.

I'm also curious about the percentage of implicitly unsafe code in C, given there are still compilers and linters checking something, just not at the level of lifetimes etc. as in Rust. But I guess this isn't easy to calculate.

I like Rust for low-level projects and see no need to pick C over it personally, but I think it's fair to question the real impact of language safety in a realm that largely has to be unsafe. There's no world where Rust is more unsafe than C, though, so it's all academic. I just wonder if there's been any analysis of this in close-to-metal applications like a kernel.


> I'm more interested in the % of rust code that is marked unsafe.

I think you should be less interested in the % unsafe than in what the unsafe is used to do, that is, its likelihood to cause UB, etc. If it's unsafe to interface with C code, or unsafe to do a completely safe transmute, I'm not sure one should care.


> There's no world where Rust is more unsafe than C though so it's all academic

I think Rust is more unsafe than C due to supply chain issues in the Rust ecosystem, which have not fully materialized yet. Rust certainly has an advantage in terms of memory safety, but I do not believe it is nearly as big as people like to believe compared to a C project that actually cares about memory safety and applies modern tooling to address it. There seems to be a lot of confirmation bias. I also believe Rust is much safer for average coders doing average projects, by being safer by default.


> I think Rust is more unsafe than C due to supply chain issues in the Rust ecosystem

This is such an incredibly cheap shot. First, the supply chain issues referenced have nothing to do with Rust, the language, itself. Second, Rust's build system, cargo, may have these issues, but cargo's web fetch features simply aren't used by the Linux kernel.

So -- we can have a debate about which is a better world to live in, one with or without cargo, but it really has nothing to do with Linux kernel security.


> Your sense seems more than a little unrigorous. 1/160 = 0.00625. So, several orders of magnitude fewer CVEs per line of code.

This is incorrect. Chalk it up to the flu and fever! Sorry.

0.00625 is 0.625%, or about twice Rust's share of the code; however, as stated above, these are just the metrics from one patch cycle.


It wasn't me trying to conclude anything from insufficient data.

To be actually fair, you should probably only look at CVEs concerning new-ish code.

It would probably have to be normalized to something slightly different, as the lines of code necessary for a feature vary by language. But even with the sad state of CVE quality, I would certainly prefer a language that deflects CVEs for a kernel that is both in places with no updates and in places with forced updates for relevant or irrelevant CVEs.
