carlmr's comments

>When you write Rust code without lifetimes and trait bounds and nested types, the language looks like Ruby lite.

And once you learn a few idioms this is mostly the default.


>You get memory safety. That's about it for Security

Not true: you get one of the strongest and most expressive type systems out there.

One example is the mutability guarantees, which are stronger than in any other mainstream language. In C++, const says "I'm not going to modify this." In Rust, &mut says "nobody else is going to modify (or even read) this while I hold the reference." That is far more powerful, because you can guarantee that nothing else touches the values you borrow. Aliased mutation is a very common problem in efficient C++ code, and const can't fix it.
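To make that concrete, a minimal sketch (names made up):

    fn main() {
        let mut v = vec![1, 2, 3];
        let r = &mut v;   // exclusive borrow: nothing else may touch `v` now
        // let peek = &v; // error[E0502]: cannot borrow `v` as immutable
        //                // because it is also borrowed as mutable
        r.push(4);        // `r` is the only way to reach `v` here
        println!("{:?}", v); // fine again: the &mut borrow ended at its last use
    }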

Sum types (enums that carry data) enable designing with types in a way otherwise mostly found in ML-family languages. Derive macros make them easy to use, too, since you can skip the boilerplate.
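A small, made-up example of both points:

    // Each variant carries its own data, and the compiler forces every
    // `match` to handle every variant.
    #[derive(Debug, Clone, PartialEq)] // derive macros: the boilerplate is generated
    enum Payment {
        Cash { amount_cents: u64 },
        Card { last_four: [u8; 4] },
        Voucher(String),
    }

    fn describe(p: &Payment) -> String {
        match p {
            Payment::Cash { amount_cents } => format!("{} cents in cash", amount_cents),
            Payment::Card { last_four } => format!("card ending in {:?}", last_four),
            Payment::Voucher(code) => format!("voucher {}", code),
            // leaving a variant out here is a compile error, not a runtime surprise
        }
    }

    fn main() {
        println!("{}", describe(&Payment::Voucher("XYZ-123".into())));
    }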

Integers of different sizes need explicit conversion; the implicit narrowing and sign changes that are another common source of bugs in C and C++ simply don't happen.
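A minimal sketch of what the compiler insists on (values made up):

    fn main() {
        let small: u16 = 40_000;
        let big: u32 = 1_000_000;

        // let sum = small + big;         // error[E0308]: mismatched types;
        //                                // no implicit widening as in C
        let sum = u32::from(small) + big; // lossless widening, spelled out
        let back = u16::try_from(sum).unwrap_or(u16::MAX); // narrowing is fallible
        println!("{sum} {back}");
    }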

Macros operate on parsed tokens and expand as part of the AST, which is a lot safer than C-style text substitution.
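The classic illustration, as a sketch:

    // The macro argument stays a single expression after expansion.
    macro_rules! square {
        ($x:expr) => { $x * $x };
    }

    fn main() {
        // A C preprocessor macro `#define SQUARE(x) x * x` would expand
        // SQUARE(1 + 2) to 1 + 2 * 1 + 2 == 5; here it's (1 + 2) * (1 + 2).
        assert_eq!(square!(1 + 2), 9);
    }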

Further, the borrow checker, together with the Send and Sync traits, enables compile-time checking of concurrency: code with data races simply doesn't compile.
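A minimal, made-up sketch of what that buys you (the Rc-instead-of-Arc or lock-free-mutation variants of this are rejected at compile time):

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        let counter = Arc::new(Mutex::new(0u64));

        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || {
                    // Mutating a plain shared u64 here, or using Rc instead of
                    // Arc, fails to compile (missing Send/Sync), not at runtime.
                    *counter.lock().unwrap() += 1;
                })
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }
        println!("{}", *counter.lock().unwrap()); // 4
    }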

The list goes on, but nobody who has tried Rust properly can say that it only helps prevent memory-safety issues. If that's your view, you just showed that you didn't really try.


>like an animal with no higher order thinking.

In Germany I see far fewer abandoned shopping carts than in America.


Bin ifTrue


It was bad before AI. Not saying AI vibe code is great, just that poor engineering culture existed before AI.


The only important question.


Yeah, I was going to say: if anybody with distributed-systems knowledge had actually thought about this code, it wouldn't have happened.

If you had added model checking you could have prevented it, though, because people who know how to write a model-checking spec will see the error right away.


While true, this is one reason I always introduce automated code-formatting early on. It makes git blame a bit more useful.


Automated code formatting, in my experience, never decreases diff sizes, and frequently increases them. Some of those diff-size increases help git blame, some of them hinder it. Around the boundary between two possible formattings, the diffs are terrible.

Code formatters do tend to force some patterns that make line-oriented git blame more useful, such as splitting a function call across many lines with a single argument on each line; but that's not really about the formatter, just about the convention it picks. (And automatic formatters pick that convention because they have no taste, and taste is exactly what's needed to make other styles consistently good. If you have taste, you can do better than that style, sometimes far better.)
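For concreteness, a made-up sketch of that one-argument-per-line style and why it plays well with git blame:

    // Before formatting, the call might be one long line:
    //   let report = build_report(title, author, start, end, appendix);
    fn build_report(title: &str, author: &str, start: &str, end: &str, appendix: bool) -> String {
        format!("{title} by {author}, {start}..{end}, appendix: {appendix}")
    }

    fn main() {
        let report = build_report(
            "Q3 summary", // one argument per line means a later change to a
            "carlmr",     // single argument touches a single line, which is
            "2024-07-01", // what keeps git blame readable
            "2024-09-30",
            true,
        );
        println!("{report}");
    }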


Depends on the language and the available formatters, really. I find the black/ruff formatting style in Python very consistent, which helps with git blame. For C++ there's no good default formatter for "small diffs", since they all, as you say, add seemingly random line breaks depending on the position of parameters and such.

With taste you can do better, but maintaining that kind of personal style is impossible in anything except single-developer projects.


Halting is sometimes preferable to thrashing around and running in circles.

I feel like if LLMs "knew" when they're out of their depth, they could be much more useful. The question is whether knowing when to stop can be meaningfully learned from examples with RL. From all we've seen, the hallucination problem and this stopping problem boil down to the same thing: you can teach the model to say "I don't know", but if that phrase is part of the training data it may just emit "I don't know" for random questions, because it's a likely response in the space of possible responses, rather than saying "I don't know" precisely when it doesn't know.

SocratesAI is still unsolved, and LLMs are probably not the path to knowing that you know nothing.


> if LLMs "knew" when they're out of their depth, they could be much more useful.

I used to think this, but I'm no longer sure.

Large-scale tasks just grind to a halt with more modern LLMs because of this perception of impassable complexity.

And it's not that these tasks need extensive planning; the LLM knows what needs to be done (it'll even tell you!). It's just more work than fits within a "session" (an arbitrary boundary), so it would rather refuse than get started.

So you're now looking at TODOs, hierarchical plans, and all this unnecessary pre-work, even when the task scales horizontally very well if the model just jumps into it.


At this point it's AI discussing AI with AI. AI is really good at this; it's much easier to keep that discourse going than to solve deep technical problems with it.

