Hacker News | anp's comments

> Others think someone from the Rust (programming language, not video game) development community was responsible due to how critical René has been of that project, but those claims are entirely unsubstantiated.

I find a lot of these points persuasive (and I’m a big Rust fan, so I haven’t spent much time with Zig myself because of the memory safety point), but I’m a little skeptical about the bug report analysis. I could buy the argument that Zig is more likely to lead to crashy code, but the numbers presented don’t account for the possibility that the relative proportions of bug “flavors” might shift as a project matures. I’d be more persuaded on the reliability point if it were comparing the “crash density” of bug reports at comparable points in those projects’ lifetimes.

For example, it would be interesting to compare how many Rust bugs mentioned crashes back when there were only 13k bugs reported, and the same for the JS VM comparison. Don’t get me wrong, as a Rust zealot I have my biases and still expect a memory safe implementation to be less crashy, but I’d be much happier concluding that based on stronger data and analysis.


I had the same thought. But one comparison was actually very useful for "bug densities": deno vs bun. They have comparable codebase sizes as well as comparable ages (7y vs 4y). I'd like to see the same stats for tigerbeetle, which is very carefully designed: if segfaults were relatively high on that one too, well...


> I'd like to see the same stats for tigerbeetle

Actual SIGSEGVs are pretty rare, even during development. There was a pretty interesting one that affected our fuzzing infra a little bit ago: https://ziggit.dev/t/stack-probe-puzzle/10291

Almost all of the time we hit either asserts or panics or other things which trigger core dumps intentionally!



> Don’t get me wrong, as a Rust zealot I have my biases and still expect a memory safe implementation to be less crashy

That is a bias. You want all your "memory safety" to be guaranteed at compile time. Zig is willing to move some of that "memory safety" to run time.

Those choices involve tradeoffs. Runtime checks make Zig programs more "crashy", but the language is much smaller, the compiler is vastly faster, "debug" code isn't glacially slow, and programs can be compiled even if they might have an error.

My personal take is that if I need more abstraction than Zig, I need something with managed memory--not Rust or C++. But, that is also a bias.


I understand that I have a bias, which is why I was disclosing it. I think it strengthens my question since naively I'd expect a self-professed zealot to buy into the narrative in the blog post without questioning the data.


> My personal take is that if I need more abstraction than Zig, I need something with managed memory--not Rust or C++

You may potentially like D. Its tooling leaves much to be desired but the language itself is pretty interesting.


Might be worth noting that npm didn’t have lock files for quite a long time, which is the era during which I formed my mental model of npm hell. The popularity of yarn (again importing bundler/cargo-isms) seems like maybe the main reason npm isn’t as bad as it used to be.


npm has evolved, slowly, but evolved, thanks to yarn and pnpm.

It even has some (somewhat rudimentary, I feel) support for workspaces and isolated installs (what pnpm does).


Lock files are only needed because of version ranging.

Maven worked fine without semantic versioning and lock files.

Edit: Changed "semantic versioning" to "version ranging"


> Maven worked fine without semantic versioning and lock files.

No, it actually has the exact same problem. You add a dependency, and that dependency specifies a sub-dependency against, say, version `[1.0,)`. Now you install your dependencies on a new machine and nothing works. Why? Because the sub-dependency released version 2.0 that's incompatible with the dependency you're directly referencing. Nobody likes helping to onboard the new guy when he goes to install dependencies on his laptop and stuff just doesn't work because the versions of sub-dependencies are silently different. Lock files completely avoid this.
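
For anyone who hasn't seen Maven's range syntax, a minimal sketch (the coordinates here are made up):

    <!-- Ranged: resolution picks whatever the newest matching release is *today* -->
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>some-lib</artifactId>
      <version>[1.0,)</version>
    </dependency>

    <!-- Pinned: the pom itself acts as the lock file -->
    <dependency>
      <groupId>com.example</groupId>
      <artifactId>some-lib</artifactId>
      <version>1.4.2</version>
    </dependency>

Two machines resolving the ranged form at different times can legitimately end up with different versions, which is exactly the "works on my machine" failure described above.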


It is possible to set version ranges, but it's rare to see them in the real world. Everyone uses pinned dependencies.

Version ranges are a really bad idea, as we can see in NPM.


My apologies, I should have said "version ranging" instead of "semantic versioning".

Before version ranging, Maven dependency resolution was deterministic.


Always using exact versions avoids this (your pom.xml essentially is the lock file), but it effectively means you can never upgrade anything unless every dependency and transitive dependency also supports the new version. That could mean upgrading dozens of things for a critical patch. And it's surely one of the reasons log4j was so painful to get past.


I’ve been out of the Java ecosystem for a while, so I wasn’t involved in patching anything for log4j, but I don’t see why it would be difficult for the majority of projects.

Should just be a version bump in one place.

In the general case, Java and Maven don’t support multiple versions of the same library being loaded at once (not without tricks, at least: custom class loaders or shaded deps), so it shouldn’t matter what transitive dependencies depend on.
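
For what it’s worth, the “one place” I have in mind is a dependencyManagement entry in the top-level pom, which wins over whatever version the transitive graph pulls in. A rough sketch (the version number is from memory, check the advisory for the actual patched release):

    <dependencyManagement>
      <dependencies>
        <dependency>
          <groupId>org.apache.logging.log4j</groupId>
          <artifactId>log4j-core</artifactId>
          <version>2.17.1</version>
        </dependency>
      </dependencies>
    </dependencyManagement>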


Right, that's the problem. Let's say I rely on 1.0.1 and want to upgrade to 1.0.2. Everything that also relies on 1.0.1 needs to be upgraded too.

It effectively means I can only have versions of dependencies that rely on the exact version I'm updating to. Have a dependency still on 1.0.1 with no upgrade available? You're stuck.

Even worse, let's say you depend on A, which depends on B. If B has an update to 1.0.2 but A doesn't support the new version of B, you're equally stuck.


Maven also has some terrible design where it will allow incompatible transitive dependencies to be used, one overwriting the other based on “nearest wins” rather than returning an error.


There are a small number of culprits, from logging libraries to Guava and Netty, that can cause these issues. For these you can use the Shade plugin: https://maven.apache.org/plugins/maven-shade-plugin/
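
For example, a typical use is relocating Guava so the copy you bundle can't collide with whatever version arrives transitively. A rough sketch (the plugin version is just illustrative):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <version>3.4.1</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals><goal>shade</goal></goals>
          <configuration>
            <relocations>
              <relocation>
                <pattern>com.google.common</pattern>
                <shadedPattern>myapp.shaded.com.google.common</shadedPattern>
              </relocation>
            </relocations>
          </configuration>
        </execution>
      </executions>
    </plugin>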


If in some supply chain attack someone switches out a version's code from under your seating apparatus, then good luck without lock files. I for one prefer being notified when the checksums of things suddenly change.


Maven releases are immutable


Sounds like the Common Lisp approach, where there are editions (or whatever they call them) that are sets of dependencies at specific versions.

But the problem with that is when you need a version of a library that is not in that edition. For example, when a backdoor or CVE gets discovered that you have to fix asap, you might not want to wait for the next release. Furthermore, Maven is Java ecosystem stuff, where things tend to move quite slowly (enterprisey), and it comes with its own set of issues.


I was quite tickled to see this, I don’t remember why but I recently started rewatching the show. Perfect timing!


I tend to agree, but there are a few scenarios where I really want it to work. Debuggers in particular seem hard to get right for the current agents. I’ve not been able to get the various MCP servers I’ve tried to work, and I’ve struck out using the debug adapter protocol from agent-authored Python. The best results I’ve gotten are from prompting it to run the debugger under screen, but it takes many tool calls to iterate IME. I’m curious to see how gemini cli works for that use case with this feature.


I would love to use gdb through an agent instead of directly. I spend so much time looking up commands and I sometimes skip things because I get impatient stepping over the next thing


Not GP, but 2C-B and psilocybin were never very visual for me compared with LSD in my tripping days. I have aphantasia and the only chemical to give me full eyes-open visuals was DMT. Mescaline was a very distant second.


This matches my experience, and I was quite surprised to find out other aphantasiacs have their “mind’s eye open” when tripping. For me psychedelics only ever produced a fractal overlay on top of what I was already seeing.

I wondered for a long time why everyone else experienced such strong visuals and eventually decided on my own it must be related to aphantasia. It’s nice to find out I might not have been a total crank with that hypothesis :).


It depends on the psychedelic. Acid will be a fractal overlay, color shifts, and breathing textures. With mushrooms, you will see a face in the tree bark and the clouds, plus the color shifts and breathing textures.


(I work on a project that uses Chromium’s commit queue infrastructure)

I think there’s a big difference between Chromium’s approach and the “not rocket science” rule. AIUI, in Chromium’s model there are still postsubmits that must pass or a change will be reverted by a group monitoring the queue. This is a big difference in practice vs. having a rotation or team that reorders the merge queue and rolls changes up to merge together. In the commit queue model you land faster, at the expense of more likely reverts than in the merge queue model.


Comments so far seem to be focusing on the rejection without considering the stated reasons for rejection. AFAICT Alsup is saying that the problems are procedural (how do payouts happen, does the agreement indemnify Anthropic from civil “double jeopardy”, etc), not that he’s rejecting the negotiated payout. Definitely not a lawyer but it seems to me like the negotiators could address the rejection without changing any dollar numbers.


Yes, exactly. The article is pretty clear that it’s rejected without prejudice and that a few points need to be ironed out before he gives a preliminary approval. I suspect a lot of folks didn’t read much/any of TFA.

I do wonder if all of the kinks will be smoothed out in time. Not a lawyer either, but the timeline to create the longer list is a bit tight, and it generally feels like we could see an actual rejection, or at least a stretched-out process that goes on for a few more months before approval.


Exactly. The judge is doing exactly what he's supposed to do in a civil case -- help forge an agreement between the parties that doesn't come back to bite anyone in the future. The last thing a judge wants is a case getting reopened and relitigated a year from now because there was a "bug" in the settlement.


FWIW this closely matches my experience. I’m pretty late to the AI hype train, but my opinion changed specifically because of using combinations of models & tools that were released right before the cutoff date for the data here. My impression from friends is that it’s taken even longer for many companies to decide they’re OK with these tools being used at all, so I would expect a lot of hysteresis on outputs from that kind of adoption.

That said I’ve had similar misgivings about the METR study and I’m eager for there to be more aggregate study of the productivity outcomes.

