
Most of the code in WebP and AVIF is shared with VP8/AV1, which means if your browser supports contemporary video codecs then it also gets pretty good lossy image codecs for free. JPEG-XL is a separate codebase, so it's far more effort to implement and merely providing better compression might not be worth it absent other considerations. The continued widespread use of JPEG is evidence that many web publishers don't care that much about squeezing out a few bytes.

Also from a security perspective the reference implementation of JPEG-XL isn't great. It's over a hundred kLoC of C++, and given the public support for memory safety by both Google and Mozilla, it would be extremely embarrassing if a security vulnerability in libjxl led to a zero-click zero-day in either Chrome or Firefox.

The timing is probably a sign that Chrome considers the Rust implementation of JPEG-XL to be mature enough (or at least heading in that direction) to start kicking the tires.


> The continued widespread use of JPEG is evidence that many web publishers don't care that much about squeezing out a few bytes.

I agree with the second part (useless hero images at the top of every post demonstrate it), but not necessarily the first. JPEG is supported pretty much everywhere images are, and it’s the de facto default format for pictures. Most people won’t even know what format they’re using, let alone that they could compress it or use another one. In the words of Hank Hill:

> Do I look like I know what a JPEG is? I just want a picture of a god dang hot dog.

https://www.youtube.com/watch?v=EvKTOHVGNbg


I'm not (only) talking about the general population, but major sites. As a quick sanity check, the following sites are serving images with the `image/jpeg` content type:

* CNN (cnn.com): News-related photos on their front page

* Reddit (www.reddit.com): User-provided images uploaded to their internal image hosting

* Amazon (amazon.com): Product categories on the front page (product images are in WebP)

I wouldn't expect to see a lot of WebP on personal homepages or old-style forums, but if bandwidth costs were a meaningful budget line item then I would expect to see ~100% adoption of WebP or AVIF for any image that gets recompressed by a publishing pipeline.


Any site that uses a frontend framework or CMS will probably serve WebP at the very least.

It’s subsidized by cheap CDN rates and dominated by video demand.

The https://github.com/blackjetrock/ghidra-6303 repository your post links to (containing a SLEIGH spec for the HD6303) is no longer available. Did you happen to save a local clone that could be re-uploaded somewhere?


Thank you very much for pointing this out! Fortunately I still have the code locally. I'll try to raise another PR to get 6303 support into Ghidra.



Thank you for finding this! Depili does great work! In another comment I mentioned that I've been working on the Casio CZ-101, which uses the NEC μPD7810 processor. Depili created a processor spec for the μCOM-87 architecture, which I've continued working on in this PR: https://github.com/NationalSecurityAgency/ghidra/pull/7930


There's at least one proprietary platform that supports Git built via a vendor-provided C compiler, but for which no public documentation exists and therefore no LLVM support is possible.

Ctrl+F for "NonStop" in https://lwn.net/Articles/998115/


Shouldn't these platforms work on getting Rust to support it rather than have our tools limited by what they can consume? https://github.com/Rust-GCC/gccrs


A maintainer for that specific platform was more into the line of thinking that Git should bend over backwards to support them because "loss of support could have societal impact [...] Leaving debit or credit card authorizers without a supported git would be, let's say, "bad"."

To me it looks like big corps enjoying the idea of having free service so they can avoid maintaining their own stuff, and trying the "too big to fail" fiddle on open source maintainers, with little effect.


It's additionally ridiculous because git is a code management tool. Maybe they are using it for something much more wild than that (why?) but I assume this is mostly just a complaint that they can't do `git pull` from their wonky architecture that they are building on. They could literally have a network mount and externally manage the git if they still need it.

It's not like older versions of git won't work perfectly fine. Git has great backwards compatibility. And if there is a break, seems like a good opportunity for them to fork and fix the break.

And let's be perfectly clear. These are very often systems built on top of a mountain of open source software. These companies will even have custom patched tools like gcc that they aren't willing to upstream because some manager decided they couldn't just give away the code they paid an engineer to write. I may feel bad for the situation it puts the engineers in, but I feel absolutely no remorse for the companies, because their greed put them in these situations in the first place.


> Leaving debit or credit card authorizers without a supported git would be, let's say, "bad".

Oh no, if only these massive companies that print money could do something as unthinkable as pay for a support contract!


Yes. It benefits them to have ubiquitous tools supported on their system. The vendors should put in the work to make that possible.

I don’t maintain any tools as popular as git or you’d know me by name, but darned if I’m going to put in more than about 2 minutes per year supporting non-Unix.

(This said as someone who was once paid to improve Ansible’s AIX support for an employer. Life’s too short to do that nonsense for free.)


As you're someone very familiar with Ansible, what are your thoughts on it in regards to IBM's imminent complete absorption of RedHat? I can't imagine Ansible, or any other RedHat product, doing well with that.


I wouldn’t say I’m very familiar. I don’t use it extensively anymore, and not at all at work. But in general, I can’t imagine a way in which IBM’s own corporate culture could contribute positively to any FOSS projects if they removed the RedHat veneer. Not saying it’s impossible, just that my imagination is more limited than the idea requires.


IBM has been, and still is, a big contributor to a bunch of Eclipse projects, as their own tools build on those. The people there were really skilled, friendly, and professional. Different divisions and departments can have huge cultural differences and priorities, obviously, but “IBM” doesn’t automatically mean bad for OSS projects.


I'm sure some of RedHat's stuff will end up in the Apache Foundation once IBM realizes it has no interest in it.


There isn't even a Nonstop port of GCC yet. Today, Nonstop is big-endian x86-64, so tacking this onto the existing backend is going to be interesting.


That platform doesn’t support GCC either.


Isn’t that what’s happening? The post says they’re moving forward.


[flagged]


On the other hand: why should the entire open-source world screech to a halt just because some new development is incompatible with the ecosystem of a proprietary niche system developed by a billion-dollar freeloader?

HPE NonStop doesn't need to do anything with Rust, and nobody is forcing them to. They have voluntarily chosen to use an obscure proprietary toolchain instead of contributing to GCC or LLVM like everyone else: they could have gotten Rust support for free, but they believed staying proprietary was more important.

Then they chose to make a third-party project (Git) a crucial part of that ecosystem, without contributing time and effort into maintaining it. It's open source, so this is perfectly fine to do. On the other hand, it also means they don't get a say in how the project is developed, and what direction it will take in the future. But hey, they believed saving a few bucks was more important.

And now it has blown up in their face, and they are trying to control the direction the third-party project is heading by playing the "mission-critical infrastructure" card and claiming that the needs of their handful of users is more important than the millions of non-HPE users.

Right now there are three options available to HPE NonStop users:

1. Fork git. Don't like the direction it is heading? Then just do it yourself. Cheapest option short-term, but it of course requires investing serious developer effort long-term to stay up-to-date, rather than just sending the occasional patch upstream.

2. Port GCC / LLVM. That's usually the direction obscure platforms go. You bite the bullet once, but get to reap the benefits afterwards. From the perspective of the open-source community, if your platform doesn't have GCC support it might as well not exist. If you want to keep freeloading off of it, it's best to stop fighting this part. However, it requires investing developer effort - especially when you want to maintain a proprietary fork due to Business Reasons rather than upstreaming your changes like everyone else.

3. Write your own proprietary snowflake Rust compiler. You get to keep full control, but it'll require a significant developer effort. And you have to "muck around" with Rust, of course.

HPE NonStop and its ecosystem can do whatever it wants, but it doesn't get to make demands just because their myopic short-term business vision suddenly leaves them having to spend effort on maintaining it. This time it is caused by Git adopting Rust, but it will happen again. Next week it'll be something like libxml or openssl or ssh or who-knows-what. Either accept that breakage is inevitable when depending on third-party components, or invest time into staying compatible with the ecosystem.


At this point maybe it's time to let them solve the problem they've created for themselves by insisting on a closed C compiler in 2025.


[flagged]


>> insisting on a closed C compiler in 2025.

> Everything should use one compiler, one run-time and one package manager.

If you think that calling out closed C compilers is somehow an argument for a single toolchain for all things, I doubt there's anything I can do to help educate you about why this isn't the case. If you do understand and are choosing to purposely misinterpret what I said, there are a lot of much stronger arguments you could make to support your point than that.

Even ignoring all of that, there's a much larger point that you've kind of glossed over here by:

> The shitheads who insist on using alternative compilers and platforms don't deserve tools

There's frequently discussion around the expectations between open source project maintainers and users, and in the same way that users are under no obligation to provide compensation for projects they use, projects don't have any obligation to provide support indefinitely for any arbitrary set of circumstances, even if they happen to for a while. Maintainers will sometimes weigh the tradeoff between supporting a minority of users and making a technical change they feel will help them maintain the project better in the long term differently than those users would. It's totally valid to criticize those decisions on technical grounds, but it's worth recognizing that these types of choices are inevitable, and there's nothing specific about C or Rust that will change that in the long run. Even with a single programming language within a single platform, the choice of what features to implement or not implement could make or break whether a tool works for someone's specific use case. At the end of the day, there's a finite amount of work people spend on a given project, and there needs to be a decision about what to spend it on.


For various libs, you provide a way to build without it. If it's not auto-detected, or explicitly disabled via the configure command line, then don't try to use it. Then whatever depends on it just doesn't work. If for some insane reason git integrates XML and uses libxml for some feature, let it build without the feature for someone who doesn't want to provide libxml.

> At the end of the day, there's a finite amount of work people spend on a given project

Integrating Rust shows you have too much time on your hands; the people who are affected by that, not necessarily so.


> Integrating Rust shows you have too much time on your hands; the people who are affected by that, not necessarily so.

As cited elsewhere in this thread, the person making this proposal on the mailing list has been involved in significant contributions to git in the past, so I'd be inclined to trust their judgment about whether it's a worthwhile use of their time in the absence of evidence to the contrary. If you have something that would indicate this proposal was made in bad faith, I'd certainly be interested to see it, but otherwise, I don't see how you can make this claim other than as your own subjective opinion. That's fine, but I can't say I'm shocked that the people actually making the decisions on how to maintain git don't find it convincing.


Weighted by user count for a developer tool like Git, Rust is a more portable language than the combination of C and bash currently in use.


> There's at least one proprietary platform that supports Git built via a vendor-provided C compiler, but for which no public documentation exists and therefore no LLVM support is possible.

That's fine. The only impact is that they won't be able to use the latest and greatest release of Git.

Once those platforms work on their support for Rust they will be able to jump back to the latest and greatest.


It's sad to see people be so nonchalant about potentially killing off smaller platforms like this. As more barriers to entry are added, competition is going to decrease, and the software ecosystem is going to keep getting worse. First you need a lib C, now you need lib C and Rust, ...

But no doubt it's a great way for the big companies funding Rust development to undermine smaller players...


It's kind of funny to see f-ing HPE with 60k employees somehow being labeled as the poor underdog that should be supported by the open-source community for free and can't be expected to take care of software running on their premium hardware for banks etc by themselves.


I think you misread my comment because I didn't say anything like that.

In any case HPE may have 60k employees but they're still working to create a smaller platform.

It actually demonstrates the point I was making. If a company with 60k employees can't keep up then what chance do startups and smaller companies have?


> If a company with 60k employees can't keep up then what chance do startups and smaller companies have?

They build on open source infrastructure like LLVM, which a smaller company will probably be doing anyway.


Sure, but let's not pretend that doesn't kill diversity and entrench a few big players.


The alternative is killing diversity of programming languages, so it's hard to win either way.


HP made nearly $60b last year. They can fund the development of the tools they need for their 50 year old system that apparently powers lots of financial institutions. It's absurd to blame volunteer developers for not wanting to bend over backwards, just to ensure these institutions have the absolute latest git release, which they certainly do not need.


Oh they absolutely can, they just choose not to. To just make some tools work again, there are also many slightly odd workarounds one could choose over porting the Rust compiler.


> It's sad to see people be so nonchalant about potentially killing off smaller platforms like this.

Your comment is needlessly dramatic. The only hypothetical impact this has is that whoever uses these platforms won't have upgrades until they do something about it, and the latest and greatest releases will only run if the companies behind these platforms invest in their maintenance.

This is not a good enough reason to prevent the whole world from benefiting from better tooling. This is not a lowest common denominator thing. Those platforms went out of their way to lag in interoperability, and this is the natural consequence of those decisions.


Maybe they can resurrect the C backend for LLVM and run that through their proprietary compilers?

It's probably not straightforward but the users of NonStop hardware have a lot of money so I'm sure they could find a way.


Rust has an experimental C backend of its own as part of rustc_codegen_clr https://github.com/FractalFir/rustc_codegen_clr . Would probably work better than trying to transpile C from general LLVM IR.


Some people have demonstrated portability using the WASM target, translating that to C89 via w2c2, and then compiling _that_ for the final target.


Given that the maintainer previously said they had tried to pay to get GCC and LLVM ported multiple times, all of which failed, money doesn’t seem to have helped.


Surely the question is how much they tried to pay? Clearly the answer is "not enough".


I mean at one point I had LLVM targeting Xbox 360, PS3, and Wii so I'm sure it's possible, it just needs some imagination and elbow grease :)


Why should free software projects bend over backwards to support obscure proprietary platforms? Sounds absurd to me


Won't someone think of the financial sector


Reminds me of a conversation about TLS and how a certain bank wanted to insert a backdoor into all of TLS for their convenience.


Sucks to be that platform?

Seriously, I guess they just have to live without git if they're not willing to take on support for its tool chain. Nobody cares about NonStop but the very small number of people who use it... who are, by the way, very well capable of paying for it.


I strongly agree. I read some of the counter arguments, like this will make it too hard for NonStop devs to use git, and maybe make them not use it at all. Those don’t resonate with me at all. So what? What value does them using git provide to the git developers? I couldn’t care less if NonStop devs can use my own software at all. And since they’re exclusively at giant, well-financed corporations, they can crack open that wallet and pay someone to do the hard work if it means that much to them.


"You have to backport security fixes for your own tiny platform because your build environment doesn't support our codebase or make your build environment support our codebase" seems like a 100% reasonable stance to me


> your build environment doesn't support our codebase

If that is due to the build environment deviating from the standard, then I agree with you. However, when it's due to the codebase deviating from the standard, why blame the build environment developers for expecting codebases to adhere to standards? That's the whole point of standards.


Is there a standard that all software must be developed in ANSI C that I missed, or something? The git developers are saying - we want to use Rust because we think it will save us development effort. NonStop people are saying we can't run this on our platform. It seems to me someone at git made the calculus: the amount that NonStop is contributing is less than what we save going to Rust. Unless NonStop has a support contract with git developers that they would be violating, it seems to me the NonStop people want to have their cake and eat it too.

According to git docs they seem to try to make a best effort to stick to POSIX but without any strong guarantees, which this change seems to be entirely in line with: https://github.com/git/git/blob/master/Documentation/CodingG...


An important point of using C is to write software that adheres to a decades-old, very widespread standard. Of course developers are free to not do that, but any tiny bit of Rust in the core or even in popular optional code amounts to the same as not using C at all, i.e. only using Rust, as far as portability is concerned.

If your codebase used to conform to a standard and the build environment relies on that standard, and now your codebase doesn't anymore, then it's not the build environment that deviates from the standard, it's the codebase that breaks it.


Had you been under the impression that any of these niche platforms conform to any common standard other than their own?

Because they don’t. For instance, if they were fully POSIX compliant, they’d probably already have LLVM.


I expect them to conform to the C standard or to deal with the deviation. I don't think POSIX compliance is of much use on an embedded target.


I’m sold.


How is this git's concern?


They enjoy being portable and like things to stay that way, so when they introduce a new toolchain dependency that will make it harder for some people to compile git, they point it out in their changelog?


I don't think "NonStop" is a good gauge of portability.

But, I wasn't arguing against noting changes in a changelog, I'm arguing against putting portability to abstruse platforms before quality.


I don’t think staying portable means you have to make concessions on quality. It merely limits your ability to introduce less portable dependencies.

But even then, Git doesn’t mind losing some platforms when they want to move forward on something.


Git's main concern should, of course, be getting Rust in, in some shape or form.


I am curious, does anyone know what is the use case that mandates the use of git on NonStop? Do people actually commit code from this platform? Seems wild.


Nonstop is still supported? :o


Among ecosystems based on YAML-formatted configuration, defaulting to YAML 1.1 is nearly universal. The heyday of YAML was during the YAML 1.1 era, and those projects can't change their YAML parsers' default version to 1.2 without breaking extant config files.

By the time YAML 1.2 had been published and implementations written, greenfield projects were using either JSON5 (a true superset of JSON) or TOML.

  > While JSON numbers are grammatically simple, they're almost always distinct
  > from how you'd implement numbers in any language that has JSON parsers,
  > syntactically, exactness and precision-wise.
For statically-typed languages, the range and precision are determined by the type of the destination value passed to the parser; it's straightforward to reject (or clamp) a JSON number `12345` being parsed into a `uint8_t`.
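
As a rough illustration (a sketch in Rust, assuming the serde and serde_json crates and a hypothetical `Settings` struct), deserializing straight into the destination type turns the out-of-range case into a parse error:

  // Sketch of type-directed JSON number handling: the destination field's
  // type decides what range and precision the parser will accept.
  use serde::Deserialize;

  #[derive(Debug, Deserialize)]
  struct Settings {
      retries: u8, // 0..=255; anything outside that range is a parse error
      ratio: f64,  // decoded as an IEEE-754 double
  }

  fn main() {
      // In range: parses cleanly.
      let ok: Settings = serde_json::from_str(r#"{"retries": 3, "ratio": 0.25}"#).unwrap();
      println!("{ok:?}");

      // Out of range for u8: rejected at parse time instead of silently wrapping.
      let err = serde_json::from_str::<Settings>(r#"{"retries": 12345, "ratio": 0.25}"#);
      assert!(err.is_err());
      println!("{}", err.unwrap_err());
  }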

For dynamically-typed languages there's less emphasis on performance, so using an arbitrary-precision numeric type (Python's Decimal, Go's "math/big" types) provides lossless decoding.

The only language I know of that really struggles with JSON numbers is, ironically, JavaScript -- its BigInt type is relatively new and not well integrated with its JSON API[0], and it doesn't have an arbitrary-precision type.

[0] See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe... for the incantation needed to encode a BigInt as a number.


Arguably the root problem was lack of user namespacing; the incident would have been less likely to happen in the first place if the packages in question were named "~akoculu/left-pad" and "~akoculu/kik".


That's right, and probably a lot fewer people would have used left-pad because it looks like a package for a specific org.


I think that statement is parsed as "npm was the first incredibly accessible package manager for [server-side JavaScript, which at the time was] an emergent popular technology,"


I get that, but there was plenty of prior art to learn from anyway.


You wrote "water has great compressive strength", sk5t directly (and correctly) refuted that claim. What is there to think about?

Are you confusing "compressive strength" with compressibility?


I think his point is that things very rarely experience purely compressive forces. Just being compressed induces tension in other directions, like water being squished out between your clapping hands. So even though water has great compressive strength, in practice this isn't very useful.


Exactly.

Many materials would have compressive strength easily, just by being relatively uncompressible.

But most loads have a (troublesome) tensile component. Fundamentally, the ability of a rigid material to resist deformation (in the most general sense) is what is most important, and that requires tensile strength.

See this comment elsewhere in this sub-thread that explains it probably better than I did: https://news.ycombinator.com/item?id=43904800


Look up the Wikipedia definition [1] of compressive strength:

> In mechanics, compressive strength (or compression strength) is the capacity of a material or structure to withstand loads tending to reduce size (compression). It is opposed to tensile strength which withstands loads tending to elongate, resisting tension (being pulled apart).

Google search AI summary states:

> Compressive strength is a material's capacity to resist forces that try to reduce its volume or cause deformation.

To be fair, compressive strength is a complex measure. Compressibility is only one aspect of it. See this Encyclopedia Britannica article [2] about how compressive strength is tested.

[1] https://en.wikipedia.org/wiki/Compressive_strength

[2] https://www.britannica.com/technology/compressive-strength-t...


Please tell me how to make a water prism to test compressive strength and deformation resistance. Water is an incompressible fluid; that is different.

These are well-understood terms in the field. Unfortunately, this illustrates the bounds of AI in subfields like materials science: it confuses people.


I'm not saying water meets the strict definition of a material with high compressive strength (it does meet some, since it resists forces that attempt to decrease its volume well). I am just using it as an extreme example of the issues with the concept of compressive strength.


lower the temperature


Nothing that you wrote here indicates you understand what is being discussed.

Water has very low compressive strength, so low that it freely deforms under its own weight. You can observe this by pouring some water onto a table. This behavior is distinct from materials with high compressive strength, such as wood or steel.

(I say "very low" instead of "zero" because surface tension could be considered a type of compressive strength at small scales, such as a single drop of water on a hydrophobic surface)


Your comment betrays a lack of comprehension and understanding. Please read my comments and linked definitions carefully.

See this comment elsewhere in this sub-thread that explains it probably better than I did: https://news.ycombinator.com/item?id=43904800


  > use SECCOMP_SET_MODE_STRICT to isolate the child process. But at that
  > point, what are you even doing? Probably nothing useful.
The classic example of a fully-seccomp'd subprocess is decoding / decompression. If you want to execute ffmpeg on untrusted user input then seccomp is a sandbox that allows full-power SIMD, and the code has no reason to perform syscalls other than read/write to its input/output stream.
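
As a rough sketch of that pattern (in Rust with the libc crate on Linux, using the older prctl interface, which is equivalent to SECCOMP_SET_MODE_STRICT; `decode` here is a hypothetical stand-in for the real decoder):

  // Minimal sketch, not production code: a decode worker that enters strict
  // seccomp mode before touching untrusted input.
  fn main() {
      // Pre-allocate all buffers: once strict mode is on, the only permitted
      // syscalls are read(2), write(2), _exit(2) and sigreturn(2), so a later
      // allocation that needs brk/mmap would get the process killed.
      let mut input = vec![0u8; 16 * 1024 * 1024];
      let mut output = vec![0u8; 16 * 1024 * 1024];

      // Equivalent to seccomp(SECCOMP_SET_MODE_STRICT, 0, NULL).
      let rc = unsafe { libc::prctl(libc::PR_SET_SECCOMP, libc::SECCOMP_MODE_STRICT, 0, 0, 0) };
      assert_eq!(rc, 0, "entering strict seccomp mode failed");

      unsafe {
          // Only now read the untrusted data; an exploit that gains code
          // execution past this point is limited to the already-open FDs.
          let n = libc::read(0, input.as_mut_ptr().cast(), input.len());
          if n > 0 {
              let out_len = decode(&input[..n as usize], &mut output);
              libc::write(1, output.as_ptr().cast(), out_len);
          }
          // Normal Rust teardown exits via exit_group(2), which strict mode
          // forbids, so leave through _exit(2) directly.
          libc::_exit(0);
      }
  }

  // Hypothetical placeholder for the CPU-only decode work; a real worker
  // would run the actual decoder (image, video, font, ...) here.
  fn decode(input: &[u8], output: &mut [u8]) -> usize {
      let n = input.len().min(output.len());
      output[..n].copy_from_slice(&input[..n]);
      n
  }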

On the client side there's font shaping, PDF rendering, image decoding -- historically rich hunting grounds for browser CVEs.


> The classic example of a fully-seccomp'd subprocess is decoding / decompression.

Yes. I've run JPEG 2000 decoders in a subprocess for that reason.


Well, it seems that lately this kind of task wants to write/mmap to a GPU, and poke at font files and interpret them.


I flagged this for being LLM-generated garbage; original comment below. Any readers interested in benchmarking programming language implementations should visit https://benchmarksgame-team.pages.debian.net/benchmarksgame/... instead.

---

The numbers in the table for C vs Rust don't make sense, and I wasn't able to reproduce them locally. For a benchmark like this I would expect to see nearly identical performance for those two languages.

Benchmark sources:

https://github.com/naveed125/rust-vs/blob/6db90fec706c875300...

https://github.com/naveed125/rust-vs/blob/6db90fec706c875300...

Benchmark process and results:

  $ gcc --version
  gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
  $ gcc -O2 -static -o bench-c-gcc benchmark.c
  $ clang --version
  Ubuntu clang version 14.0.0-1ubuntu1.1
  $ clang -O2 -static -o bench-c-clang benchmark.c
  $ rustc --version
  rustc 1.81.0 (eeb90cda1 2024-09-04)
  $ rustc -C opt-level=2 --target x86_64-unknown-linux-musl -o bench-rs benchmark.rs

  $ taskset -c 1 hyperfine --warmup 1000 ./bench-c-gcc
  Benchmark 1: ./bench-c-gcc
    Time (mean ± σ):       3.2 ms ±   0.1 ms    [User: 2.7 ms, System: 0.6 ms]
    Range (min … max):     3.2 ms …   4.1 ms    770 runs

  $ taskset -c 1 hyperfine --warmup 1000 ./bench-c-clang
  Benchmark 1: ./bench-c-clang
    Time (mean ± σ):       3.5 ms ±   0.1 ms    [User: 3.0 ms, System: 0.6 ms]
    Range (min … max):     3.4 ms …   4.8 ms    721 runs

  $ taskset -c 1 hyperfine --warmup 1000 ./bench-rs
  Benchmark 1: ./bench-rs
    Time (mean ± σ):       5.1 ms ±   0.1 ms    [User: 2.9 ms, System: 2.2 ms]
    Range (min … max):     5.0 ms …   7.1 ms    507 runs

Those numbers also don't make sense, but in a different way. Why is the Rust version so much slower, and why does it spend the majority of its time in "system"?

Oh, it's because benchmark.rs is performing a dynamic memory allocation for each key. The C version uses a buffer on the stack, with fixed-width keys. Let's try doing the same in the Rust version:

  --- benchmark.rs
  +++ benchmark.rs
  @@ -38,22 +38,22 @@
   }
 
   // Generates a random 8-character string
  -fn generate_random_string(rng: &mut Xorshift) -> String {
  +fn generate_random_string(rng: &mut Xorshift) -> [u8; 8] {
       const CHARSET: &[u8] = b"0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
  -    let mut result = String::with_capacity(8);
  +    let mut result = [0u8; 8];
   
  -    for _ in 0..8 {
  +    for ii in 0..8 {
           let rand_index = (rng.next() % 62) as usize;
  -        result.push(CHARSET[rand_index] as char);
  +        result[ii] = CHARSET[rand_index];
       }
   
       result
   }
   
   // Generates `count` random strings and tracks their occurrences
  -fn generate_random_strings(count: usize) -> HashMap<String, u32> {
  +fn generate_random_strings(count: usize) -> HashMap<[u8; 8], u32> {
       let mut rng = Xorshift::new();
  -    let mut string_counts: HashMap<String, u32> = HashMap::new();
  +    let mut string_counts: HashMap<[u8; 8], u32> = HashMap::with_capacity(count);
   
       for _ in 0..count {
           let random_string = generate_random_string(&mut rng);
Now it's spending all its time in userspace again, which is good:

  $ taskset -c 1 hyperfine --warmup 1000 ./bench-rs
  Benchmark 1: ./bench-rs
    Time (mean ± σ):       1.5 ms ±   0.1 ms    [User: 1.3 ms, System: 0.2 ms]
    Range (min … max):     1.4 ms …   3.2 ms    1426 runs
 
... but why is it twice as fast as the C version?

---

I go to look in benchmark.c, and my eyes are immediately drawn to this weird bullshit:

  // Xorshift+ state variables (64-bit)
  uint64_t state0, state1;

  // Xorshift+ function for generating pseudo-random 64-bit numbers
  uint64_t xorshift_plus() {
      uint64_t s1 = state0;
      uint64_t s0 = state1;
      state0 = s0; 
      s1 ^= s1 << 23; 
      s1 ^= s1 >> 18; 
      s1 ^= s0; 
      s1 ^= s0 >> 5;
      state1 = s1; 
      return state1 + s0; 
  }
That's not simply a copy of the xorshift+ example code on Wikipedia. Is there any human in the world who is capable of writing xorshift+ but is also dumb enough to put its state into global variables? I smell an LLM.

A rough patch to put the state into something the compiler has a hope of optimizing:

  --- benchmark.c
  +++ benchmark.c
  @@ -18,25 +18,35 @@
   StringNode *hashTable[HASH_TABLE_SIZE]; // Hash table for storing unique strings
   
   // Xorshift+ state variables (64-bit)
  -uint64_t state0, state1;
  +struct xorshift_state {
  +       uint64_t state0, state1;
  +};
   
   // Xorshift+ function for generating pseudo-random 64-bit numbers
  -uint64_t xorshift_plus() {
  -    uint64_t s1 = state0;
  -    uint64_t s0 = state1;
  -    state0 = s0;
  +uint64_t xorshift_plus(struct xorshift_state *st) {
  +    uint64_t s1 = st->state0;
  +    uint64_t s0 = st->state1;
  +    st->state0 = s0;
       s1 ^= s1 << 23;
       s1 ^= s1 >> 18;
       s1 ^= s0;
       s1 ^= s0 >> 5;
  -    state1 = s1;
  -    return state1 + s0;
  +    st->state1 = s1;
  +    return s1 + s0;
   }
   
   // Function to generate an 8-character random string
   void generate_random_string(char *buffer) {
  +    uint64_t timestamp = (uint64_t)time(NULL) * 1000;
  +    uint64_t state0 = timestamp ^ 0xDEADBEEF;
  +    uint64_t state1 = (timestamp << 21) ^ 0x95419C24A637B12F;
  +    struct xorshift_state st = {
  +        .state0 = state0,
  +        .state1 = state1,
  +    };
  +
       for (int i = 0; i < STRING_LENGTH; i++) {
  -        uint64_t rand_value = xorshift_plus() % 62;
  +        uint64_t rand_value = xorshift_plus(&st) % 62;
   
           if (rand_value < 10) { // 0-9
               buffer[i] = '0' + rand_value;
  @@ -113,11 +123,6 @@
   }
   
   int main() {
  -    // Initialize random seed
  -    uint64_t timestamp = (uint64_t)time(NULL) * 1000;
  -    state0 = timestamp ^ 0xDEADBEEF; // Arbitrary constant
  -    state1 = (timestamp << 21) ^ 0x95419C24A637B12F; // Arbitrary constant
  -
       double total_time = 0.0;
   
       // Run 3 times and measure execution time
  
and the benchmarks now make slightly more sense:

  $ taskset -c 1 hyperfine --warmup 1000 ./bench-c-gcc
  Benchmark 1: ./bench-c-gcc
    Time (mean ± σ):       1.1 ms ±   0.1 ms    [User: 1.1 ms, System: 0.1 ms]
    Range (min … max):     1.0 ms …   1.8 ms    1725 runs
  
  $ taskset -c 1 hyperfine --warmup 1000 ./bench-c-clang
  Benchmark 1: ./bench-c-clang
    Time (mean ± σ):       1.0 ms ±   0.1 ms    [User: 0.9 ms, System: 0.1 ms]
    Range (min … max):     0.9 ms …   1.4 ms    1863 runs
But I'm going to stop trying to improve this garbage, because on re-reading the article, I saw this:

  > Yes, I absolutely used ChatGPT to polish my code. If you’re judging me for this,
  > I’m going to assume you still churn butter by hand and refuse to use calculators.
  > [...]
  > I then embarked on the linguistic equivalent of “Google Translate for code,”
Ok so it's LLM-generated bullshit, translated into other languages either by another LLM, or by a human who doesn't know those languages well enough to notice when the output doesn't make any sense.


> my eyes are immediately drawn to this weird bullshit

Gave me a good chuckle there :)

Appreciate this write up; I'd even say your comment deserves its own article, tbh. Reading your thought process and how you addressed the issues was interesting. A lot of people don't know how to identify or investigate weird bullshit like this.


So glad I had read the 2nd agreement by Don Miguel Ruiz lol.


The "dsb nsh; isb" sequence after "svc 0" is part of OpenBSD's mitigations for Spectre.

https://github.com/openbsd/src/commit/bbeaada4689520859307d5...

https://github.com/openbsd/src/commit/0c401ffc2a2550c32105ce...

https://github.com/openbsd/src/commit/5ecc9681133f1894e81c38...

If I'm reading the commits correctly, the OpenBSD kernel will skip two instructions after a "svc 0" when returning to userspace, on the assumption that any syscall comes from libc and therefore has "dsb nsh; isb" after it.
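
For illustration, a rough sketch of the shape of such a stub (Rust inline asm, not OpenBSD's actual libc source; the use of x8 for the syscall number is an assumption here):

  // Illustrative only: an aarch64 syscall wrapper where "dsb nsh; isb" sits
  // immediately after "svc 0", i.e. the two instructions the kernel expects
  // to be able to skip when returning to userspace.
  #[cfg(target_arch = "aarch64")]
  unsafe fn syscall1(nr: u64, arg0: u64) -> u64 {
      let ret;
      core::arch::asm!(
          "svc 0",
          "dsb nsh", // data synchronization barrier (non-shareable domain)
          "isb",     // flush the pipeline so nothing runs ahead of the syscall
          in("x8") nr,                  // assumption: syscall number in x8
          inlateout("x0") arg0 => ret,  // first argument in, return value out
          options(nostack),
      );
      ret
  }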

