I didn't realize level 1 gave me 11 (eleven) walls at first. I thought it stood for II = roman 2. Maybe use a font that makes the difference between 1 and I clearer.
You're forgetting a very important problem: it's hard to implement. Sugar in drinks and CO2 emissions are easily measured. Defining what counts as an ad is much harder.
"The University of Rhode Island based its report on its estimates that producing a medium-length, 1,000-token GPT-5 response can consume up to 40 watt-hours (Wh) of electricity, with an average just over 18.35 Wh, up from 2.12 Wh for GPT-4. This was higher than all other tested models, except for OpenAI's o3 (25.35 Wh) and Deepseek's R1 (20.90 Wh)."
These numbers don't pass a sanity check for me. With 4x300W cards you can get a 1K-token DeepSeek R1 output in about 10 seconds. That's just 3.3 Wh, right? And that's before you even consider batching.
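As a rough back-of-the-envelope check, a minimal Python sketch using only the figures assumed in this comment (4 cards at 300 W each, ~10 seconds for a 1K-token response; real utilization, CPUs and other overhead aren't accounted for):

    # Back-of-the-envelope energy estimate from the assumed figures above:
    # 4 GPUs at 300 W each, ~10 s to generate a ~1,000-token response.
    num_gpus = 4
    power_per_gpu_w = 300        # watts per card (assumed)
    generation_time_s = 10       # seconds for ~1K tokens (assumed)

    energy_joules = num_gpus * power_per_gpu_w * generation_time_s
    energy_wh = energy_joules / 3600
    print(f"{energy_wh:.2f} Wh")  # ~3.33 Wh, before any batching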
Which is stupid, since those are exactly the vulnerabilities worth finding out whether they exist.
I can understand that in a heavily regulated industry (e.g. medical), a company couldn't, due to liability, give you the go-ahead to poke into other users' data in an attempt to find a vulnerability, but they could always publish the details of a dummy account populated with fake, identifiable data.
Something like:
It is strictly forbidden to probe arbitrary user data. However, if a vulnerability is suspected to allow access to user data, the user with GUID 'xyzw' is permitted to probe.
Now you might say that won't help: the people who want to follow the rules probably will, and the people who don't want to won't anyway.
Disagree, it can be learning as long as you build out your mental model while reading. Having educational reading material for the exact thing you're working on is amazing, at least for those with interest-driven brains.
Science YouTube is no comparison at all: while one can choose what to watch, it's a limited menu produced for a mass audience.
I agree though that reading LLM-produced blog posts (which many of the recent top submissions here seem to be) is boring.
Don't worry, it's an LLM that wrote it based on the patterns in the text, e.g. "Starting a new project once felt insurmountable. Now, it feels realistic again."
Yes, for an LLM. The good thing about LLMs is that they can infer patterns. The bad thing about LLMs is that they infer patterns. The patterns change a bit over time, but the overuse of certain language patterns remains a constant.
One could argue that some humans write that way, but ultimately it does not matter whether the text was generated by an LLM, reworded by a human in a semi-closed loop, or produced organically by a human. The patterns indicate that the text is just a regurgitation of buzzwords, and it's even worse if LLM-like text was produced organically.
> While I have my opinions on existing crates, I believe we can share experiences and finally converge on a common good solution, no matter who made it.
I'm pretty sure an LLM will be able to handle an instruction such as:
"Wherever exceptions are thrown, add as much contextual information to the exceptions as possible. Use class RichException<Exception> to store the extra information". Etc. etc.
Sure, but writing and maintaining such instructions is also work. And not something one usually thinks about until the debugging session with insufficient error information.
Yeah. Certainly felt like that. On the other hand, the content does seem good. It definitely wasn't slop, even if I can't judge how useful it really was (in terms of giving a solution).