> It's compilers and compiler optimizations that make code run fast
Well, then in many cases we are talking about LLVM vs LLVM.
> Ultimately, producing fast/optimal code in C kind of is the whole point of C
Mostly a nitpick, but I'm not convinced that's true. The performance queen has traditionally been C++. In C projects it's not rare to see very suboptimal design choices mandated by the language's very low expressivity (e.g. no multi-threading, sticking to a simpler data structure, etc.).
Compilers are only as good as the semantics you give them. C and C++ both have some pretty bad semantics in many places that heavily encourage inefficient coding patterns.
I generally agree with your take, but I don't think C is in the same league as Rust or C++. C has absolutely terrible expressivity; you can't even have proper generic data structures. And something like the small string optimization that standard C++ does out of the box is basically impossible in C - it's not a question of effort, it's a question of "are you even writing code, or assembly".
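For what it's worth, here is a minimal sketch of the SSO idea in C++ - just to illustrate the concept, not how any real standard library lays out std::string:

    #include <cstddef>
    #include <cstring>

    // Sketch of the small-string-optimization idea: strings that fit in an
    // inline buffer avoid heap allocation entirely; longer ones spill to the
    // heap, all behind the same interface.
    class SsoString {
        static constexpr std::size_t kInline = 15;   // bytes available inline
        std::size_t size_;
        union {
            char inline_buf_[kInline + 1];           // short strings live here
            char* heap_;                             // long strings live on the heap
        };
        bool is_small() const { return size_ <= kInline; }

    public:
        explicit SsoString(const char* s) : size_(std::strlen(s)) {
            char* dst = is_small() ? inline_buf_ : (heap_ = new char[size_ + 1]);
            std::memcpy(dst, s, size_ + 1);          // copy including the '\0'
        }
        ~SsoString() { if (!is_small()) delete[] heap_; }
        SsoString(const SsoString&) = delete;        // copy/move omitted for brevity
        SsoString& operator=(const SsoString&) = delete;

        const char* c_str() const { return is_small() ? inline_buf_ : heap_; }
        std::size_t size() const { return size_; }
    };

Real implementations pack the size/capacity/flag bits far more cleverly, but the point stands: doing this once, generically, for every string in the program is exactly the kind of thing C makes painful.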
Yes, it is the difference between "in theory" and "in practice". In practice, almost no one would write the C required to keep up with the expressiveness of modern C++. The difference in effort is too large to be worth even considering. It is why I stopped using C for most things.
There is a similar argument around using "unsafe" in Rust. You need to use a lot of it in some cases to maintain performance parity with C++. Achievable in theory but a code base written in this way is probably going to be a poor experience for maintainers.
Each of these languages has a "happy path" of applications where differences in expressivity will not have a material impact on the software produced. C has a tiny "happy path" compared to the other two.
Well, what about small CLI tools like ripgrep? Does multithreading not matter when we open a large number of files and process them? What about compilers?
Sure. But the more obviously parallel the problem is (visiting N files), the less compelling complex synchronization tools are.
To over-explain: if you just need to make N forks of the same logic, then it's very easy to do this correctly in C. The cases where I'm going to carefully maintain shared mutable state with locking are cases where the parallelism is less efficient anyway (Amdahl's law).
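The pattern is short enough to show. A POSIX-only sketch (written in C-flavoured C++; the per-file work is just a byte count so it's self-contained):

    #include <sys/wait.h>
    #include <unistd.h>
    #include <cstdio>

    // Stand-in for real per-file work (e.g. compiling foo.c into foo.o).
    static void process_file(const char* path) {
        std::FILE* f = std::fopen(path, "rb");
        if (!f) return;
        long bytes = 0;
        for (int c; (c = std::fgetc(f)) != EOF; ) ++bytes;
        std::fclose(f);
        std::printf("%s: %ld bytes\n", path, bytes);
    }

    int main(int argc, char** argv) {
        for (int i = 1; i < argc; ++i) {
            pid_t pid = fork();
            if (pid == 0) {              // child: handle exactly one file, then exit
                process_file(argv[i]);
                _exit(0);
            }                            // (fork-failure handling omitted for brevity)
        }
        for (int i = 1; i < argc; ++i)
            wait(nullptr);               // parent: reap every child
        return 0;
    }

No locks, no shared mutable state - the kernel isolates the children for you.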
Java-style apps that just haphazardly start threads are what Rust makes safer. But that's a category of program design I find brittle and painful.
The example you gave of a compiler is canonically implemented as multiple processes making .o files from .c files, not threads.
> The example you gave of a compiler is canonically implemented as multiple processes making .o files from .c files, not threads.
This is a huge limitation of C's compilation model, and basically every language since has done it differently, so I'm not sure it's a good example. You do want some "interconnection" between translation units, or at least less fine-grained units.
It reminds me of the joke about the person who claims "I can do math very fast", gets probed with a multiplication, and immediately blurts out some total bollocks answer.
- "That's not even close"
- "Yeah, but it was fast"
Sure, it's not a trivial problem, but why wouldn't we want better compilation results/developer ergonomics at the price of more compiler complexity and some minimal performance penalty?
And it's not like that performance comes without its own set of negatives: header-only libraries, for example, are a hack that grew directly out of this compilation model.
Well, "a search engine that applies some transformations on top of the results" doesn't sound to me like a terrible way to think about LLMs.
> can follow logical rules
This is not their strong suit, though. They can only follow through a few levels on their own. This can be improved by agent-style iteration or by invoking external tools.
Let's see how this comment ages, why don't we. I've understood where we are going, and if you look at my comment history, I have confidence that in 12 months' time one opinion will be proved out by observations and the other will not.
For the "only few levels" claim, I think this one is sort of evident from the way they work. Solving a logical problem can have an arbitrary number of steps, and in a single pass there is only so many connection within a LLM to do some "work".
As mentioned, there are good ways to counter this problem (e.g. writing a plan and then iterating over its smaller, less complex steps, or simply using the proper tool for the job: e.g. a SAT solver, with the LLM just "translating" the problem to and from the appropriate format).
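To make the "translate to the appropriate format" part concrete, here's a toy sketch: the constraint "exactly one of x1, x2, x3 is true" emitted as DIMACS CNF, which any off-the-shelf SAT solver accepts (actually invoking a solver and mapping its answer back is left out):

    #include <cstdio>

    int main() {
        std::printf("p cnf 3 4\n");   // header: 3 variables, 4 clauses
        std::printf("1 2 3 0\n");     // at least one of x1, x2, x3
        std::printf("-1 -2 0\n");     // not both x1 and x2
        std::printf("-1 -3 0\n");     // not both x1 and x3
        std::printf("-2 -3 0\n");     // not both x2 and x3
        return 0;
    }

The solver then does the arbitrarily deep logical work; the LLM only has to get the encoding right.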
Nonetheless, I'm always open to new information/evidence, and it will surely improve a lot in a year. For reference, to date this is my favorite description of LLMs: https://news.ycombinator.com/item?id=46561537
Not really. A completely unintelligent autopilot can fly an F-16. You cannot assume general intelligence from scaffolded tool-using success in a single narrow area.
I assumed extreme performance from a general AI matching and exceeding average human intelligence when placed in an F-16 or an equivalent cockpit specified for conducting math proofs.
That's not AGI at all. I don't think you understand that LLMs will never hit AGI even when they exceed human intelligence in all applicable domains.
The main reason is that they don't feel emotions. Even if the definition of AGI doesn't currently encompass emotions, people like you will move the goalposts and shift the definition until it does. So as AI improves, the threshold will be adjusted to make sure it never reaches AGI, because it's an existential and identity crisis for many people to admit that an AI is better than them on all counts.
That's called a hypothetical. I didn't say that we put an AGI into an F-16. I asked what the outcome would be. And the outcome is pretty similar. Please read carefully before making a false statement.
> You're claiming I said a lot of things I didn't; everything you seem to be stating about me in this comment is false.
Apologies. I thought you were being deliberate. What really happened is you made a mistake. Also I never said anything about you. Please read carefully.
WASM's current GC model is mostly about sharing large byte buffers. It operates at roughly the level of OS-style memory page management. Mostly it's being used to share memory surfaces for JSON serialization/deserialization without copying that memory across the WASM-to-JS boundary anymore.
It will be a while before WASM GC looks anything like a language's own GC.
Because people don't want to load 300 MB for a simple website (and this blocks the first render; it's not just loading in the background).
Not every language is a good fit for targeting WASM, in the sense that you don't want to bring a whole standard library, a custom runtime, etc. with you.
High-level languages may fare better if their GC is compatible with Wasm's GC model, though, as in that case the resulting binaries could be quite small. I believe Java-to-wasm binaries can be quite lean for that reason.
In C#'s case, it's probably mostly Blazor's implementation. In this form it's not a good fit for every kind of website (but it's very nice for e.g. an internal admin site and the like).
A modern Blazor WASM app is nowhere near 300 MB. There are techniques to reduce the size, like tree shaking, so there's no need to include lots of unused libraries.
Modern Blazor can do server-side rendering for SEO/crawlers and a fast first load, similar to Next.js, and then seamlessly transition to client-side rendering or interactive server-side rendering afterwards.
Your info/opinion may be based on earlier iterations of Blazor.
That's still pretty bloated. That's enough to fit an entire Android application from a few years ago (before AndroidX), or a simple Windows/Linux application. I'll agree that it's justified if you're optimizing for runtime performance rather than first load, which seems to be appropriate for your product, right?!
What is this 2 MB for? It would be interesting to hear about your WebAssembly performance story!
Regarding the website homepage itself: it weighs around 767.32 kB uncompressed in my testing, most of which is an unoptimized 200+ kB JPEG file and some insanely large web fonts (which honestly are unnecessary; the website looks _pretty good_ without them and would load much faster).
If you prefer: salaries correlate with years of experience, and the latter surely correlates with skill, right?
(No, this doesn't mean that every dev with 10 years of experience is better than one with 3, but it's definitely a strong correlation.)