Yes, I often say that the gold from America was a poisonous gift. It made the country so rich that people stopped caring about other things; they could just buy them from elsewhere. So there was little incentive to manufacture first, and industrialize later. Which is ironic, because some of the first steam engines you can find in Europe were invented in Spain (https://en.wikipedia.org/wiki/Jer%C3%B3nimo_de_Ayanz_y_Beaum...). It also funded numerous stupid wars, with the human cost they bring. This process is called the Dutch disease: https://en.wikipedia.org/wiki/Dutch_disease
> It made the country so rich that people stopped caring about other things; they could just buy them from elsewhere. So there was little incentive to manufacture first, and industrialize later.
A transoceanic fleet of ships regularly circumnavigating the globe and transporting cargo requires a lot of technology. Why didn't the maritime industry stimulate manufacturing and industrialization?
> the Mississippi River Valley had more millionaires per capita
TIL.
Does anyone have recommendations for a (relatively) good social/economic history of the Southern US states? (I guess that type of history book would cover this kind of information.)
from the antebellum era? try anything by eric foner, but a good place to start is forever free: the story of emancipation and reconstruction. it's accessible history that includes a bit of background about the political economy leading up to the civil war. any of his books about reconstruction might be interesting but not exactly what you're looking for. "black reconstruction in america" by dubois is a keystone text that has a bit about the southern economic history. and maybe "old south, new south" by gavin wright for post reconstruction economic history
never heard of the dutch disease (i am dutch). pretty cool comment thanks.
kind of funny to think they still have trouble actually extracting that gas due to activism around earthquakes and people not eager to move away from those places. also now the climate push.
it kinda looks (from an uneducated perspective) like they suffered this disease for nothing.
Spaniard here. We have gentlemen like Leonardo Torres Quevedo and the 'Telekino', something even the IEEE would be amazed by.
But our damn national motto on R&D was "Que inventen otros" (let the foreigners invent).
EDIT: It actually was "Que inventen ellos" (let them invent it).
Something like: let's just slack off and keep living a traditionalist, rural, Romantic 19th-century life, and let the rest do the modern inventing. Of course, as I said, Torres Quevedo was the exception, but overall I find our right-wing politicians still have that Empire-bound mindset. Even the progressive left are almost ranting luddites; they look down on science from their Liberal Arts thrones.
In the end it's one set of outdated rural-Romantic idiots fighting another set of outdated left-wing jerks who, paradoxically, share a similar love of rural Spain: the pure, hard-working, 'ecological' peasant against the polluting urbanite.
And sometimes I wish these Boomer-influenced journalists (literal boomers on both sides) would get with the times once and for all.
Use science to fight climate change. Use libre software to expand education and knowledge like at no other time in history.
Act smart, not on gut feeling.
Loongson started with MIPS CPUs, but its current CPUs are not MIPS-compatible. LoongArch, while very similar to MIPS, uses a different encoding, and some other details have changed. Better to say MIPS-inspired.
What are LoongArch's technical advantages over RISC-V? In other words, why should a company develop their own architecture (which then they need to push support for) rather than use an existing, free one?
Back when LoongArch was announced, RISC-V did not yet have enough (ratified) extensions to achieve feature-parity.
Even if it had, LoongArch is much more similar to MIPS. Loongson would have had to make more microarchitectural changes before being able to tape out their first non-MIPS CPU.
I don't know about advantages, but lead times in the chip business are long and you're not turning around on a dime without very pressing reasons. Loongson has probably had many things in the pipeline as RISC-V started gaining steam. Their current processors are more advanced designs than the best known RISC-Vs.
Impressive post, so many details. I could only understand some parts of it, but I think this article will probably be a reference for future graphics APIs.
I think it's fair to say that for most gamers, Vulkan/DX12 hasn't really been a net positive. The PSO problem affected many popular games, and while Vulkan has been trying to improve, WebGPU is tricky as it has its roots in the first versions of Vulkan.
Perhaps it was a bad idea to go all in on a low-level API that exposes many details when the hardware underneath is evolving so fast. Maybe CUDA, with its more generic compute support, is the right way after all, as the post says in some places.
Yes, an amazing and detailed post, enjoyed all of it. In AI, it is common to use jit compilers (pytorch, jax, warp, triton, taichi, ...) that compile to cuda (or rocm, cpu, tpu, ...).
You could write renderers like that, rasterizers or raytracers.
(A new simple raytracer that compiles to cuda, used for robotics reinforcement learning, renders at up to 1 million fps at low resolution, 64x64, with textures, shadows)
Problem is that NVIDIA literally makes the only sane graphics/compute APIs. And part of it is to make the API accessible, not needlessly overengineered. Either the other vendors start to step up their game, or they'll continue to lose.
I'm having a hard time taking an API seriously that uses atomic types rather than atomic functions. But at least it seems to be better than Vulkan/OpenGL/DirectX.
At least where I work, writing new Java code is discouraged and you should instead use Kotlin for backend services. Spring Boot, which is the framework we use, supports Kotlin just fine, at the same level as Java. And if you use JetBrains tools, Kotlin tooling is also pretty good (outside JetBrains I will admit it is worse than Java's). Now, even in new Java projects you can still end up using Kotlin, because it is the default language for Gradle (previously it was Groovy).
So sorry for your loss. Some months ago I was very angry to find out that a person died from a bike injury that could easily have been treated, but he had to wait almost 50 minutes for the ambulance. Not because he was far away, but because he was between two regions and the 112 operators were arguing over who should send the ambulance. In fact, they initially sent one and later told the driver to turn back while on the highway. He died just because he happened to have the accident near the border of two regions of the same country, each with its own public health system.
As always with this kind of stuff, there are so many inaccuracies, at least in the parts I know about. Roads are mostly OK, although some of them are more like "suppositions" than real roads we have found. Let's take a look at the area around Valladolid: https://imgur.com/xMW6yiY
- Pintia is almost certainly near the Duero/Douro river, much more to the south and to the east. It is one of the most explored pre-Roman settlements in the area, and while there is no definitive proof, there are many hints that it's in the place I marked and not where it's shown on the map
- Amallobriga is also, for most historians, located in Tiedra, but the map shows Tordesillas. As you can see on the map, the actual location of Tiedra is also a road intersection. The location in Tiedra is consistent with archaeological evidence and with route books that give the distance from Amallobriga to other cities we know.
- Nobody really knows where Intercatia or Tela are. But note that there's a big road intersection to the south. It is confirmed that there was a settlement there, but we do not know its name; several have been proposed. In any case, Intercatia is very unlikely to be located where the map shows it, as no roads go to it. Many archaeologists say it could be in the present-day town of Paredes de Nava.
- I don't think there's any real evidence of a bridge that crosses the Douro/Duero river there. What we know is that there's a medieval bridge closer to Septimanca and that it could have had a Roman origin, but according to the map there's no road there.
I was wondering the same thing about the road crossing the Douro between present-day Vila Nova de Gaia and Porto. Was there a bridge there during Roman times? Interestingly, it would be right where the Luiz I Bridge is now.
After some quick research, there's no evidence of a bridge there, and it seems it would have been difficult to build even for the Romans. But there could have been people with boats in Cale to help cross the river, and that could still be considered part of the road.
I know that not too long ago there was a "bridge", which was a bunch of boats lined up from one bank to the other. Not sure if this counts as a bridge.
Having learned assembly from the book "Computer Organization And Design" by Patterson and Hennessy, I can say it really shows how much RISC-V takes from MIPS. After all, some of the same people were involved in both ISAs, and they learned from the MIPS mistakes (no delay slots!). Basically, if you come from MIPS the assembly is very, very similar, as was my case.
Now that book is also available in a RISC-V edition, which has a very interesting chapter comparing all the different RISC ISAs and what they do differently (SH, Alpha, SPARC, PA-RISC, POWER, ARM, ...).
However I've been exploring AArch64 for some time and I think it has some very interesting ideas too. Maybe not as clean as RISC-V but with very pragmatic design and some choices that make me question if RISC-V was too conservative in its design.
> However I've been exploring AArch64 for some time and I think it has some very interesting ideas too. Maybe not as clean as RISC-V but with very pragmatic design and some choices that make me question if RISC-V was too conservative in its design.
Not enough people reflect on this, or the fact that it's remarkably hazy where exactly AArch64 came from and what guided the design of it.
AArch64 came from AArch32. That's why it keeps things like condition codes, which are a big mistake for large out-of-order implementations. RISC-V sensibly avoids this by having condition-and-branch instructions instead. Otherwise, RISC-V is conservative because it tries to avoid possibly encumbered techniques. But other than that it's remarkably simple and elegant.
> That's why it keeps things like condition codes, which are a big mistake for large out-of-order implementations. RISC-V sensibly avoids this by having condition-and-branch instructions instead.
Respectfully, the statement in question is partially erroneous and, in far greater measure, profoundly misleading. A distortion draped in fragments of truth remains a falsehood nonetheless.
Whilst AArch64 does retain condition flags, it is not simply because of «AArch32 stretched to 64-bit», and condition codes are not a «big mistake» for large out-of-order (OoO) cores. AArch64 also provides compare-and-branch forms similar to RISC-V, so the contrast given is a false dichotomy.
Namely:
– «AArch64 came from AArch32» – historically AArch64 was a fresh ARMv8-A ISA design that removed many AArch32 features. It has kept flags, but discarded pervasive per-instruction predication and redesigned much of the encoding and register model;
– «Flags are a big mistake for large OoO» – global flags do create extra dependencies, yet modern cores (x86 and ARM) eliminate most of the cost with techniques such as flag renaming, out-of-order flag generation, and instruction forms that avoid setting flags when unnecessary. High-IPC x86 and ARM cores demonstrate that flags are not an inherent limiter;
– «RISC-V avoids this by having condition-and-branch» – AArch64 also has condition-and-branch style forms that do not use flags, for example:
1) CBZ/CBNZ xN, label – compare register to zero and branch;
2) TBZ/TBNZ xN, #bit, label – test bit and branch.
Compilers freely choose between these and flag-based sequences, depending on what is already available and the code/data flow. Also, many arithmetic operations do not set flags unless explicitly requested, which reduces false flag dependencies.
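As an illustrative sketch of that choice (typical compiler behaviour, not exact output of any particular toolchain), a plain C null check is a natural candidate for the flag-free forms:

    #include <stddef.h>

    /* Typical AArch64 lowerings (illustrative):
     *   flag-free:   cbz  x0, .Lnull      // compare-and-branch, NZCV untouched
     *   flag-based:  cmp  x0, #0          // sets NZCV
     *                b.eq .Lnull          // branch reads the flags
     * Both implement the same source-level test below.
     */
    int first_or_default(const int *p)
    {
        if (p == NULL)   /* candidate for cbz/cbnz */
            return -1;
        return *p;
    }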
Lastly, but not least importantly, Apple’s big cores are among the widest, deepest out-of-order designs in production, with very high IPC and excellent branch handling. Their microarchitectures and toolchains make effective use of:
– Flag-free branches where convenient – CBZ/CBNZ, TBZ/TBNZ (see above);
– Flag-setting only when it is free or beneficial – ADDS/SUBS feeding a conditional branch or CSEL;
– Advanced renaming – including flag renaming – which removes most practical downsides of a global NZCV.
You are, of course, most welcome to offer your contributions — whether in debate or in contestation of the points I have raised – beyond the hollow reverberations of yet another LLM echo chamber.
The information I used to contest the original statement comes from the AArch64 ISA documentation as well as from the infamous «M1 Explainer (070)» publication, namely sections titled «Theory of a modern OoO machine» and «How Do “set flags” Instructions, Like ADDS, Modify the History File?».
Thanks for the link to that article, by the way! I missed a lot of the “ephemeral literature” that was being passed around when M1 was first released and we were collectively trying to understand it.
That will be amazing when it happens, and a year is VERY soon!
Tenstorrent's first "Atlantis" Ascalon dev board is going to have a similar µarch to the Apple M1 but run at a lower clock speed. However, all 8 cores are "performance" cores, so it should be in the N150 ballpark single-core and soundly beat it multi-core.
They are currently saying Q2 2026, which is only 4-7 months from now.
AFAIR, AArch64 was basically designed by Apple for their A-series iPhone processors, and pushed to become the official ARM standard. Those guys really knew what they were doing, and it shows.
It's clear that Arm worked with Apple on AArch64 but saying it was basically designed 'by Apple' rather than 'with Apple' is demonstrably unfair to the Arm team who have decades of experience in ISA design.
If Apple didn't need Arm then they would have probably found a way of going it alone.
Apple helped develop Arm originally and was a (very) early user with Newton. Why would they go it alone when they already had a large amount of history and familiarity available?
I get the same impression w.r.t. RISC-V v. MIPS similarities, just from my (limited) exposure to Nintendo 64 homebrew development. Pretty striking how often I was thinking to myself “huh, that looks exactly like what I was fiddling with in Ares+Godbolt, just without the delay slots”.
Instructions are more easily added than taken away. RISC-V started with a minimum viable set of instructions to efficiently run standard C/C++ code. More instructions are being added over time, but the burden of proof is on someone proposing a new instruction to demonstrate what adding the instruction costs and how much benefit it brings and in what real-world applications.
> Instructions are more easily added than taken away.
That's not saying much, it's basically impossible to remove an instruction. Just because something is easier than impossible doesn't mean that it's easy.
And sure, from a technical perspective, it's quite easy to add new instructions to RISC-V. Anyone can draft up a spec and implement it in their core.
But if you actually want widespread adoption of a new instruction, to the point where compilers can actually emit it by default and expect it to run everywhere, that's really, really hard. First you have to prove that the instruction is worth standardizing, then debate the details and actually agree on a spec. Then you have to repeat the process and argue the extension is worth including in the next RVA profile, which is highly contentious.
Then you have to wait. Not just for the first CPUs to support that profile. You have to wait for every single processor that doesn't support that profile to become irrelevant. It might be over a decade before a compiler can safely switch on that instruction by default.
It's not THAT hard. Heck, I've done it myself. But, as I said, the burden of proof that something new is truly useful quite rightly lies with the proposer.
The ORC.B instruction in Zbb was my idea, never done anywhere before as far as anyone has been able to find. I proposed it in late 2019, it was in the ratified spec in late 2021, and it was implemented in the very popular JH7110 quad-core 1.5 GHz SoC in the VisionFive 2 (and many others later on) that was delivered to pre-order customers in Dec 2022 / Jan 2023.
You might say that's a long time, but that's pretty fast in the microprocessor industry -- just over three years from proposal (by an individual member of RISC-V International) to mass-produced hardware.
Compare that to Arm who published the spec for SVE in 2016 and SVE 2 in 2019. The first time you've been able to buy an SBC with SVE was early 2025 with the Radxa Orion O6.
In contrast RISC-V Vector extension (RVV) 1.0 was published in late 2021 and was available on the CanMV-K230 development board in November 2023, just two years later, and in a flood of much more powerful octa-core SpacemiT K1/M1 boards (BPI-F3, Milk-V Jupiter, Sipeed LicheePi 3A, Muse Pi, DC-Roma II laptop) starting around six months later.
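(For anyone curious what ORC.B actually does, here is a rough scalar C model of its semantics as given in the Zbb spec; orcb64 is just an illustrative name, not a real intrinsic.)

    #include <stdint.h>

    /* Model of RISC-V Zbb ORC.B ("OR-combine, byte granule"):
     * each result byte is 0xFF if the corresponding input byte is
     * non-zero, and 0x00 otherwise. Useful for spotting a zero byte
     * in a word, e.g. in word-at-a-time strlen.
     */
    static uint64_t orcb64(uint64_t x)
    {
        uint64_t r = 0;
        for (int i = 0; i < 8; i++) {
            if ((x >> (8 * i)) & 0xFF)
                r |= (uint64_t)0xFF << (8 * i);
        }
        return r;
    }

    /* A 64-bit word w contains a NUL byte iff orcb64(w) != ~0ULL. */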
The question is not so much when the first CPU ships with the instruction, but when the last CPU without it stops being relevant.
It varies from instruction to instruction, but alternative code paths are expensive, and not well supported by compilers, so new instructions tend to go unused (unless you are compiling code with -march=native).
In one way, RISC-V is lucky. It's not currently that widely deployed anywhere, so RVA23 should be picked up as the default target, and anything included in it will have widespread support.
But RVA23 is kind of pulling the door closed after itself. It will probably become the default target that all binary distributions will target for the next decade, and anything that didn't make it into RVA23 will have a hard time gaining adoption.
I'm confused. You appear to be against adding new instructions, but also against picking a baseline such as RVA23 and sticking with it for a long time.
Every ISA adds new instructions over time. Exactly the same considerations apply to all of them.
Some Linux distros are still built for the original AMD64 spec published in August 2000, while some now require the x86-64-v2 spec defined in 2020 but actually met by CPUs from Nehalem and Jaguar on.
The ARMv8-A ecosystem (other than Apple) seems to have been very reluctant to move past the 8.2 spec published in January 2016, even on the hardware side, and no Linux distro I'm aware of requires anything past the original October 2011 ARMv8.0-A spec.
I'm not against adding new instructions. I love new instructions, even considered trying to push for a few myself.
What I'm against is the idea that it's easy to add instructions. Or more the idea that it's a good idea to start with the minimum subset of instructions and add them later as needed.
It seems like a good idea: save yourself some upfront work, and be able to respond to actual real-world needs rather than trying to predict them all in advance. But IMO it just doesn't work in the real world.
The fact that distros get stuck on the older spec is the exact problem that drives me mad, and it's not even their fault. For example, compilers are forced to generate some absolutely horrid ARMv8.0-A exclusive load/store loops when it comes to atomics, yet there are some excellent atomic instructions right there in ARMv8.1-A, which most ARM SoCs support.
But they can't emit them because that code would then fail on the (substantial) minority of SoCs that are stuck on ARMv8.0-A. So those wonderful instructions end up largely unused on ARMv8 android/linux, simply because they arrived 11 years ago instead of 14 years ago.
At least I can use them on my Mac, or any linux code I compile myself.
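To make that concrete, here's a sketch (illustrative of typical compiler behaviour, not actual output): the same C11 fetch-add becomes an exclusive load/store retry loop on a plain ARMv8.0-A target, but a single LSE instruction once the compiler may assume ARMv8.1-A.

    #include <stdatomic.h>

    /* With -march=armv8-a this typically lowers to a retry loop:
     *     ldaxr / add / stlxr / cbnz
     * With -march=armv8.1-a (LSE atomics) it can be one instruction:
     *     ldaddal
     * (gcc/clang also have -moutline-atomics to decide at runtime.)
     */
    int bump(_Atomic int *counter)
    {
        return atomic_fetch_add(counter, 1);
    }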
-------
There isn't really a solution. Ecosystems getting stuck on an increasingly outdated baseline is a necessary evil. It has happened to every single ecosystem to some extent or another, and it will happen to the various RISC-V ecosystems too.
I just disagree with the implication that the RISC-V approach was the right approach [1]. I think ARMv8.0-A did a much better job, including almost all the instructions you need in the very first version, if only they had included proper atomics.
[1] That is, not the right approach for creating a modern, commercially relevant ISA. RISC-V was originally intended as more of an academic ISA, so focusing on minimalism and "RISCness" was probably the best approach for that field.
It takes a heck of a lot longer if you wait until all the advanced features are ready before you publish anything at all.
I think RISC-V did pretty well to get everything in RVA23 -- which is more equivalent to ARMv9.0-A than to ARMv8.0-A -- out after RV64GC aka RVA20 in the 2nd half of 2019.
We don't know how long Arm was cooking up ARMv8 in secret before they announced it in 2011. Was it five years? Was it 10? More? It would not surprise me at all if it was kicked off when AMD demonstrated that Itanium was not going to be the only 64 bit future by starting to talk about AMD64 in 1999, publishing the spec in 2001, and shipping Opteron in April 2003 and Athlon64 five months later.
It's pretty hard to do that with an open and community-developed specification. By which I mean impossible.
I can't even imagine the mess if everyone knew RISC-V was being developed from 2015 but no official spec was published until late 2024.
I am sure it would not have the momentum that it has now.
> it's basically impossible to remove an instruction.
Of course not. You can replace an instruction with a polyfill. This will generally be a lot slower, but it won't break any code if you implement it correctly.
While I agree with you, the original comment was still valuable for understanding why RISC-V has evolved the way it has and the philosophy behind the extension idea.
Also, it seems at least some of the RISC-V ecosystem is willing to be a little more aggressive. With Ubuntu making RVA23 its minimum profile, perhaps we will not be waiting a decade for it to become the default. RVA23 was only ratified a year ago.
For the uninitiated in AArch64, are there specific parts of it you're referring to here? Mostly what I find is that it lets you stitch common instruction combinations together, like shift + add and fancier addressing. Since the whole point of RISC-V was a RISC instruction set, these things are superfluous.
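(To give a concrete example of the "stitching" I mean, purely as an illustration of typical codegen rather than exact compiler output: an indexed load in C can fold the sign-extend, shift, and add into one AArch64 instruction, while base RV64GC spells them out.)

    /* AArch64 addressing mode can absorb the index scaling:
     *     ldr  w0, [x0, w1, sxtw #2]   // one load: sign-extend i, <<2, add, load
     * Base RV64GC typically needs something like:
     *     sext.w a1, a1
     *     slli   a1, a1, 2
     *     add    a0, a0, a1
     *     lw     a0, 0(a0)
     */
    int get(const int *a, int i)
    {
        return a[i];
    }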
My memory is a bit fuzzy, but I think Patterson and Hennessy’s “Computer Architecture: A Quantitative Approach” had some bits that were explicitly about RISC-V, and similarities to MIPS. Unfortunately my copy is buried in a box somewhere so I can’t get you any page numbers, but maybe someone else remembers…
Hennessy and Patterson's "Computer Architecture: A Quantitative Approach" has 6 published editions (1990, 1996?, 2003, 2006, 2011, 2019) with the 7th due November 2025. Each edition has a varying set of CPUs as examples for each chapter. For example, the various chapters in the 2nd edition have sections on the MIPS R4000 and the PowerPC 620, while the 3rd edition has sections on the Trimedia TM32, Intel P6, Intel IA-64, Alpha 21264, Sony PS2 Emotion Engine, Sun Wildfire, MIPS R4000, and MIPS R4300. From what I could figure out via web searches, the 6th edition has RISC-V in the appendix, but the 3rd through 5th editions have the MIPS R4000.
Patterson and Hennessy's "Computer Organization and Design: The Hardware/Software Interface" has had 6 editions (1998, 2003, 2005, 2012, 2014, 2020), but various editions have also come in ARM-, MIPS-, and RISC-V-specific versions.
> As a mathematician doing a fair amount of numerical analysis, I must know several programming languages, all of which do roughly the same sort of thing.
But Mercury is not a language of the same paradigm as those (imperative, or maybe array-oriented). It's a logic programming language, and I have to guess you have probably never used any language in this category. In fact, many features of logic programming languages never made it into mainstream programming languages, or they're hidden behind some uncommon libraries.
For sure, I've never used a language of this paradigm. I'm also bothered by the fact that I don't have a single good reason why I should, and would love to know if somebody has one. The currently given reason is curiosity.
I guess that is my point; all of the languages I know are of the same paradigm, but I need to know them all for work. So I disagree with the assertion that only languages of a different paradigm from the one you know are worth learning.
> I guess that is my point; all of the languages I know are of the same paradigm, but I need to know them all for work. So I disagree with the assertion that only languages of a different paradigm from the one you know are worth learning.
I think you're taking that statement too literally, and way too seriously. Many of the epigrams are a bit tongue in cheek, and that one is too.
> 127. Epigrams scorn detail and make a point: They are a superb high-level documentation.
Don't take them literally and act like they're gospel truths you must live your life by. That's not what Perlis was going for with them. Just like you shouldn't take DRY (don't repeat yourself) literally. You should use judgement.
If you need to learn Fortran to write your numeric code, even though Fortran isn't teaching you anything, you should learn Fortran. You have a job to do. But if you don't need to learn Fortran for work, and it has nothing to offer over the other languages you know, why bother with it? That's the key point of the epigram.
Arab states could get into the same trap