Interestingly (to me), I learned about the "Ridgeway" just a few days ago, from Jim Leary's episode on the "History Rage" podcast, where historians vent about popular historical misconceptions.
For the Ridgeway in particular, the claim from the podcast is that there is in fact no archaeological evidence that this was a prehistoric routeway, nor that it was a single coherent long-distance entity. The claim is that it appears this way because highland areas and ridges are better preserved, because they're generally not cultivated and are less subject to erosion, so the whole thing is just a selection effect.
Discussion starts around 39:25 in the podcast[1]
Jim Leary has a book about this, "Footmarks: A Journey into our Restless Past"[2].
To be fair, I personally am ill-equipped to assess the claim, and it does look like an interesting place to ramble. The linked article also, to continue being fair, does not call it a road; it limits itself to calling it a "prehistoric trackway", which may well be defensible.
> For the Ridgeway in particular, the claim from the podcast is that there is in fact no archaeological evidence that this was a prehistoric routeway, nor that it was a single coherent long-distance entity. The claim is that it appears this way because highland areas and ridges are better preserved, because they're generally not cultivated and are less subject to erosion, so the whole thing is just a selection effect.
I had a search around and came across four or five citations claiming that the Ridgeway wasn't actually the long-distance path it's popularly known as... all of which were dead links. So I'm inclined to suspect that it really is exactly what it's thought to be; at a minimum I'd want to see a publicly available text arguing that it's not, rather than a podcast or a paid book.
There's no question that the tops of chalk ridges were used for route-making in centuries past. They tend to be drier, less boggy and less wooded (and safer from attack) than low-lying routes. So it may not have been one long trail, but its existence as a succession of trails is still highly likely.
Slightly OT, but if you are interested in this sort of thing, William Dalrymple and Anita Anand co-host the Empire podcast, which has many episodes and guests and recommended reading covering lots of ancient history.
Kinetic energy scales as velocity squared, so a car at 100 MPH has 4x the energy of a car at 50 MPH. But in the 50 MPH scenario there are two cars, so the total energy dissipated in the 50 MPH head-on collision is half that of the 100 MPH brick-wall collision. In the brick-wall case, presumably all the energy is available to demolish the one car, but in the head-on case, the energy is spread out demolishing two cars.
Impact force is maybe trickier, since it depends on the acceleration, but if the two 50 MPH cars end up with zero momentum, then they each have 50 MPH of delta-v over some collision time. The single car of course has 100 MPH of delta-v. If the collision times are the same (arguably a reasonable approximation if the head-on is highly symmetric), then the impact force in the head-on case is half that of the brick-wall case.
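A quick back-of-the-envelope check in Python bears this out; the mass value is an arbitrary assumption, since only the ratios matter:

```python
# Head-on (two cars at 50 mph) vs. brick wall (one car at 100 mph).
# Assumes identical cars and perfectly inelastic collisions in which
# everything ends up stationary, as described above.
m = 1500.0        # kg -- assumed car mass; only the ratios matter
mph = 0.44704     # metres per second per mph

def ke(mass, v_mph):
    """Kinetic energy in joules for a speed given in mph."""
    return 0.5 * mass * (v_mph * mph) ** 2

head_on = 2 * ke(m, 50)   # two cars' worth of energy to dissipate
wall = ke(m, 100)         # one car's worth

print(head_on / wall)     # 0.5 -- the head-on dissipates half the energy
# Delta-v: 50 mph per car in the head-on, 100 mph for the wall car.
```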
Although the single car at 100 MPH has twice the initial kinetic energy of two cars at 50 MPH, not all of this gets dissipated in the collision.
After the collision in the "single car 100 mph hitting an identical stationary car" case, we have both cars moving at 50 MPH in the same direction, NOT two stationary cars. Half the original kinetic energy is still in the form of kinetic energy, in other words.
The 100 MPH car does not experience a velocity change of 100 MPH, but only 50 MPH - the same as if it had hit an oncoming car at 50 MPH.
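Conservation of momentum makes this quick to verify; here's a minimal sketch, assuming identical masses and a perfectly inelastic collision:

```python
# One car at 100 mph hits an identical stationary car; they lock together.
m = 1500.0                   # kg -- assumed mass; it cancels out anyway
v1, v2 = 100.0, 0.0          # mph, before the collision

v_after = (m * v1 + m * v2) / (2 * m)        # momentum conservation
ke_before = 0.5 * m * v1**2
ke_after = 2 * (0.5 * m * v_after**2)

print(v_after)               # 50.0 -- both cars roll on at 50 mph
print(ke_after / ke_before)  # 0.5 -- half the energy stays kinetic
```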
Vaguely reminds me of the Adapteva Epiphany RISC multi-processors from the old Parallella Kickstarter project, and presumably others, but that's the one I played with for a while.
I'm not sure how this project's interconnect differs, they do say theirs is revolutionary, maybe that's the difference.
Lovely. "The Mythical Man Month" was a recommended supplementary text when I was in undergrad computer science, and it was well worth the read, especially for me -- I have a tendency to get hung up on the cool tools and fun methods in programming, which is OK in hobby environments, but if the purpose of the software is to solve a problem, you need to keep an eye on the problem. The book extends this insight to managerial methods, famously, and to me feels akin to Goodhart's law, about how once a metric becomes a target, it ceases to be a useful metric.
There's a great interview with Frederick Brooks on Youtube, not specifically about the System/360, but the guy was amazingly self-aware: https://www.youtube.com/watch?v=ul0dbgs8Mdk
I'm not sure if this is a fair question or not, but suppose I have a 6-byte blob I want to send. I can pad it out to 8 bytes, use this scheme to encode it, and send it.
Then I want the receiver to understand that only the first 6 bytes of their decoded results are part of the transmission -- how do I do that?
Base64 has a special character ('=') that is used for encoding padding, but this method doesn't seem to have that. The spec says "it is up to the application to ensure that frames and strings are padded if necessary", which suggests they've scoped this problem out.
I suppose I can always build a little "packet" that starts with the payload length, so that the receiver can infer the existence of padding if there is additional data beyond the advertised payload length, but now the receiver and I need to agree on that protocol.
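Something like this, for example -- a minimal sketch in Python, where the 4-byte big-endian length prefix is my own arbitrary choice, not anything from the Z85 spec:

```python
import struct

def frame(payload: bytes) -> bytes:
    """Length-prefix the payload, then zero-pad to a multiple of 4
    so the result is ready for Z85 encoding."""
    packet = struct.pack(">I", len(payload)) + payload
    return packet + b"\x00" * (-len(packet) % 4)

def unframe(packet: bytes) -> bytes:
    """Recover the payload, discarding any trailing padding."""
    (length,) = struct.unpack(">I", packet[:4])
    return packet[4:4 + length]

assert unframe(frame(b"abcdef")) == b"abcdef"   # the 6-byte blob above
```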
Padding is actually not really necessary in base64, as you can infer the length from the number of characters received.
Unfortunately for Z85, they made the highly questionable decision to use big-endian, which means it can't take base64's route. You could probably define an incomplete group at the end to be right-aligned or similar, but you may as well be sensible and just go little-endian.
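For base64, the inference is just arithmetic on the unpadded character count. A small sketch to illustrate:

```python
import base64

def decoded_len(n_chars: int) -> int:
    """Decoded byte count from an unpadded base64 character count."""
    # n_chars % 4 == 1 can never occur in valid base64
    return (n_chars // 4) * 3 + {0: 0, 2: 1, 3: 2}[n_chars % 4]

for payload in (b"a", b"ab", b"abc", b"abcd"):
    unpadded = base64.b64encode(payload).rstrip(b"=")
    assert decoded_len(len(unpadded)) == len(payload)
```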
I was also surprised that they called out the complexity of EV drivetrains (and charging systems, but I don't know anything about those) as a factor. My prior understanding was that EV drivetrains are significantly simpler than ICE ones, the number of moving parts is comparatively low for EVs, and you don't have the large temperature gradients and intense mechanical impulses that come from repeatedly igniting fuel/air mixtures.
I don't dispute the empirical claims, but as the parent comment implies, I think this is mostly about the maturity of the repair ecosystem, and it's reasonable to expect that that will improve as EVs become more prevalent. It's not a reason to stop, it's a reason to keep going.
At least theoretically, EV drive trains should be more durable than their gasoline counterparts. But the batteries are the weak point. Nobody is making modular batteries that can be easily diagnosed and swapped, and those batteries are essentially incendiary devices that even an expert cannot assess. It's an opaque canister of chemicals that could be damaged internally without anyone being aware. External damage almost always causes spectacular fires that can't be extinguished. If you have an EV and get in a wreck, GTFO as soon as you can, especially if there are any fumes in sight.
The Ioniq 5 has a quite modular battery array that looks far more repairable than those of other electric cars. Have a look at a teardown over here: https://www.youtube.com/watch?v=5PASNQU5RSw
If Hyundai won't fix it for less than $60k, and they total the car for scuff marks on the bottom, where does that leave you? Maybe you can get that Youtuber to fix it for you?
Dealers are known for being expensive. Furthermore, $60k is mostly a "go away" price: the dealer is signalling that they don't have the skills and don't want to take on the risk of learning the skills needed to make the repair. The end result is that the article says more about the dealer than the manufacturer. There are plenty of decent third-party mechanics in the world that are willing to try.
There's a similar video on youtube about repairing a dent in a Rivian truck. The official repair by a dealer was quoted at more than $40k as it involved replacing the entire single-piece panel that was "damaged". Instead, a small auto body shop spent the 2 days needed to gently pull out the dent, fill in the scratches and touch up the paint for something like $3k.
Not all mechanics are created equal. Find one that's willing to work with you, which will become easier as electric vehicles become more common.
If I'm not mistaken, if you have a warranty you may be obligated to go to the dealer or forfeit the warranty. I never had this issue, though. If the dealer wants you to GTFO with that garbage -- a nearly new vehicle with their name on it -- why do you think anyone else would be better able to do the job? If it were just a skill issue, I'd expect a dealer unable to do the work to refer you to another dealer who could.
The problem with the Rivian was that the body is all one piece and made of aluminum, which is not easily reworked. Making the bed of a truck out of a soft metal that is hard to rework is a bad idea. But perhaps there is someone out there who could do it cheap.
Dealerships are like any other business: some of them are incompetent at certain tasks. My entire point is that if one dealer is not doing a good job, then go to someone else that is willing to try to do a better job.
Generally I agree, but only the dealers and the manufacturer have all the technical data, software, and access to parts for these cars. They would not lightly tell a customer the job is infeasible, especially under warranty. They would probably send the customer to a different repair shop in their network if that were possible and necessary.
Your faith in dealerships is unreasonably high. Based on my experience maintaining a fleet of trucks for my company in addition to my personal vehicles, there are most certainly dealerships and mechanics that are incompetent and unable to fix certain problems within a reasonable amount of time. Their mechanics are not magicians, and they vary significantly in skill level. Sometimes a quick 5 minutes on the internet will find a solution for a problem that the dealer's mechanics are clueless about, as was the case with the anti-theft lockout on the Alero I had. The stand-pipe issue on one of our older diesel trucks took the mechanic $3k worth of time to diagnose and fix, yet once we knew about it, we found the issue was quite common for engines of that vintage (we later learned that the other stand pipe in the engine had already been replaced by a previous mechanic before we bought the truck).
Other times a mechanic shows a degree of cleverness that makes them well worth compensating for their time, like using an infrared camera app on their phone to find a wiring harness short in a minute rather than spending hours crawling around the chassis.
The article being from 2011 is perhaps why it can be as long as it is without mentioning "Coarse-grained reconfigurable arrays", or CGRAs, which, at least as of 2019 when I learned about them, seemed to occupy a good middle ground between conventional CPUs and FPGAs.
The idea is that, instead of being a bunch of gates like an FPGA, the components of the CGRA are at the scale of an ALU, or maybe an on-silicon network switch, with a single CGRA having different parts that are optimized for e.g. numerics, IO, encryption, caching, etc., which you can knit together into the processor you need.
That's maybe where this idea went?
Here's a more recent link covering similar ground:
It's worth noting that what you are describing is basically an FPGA nowadays.
FPGAs don't have "gates" as the basic building blocks.
Instead you have "logic cells" which are composed of a fixed size (often either 4 or 6 bit) LUT (look up table), one or two flip flops, and a multiplexer to choose whether to use the stored value or the new LUT value. They also sometimes contain basic ALU components like adders or multipliers. Those logic cells are then usually grouped together to form logic blocks which might have some amount of local memory/cache available. These blocks are the smallest "discrete" component of an FPGA and are configured as a whole block with configurations determined at synthesis time.
On top of this you have memory blocks and other "hard IP" like DSP slices, etc distributed around the IC for these logic blocks to take advantage of.
And then finally you have larger hard IP that a given chip only has a few of. These include your PLLs (phase-locked loops) or other analog clock-multiplier hardware (to allow you to run multiple clock domains on a single FPGA), your encryption and encoding/decoding accelerators, dedicated protocol hard IP (Ethernet, PCIe, etc.), and hardware that is directly attached to the IO (ADCs, DACs, pullup/pulldown resistor configuration, etc.). And increasingly nowadays also full-blown hard IP CPUs and GPUs that can interact directly with the FPGA.
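To make the logic-cell description concrete, here is a toy Python model; the class and its shape are my invention for illustration, and real cells are configured at synthesis time with vendor-specific details:

```python
class LogicCell:
    """Toy model of one logic cell: a 4-input LUT, a flip-flop,
    and a mux selecting registered vs. combinational output."""

    def __init__(self, lut_bits, registered):
        self.lut = lut_bits        # 16-entry truth table (the configuration)
        self.registered = registered
        self.ff = 0                # flip-flop state

    def clock(self, a, b, c, d):
        index = a | (b << 1) | (c << 2) | (d << 3)
        combinational = self.lut[index]
        out = self.ff if self.registered else combinational
        self.ff = combinational    # the FF captures the LUT output each clock
        return out

# Configured as a 4-input AND gate with combinational output:
and4 = LogicCell([0] * 15 + [1], registered=False)
assert and4.clock(1, 1, 1, 1) == 1
assert and4.clock(1, 0, 1, 1) == 0
```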
No, the previous poster has not described an FPGA, but an FPGA-like device that contains much more complex fixed-function blocks than the small DSP multipliers available in currently existing FPGAs.
With 18-bit integer multipliers or the like you cannot compete in energy efficiency with the arithmetic execution units of a GPU.
The so-called CGRAs are an attempt to revive the idea of reconfigurable dataflow processors, with the hope of combining in the same device the advantages of the FPGAs with the advantages of the GPUs.
Xilinx FPGAs already contain a "dataflow processor" called "AI Engine" and they tend to have more TOPS/TFLOPS than the programmable fabric and DSPs combined.
That isn't even everything. Nowadays Xilinx FPGAs come with "AI engine tiles". Some FPGAs come with 400 tiles. Each of those tiles is a C-programmable processor with a local scratchpad memory. So an FPGA is probably the most heterogeneous type of chip ever invented.
The claims about improved energy efficiency (due to the elimination of the instruction fetching and decoding and of the register files) can be correct only when such a CGRA is not used as a general-purpose CPU, but as an accelerator used to implement various iterative algorithms, i.e. when its dataflow compiler could be used as a replacement for something like CUDA.
An FPGA would have the same energy-efficiency advantage for algorithms without much numeric computation, but it is not competitive with a GPU or a CGRA for most numeric computations, except DSP, because it includes only small fixed-point multipliers and adders, which are not as efficient as big vector floating-point fused-multiply-add execution units.
> The idea is that, instead of being a bunch of gates like an FPGA, the components of the CGRA are at the scale of an ALU, or maybe an on-silicon network switch, with a single CGRA having different parts that are optimized for e.g. numerics, IO, encryption, caching, etc., which you can knit together into the processor you need.
What the other guy said downthread, but seriously.
Xilinx FPGAs today do have LUTs (the 4-input or 6-input gate-like structures). But they also have VLIW + SIMD cores with L1 memory connected over a powerful interconnect.
So a CGRA is probably "just" a modern Xilinx "AI Engine" FPGA.
-------------------
Major FPGAs for the last 10+ years have all had hardware multipliers as well. Multiplication is just one of those ASIC / hardware units that LUTs cannot emulate very well. Depending on your definition of "Coarse-grained reconfigurable arrays", you might want to look into DSP-slices and other "ALU-like" subunits of FPGAs of the last decade.
I love this, it's got that "technical correctness is the best correctness" vibe going on big-time, and it will probably change the way I see that slice of the world.
Butterflies loom weirdly large in my life. At one point many years ago my grandmother decided that butterflies were my wife's "thing" (they aren't, we're not sure how this happened), and so for many years we got butterfly-themed gifts from her. It was merely odd, so we just rolled with it, but now we have various butterfly tchotchkes around, all of which, a brief survey reveals, are in the dead pose.
> at one point many years ago my grandmother decided that butterflies were my wife's "thing" (they aren't, we're not sure how this happened),
This is a wild shot in the dark but do you:
1. Live near Westminster, Colorado
2. Have had your grandmother visit you several times
3. And each time your wife took her to visit the Butterfly Pavilion?
Can confirm. I've never forgotten this article since reading it a long time ago, and I've developed a habit of checking every butterfly depiction I see to determine whether it's in the dead pose.
[1] https://www.historyrage.com/episodes/episode/69e607e6/histor...
[2] https://uk.bookshop.org/p/books/footmarks-a-journey-into-our...