That’s mainly because Germany has fucked up the smart meter rollout. In their wisdom they separated the meter and the gateway when other countries just combined them. They also made it super secure (good), but didn’t account for the fact that lots of people live in rented apartments whose meters sit in cellars with really poor or no cellular connectivity. When Germany can finally do steerable dynamic loads properly across 95% of the market rather than under 10%, it will finally make a difference to dynamic pricing for consumers like you.
Germany is investing in massive battery parks dotted around the grid. This will make a difference to supporting base load and offsetting coal, but it will take time.
If there’s one thing about the Germans you can count on, it’s that they move slowly.
This reminds me of Adrian Thompson’s (University of Sussex) 1996 paper, “An evolved circuit, intrinsic in silicon, entwined with physics,” ICES 1996 / LNCS 1259 (published 1997), which was extended in his later thesis, “Hardware Evolution: Automatic Design of Electronic Circuits in Reconfigurable Hardware by Artificial Evolution” (Springer, 1998).
Before Thompson’s experiment, many researchers tried to evolve circuit behaviors on simulators. The problem was that simulated components are idealized, i.e. they ignore noise, parasitics, temperature drift, leakage paths, cross-talk, etc. Evolved circuits would therefore fail in the real world because the simulation behaved too cleanly.
Thompson instead let evolution operate on a real FPGA device itself, so evolution could take advantage of real-world physics. This was called “intrinsic evolution” (i.e., evolution in the real substrate).
The task was to evolve a circuit that could distinguish between a 1 kHz and a 10 kHz square-wave input, outputting high for one and low for the other.
The final evolved solution:
- Used fewer than 40 logic cells
- Had no recognisable structure, no pattern resembling filters or counters
- Worked only on that exact FPGA and that exact silicon patch.
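For contrast, a conventional human-designed discriminator would use a recognisable structure such as a clocked edge counter: count input edges over a fixed window and compare against a threshold. This is not from Thompson’s paper, just a rough Python sketch of the kind of design the evolved circuit conspicuously lacked; the sample rate and threshold are arbitrary choices.

```python
# Conventional baseline (not Thompson's circuit): count rising edges of the
# input over a fixed window; a 10 kHz square wave produces ~10x more edges
# than a 1 kHz one. Sample rate and threshold below are arbitrary assumptions.

def classify(samples, sample_rate_hz, window_s=0.01, edge_threshold=50):
    """Return '10kHz' if the rising-edge count in the window exceeds the threshold."""
    n = int(window_s * sample_rate_hz)
    window = samples[:n]
    edges = sum(1 for a, b in zip(window, window[1:]) if a == 0 and b == 1)
    # ~10 rising edges per 10 ms window at 1 kHz, ~100 at 10 kHz
    return "10kHz" if edges > edge_threshold else "1kHz"

FS = 1_000_000  # hypothetical 1 MHz sampling clock

def square_wave(freq_hz, duration_s=0.01):
    return [1 if int(2 * freq_hz * i / FS) % 2 == 0 else 0
            for i in range(int(duration_s * FS))]

print(classify(square_wave(1_000), FS), classify(square_wave(10_000), FS))  # 1kHz 10kHz
```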
Most astonishingly, the circuit depended critically on five logic elements that were not logically connected to the main path:
- Removing them should not have affected a digital design; they were not wired to the output.
- Yet in practice the circuit stopped functioning when they were removed.
Thompson determined via experiments that evolution had exploited:
- Parasitic capacitive coupling
- Propagation delay differences
- Analogue behaviours of the silicon substrate
- Electromagnetic interference from neighbouring cells
In short: the evolved solution used the FPGA as an analog medium, even though engineers normally treat it as a clean digital one.
Evolution had tuned the circuit to the physical quirks of the specific chip. It demonstrated that hardware evolution could produce solutions that humans would never invent.
Answering another commenter's question: yes, the final result was temperature-dependent. The author did try it at different temperatures, and it only operated reliably within the temperature range it had been evolved at.
I’d argue that this was a limitation of the GA fitness function, not of the concept.
Now that we have vastly faster compute, open FPGA bitstream access, on-chip monitoring, cheap and dense temperature/voltage sensing, and reinforcement-learning/evolution hybrids, it becomes possible to select explicitly for robustness and generality, not just for functional correctness.
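As a hedged illustration of what that could look like (not anything Thompson actually ran): score each candidate bitstream across several devices and temperature corners and keep only its worst-case result, so evolution cannot win by exploiting one chip at one temperature. The harness functions here are placeholders, not a real API.

```python
import random

# Placeholders standing in for a real lab harness (assumptions, not real APIs).
def program_fpga(bitstream, board_id):
    pass  # would reconfigure the physical device here

def measure_discrimination(board_id, temp_c):
    return random.random()  # would return 0.0 (fails) .. 1.0 (perfect) from hardware

CORNERS = [(board, temp) for board in (0, 1, 2) for temp in (10, 25, 60)]

def robust_fitness(bitstream):
    scores = []
    for board_id, temp_c in CORNERS:
        program_fpga(bitstream, board_id)
        scores.append(measure_discrimination(board_id, temp_c))
    # Worst-case aggregation: a candidate only scores well if it works at every corner.
    return min(scores)
```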
The fact that human engineers could not understand how this worked made researchers incredibly uncomfortable in 1996, and the same remains true today; the difference is that we now have vastly better tooling.
I don't think that's true; for me it is the concept that's wrong. The second-order effects you mention:
- Parasitic capacitive coupling
- Propagation delay differences
- Analogue behaviours of the silicon substrate
...are not just influenced by the chip design, they're influenced by substrate purity and doping uniformity -- exactly the parts of the production process that we don't control. Or rather: we shrink the technology node to right at the edge where these uncontrolled factors become too big to ignore. You can't design a circuit based on the uncontrolled properties of your production process and still expect to produce large volumes of working circuits.
Yes, we have better tooling today. If you use today's 14A machinery to produce a 1µ chip like the 80386, you will get amazingly high yields, and it will probably be accurate enough that even these analog circuits are reproducible. But the analog effects become more unpredictable as the node size decreases, and so will the variance in your analog circuits.
Also, contrary to what you said: the GA fitness process does not design for robustness and generality. It designs for the specific chip you're measuring, and you're measuring post-production. The fact that it works for reprogrammable FPGAs does not mean it translates well to mass production of integrated circuits. The reason we use digital circuitry instead of analog is not because we don't understand analog: it's because digital designs are much less sensitive to production variance.
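A toy Monte Carlo sketch of that argument (all numbers invented): a design that relies on an uncontrolled parasitic delay only works when that delay lands in the narrow band it was evolved on, while a digital design timed with margin tolerates the whole process spread.

```python
import random

def yield_estimate(works, n=100_000):
    # Per-die parasitic delay drawn from an assumed process distribution (picoseconds).
    return sum(works(random.gauss(100.0, 15.0)) for _ in range(n)) / n

evolved_works = lambda delay: 95.0 <= delay <= 105.0   # needs the delay it was evolved on
digital_works = lambda delay: delay <= 160.0           # timing closed with wide margin

print(f"evolved-style yield: {yield_estimate(evolved_works):.1%}")   # roughly a quarter of dies
print(f"digital-style yield: {yield_estimate(digital_works):.1%}")   # essentially all dies
```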
Possibly, but maybe the real difference is the subtle distinction between a planned, deterministic (logical) result and a deterministic but black-box outcome?
We’re seeing this shift already in software testing around GenAI. Writing tests around non-deterministic outcomes comes with its own set of challenges, so we need to plan for deterministic variance, which seems like an oxymoron but is not in this context.
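A minimal sketch of what “planning for deterministic variance” can mean in practice: assert invariant properties of a non-deterministic output (required content, length bounds) rather than exact equality. `generate_answer` here is a hypothetical stand-in for a model call.

```python
def generate_answer(prompt: str) -> str:
    # Hypothetical stand-in for a non-deterministic model call; wording varies per run.
    return "The evolved circuit used roughly three dozen logic cells on the FPGA."

def test_answer_properties():
    answer = generate_answer("How many logic cells did the evolved circuit use?")
    assert "logic cells" in answer.lower()   # required concept, not exact wording
    assert len(answer.split()) < 50          # bounded length instead of an exact match

test_answer_properties()
```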
That unreplicability between chips is actually a very, very desirable property when fingerprinting chips (sometimes known as ChipDNA) to implement unique keys for each chip. You use precisely this property (plus a lot of magic to control for temperature as you point out) to give each chip its own physically unclonable key. This has wonderfully interesting properties.
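A heavily simplified sketch of that idea (real PUF designs use helper data and error-correcting codes, and the hardware access here is a simulated placeholder): repeated reads of chip-unique noisy bits are majority-voted to cancel measurement noise, then hashed into a per-chip key.

```python
import hashlib
import random

def read_puf_bits(chip_bias, noise=0.05):
    # Placeholder for hardware access: each read returns the chip's fixed bits plus noise.
    return [b ^ (random.random() < noise) for b in chip_bias]

def derive_key(chip_bias, reads=15):
    votes = [0] * len(chip_bias)
    for _ in range(reads):
        votes = [v + s for v, s in zip(votes, read_puf_bits(chip_bias))]
    stable_bits = bytes(1 if v > reads // 2 else 0 for v in votes)  # majority vote per bit
    return hashlib.sha256(stable_bits).hexdigest()

chip_a = [random.randint(0, 1) for _ in range(256)]  # this chip's unique physical bias
chip_b = [random.randint(0, 1) for _ in range(256)]  # a different chip of the same design
print(derive_key(chip_a) == derive_key(chip_a))  # True (with high probability): stable per chip
print(derive_key(chip_a) == derive_key(chip_b))  # False: not clonable across chips
```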
I wonder what would happen if someone evolved a circuit on a large number of FPGAs from different batches. Each of the FPGAs would receive the same input in each iteration, but the output function would be biased to expose the worst-behaving units (maybe the bias should be raised in later iterations, once most units behave well).
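A rough sketch of that fitness scheme (placeholder measurement function, invented weighting): every candidate is evaluated on the whole FPGA pool, and the score blends the mean with the worst-performing device, shifting weight toward the worst case in later generations.

```python
import random

def evaluate_on_device(candidate, device_id):
    # Placeholder for programming one FPGA and measuring how well it discriminates.
    return random.random()  # 0.0 (fails) .. 1.0 (perfect)

def pooled_fitness(candidate, devices, generation, max_generations):
    scores = [evaluate_on_device(candidate, d) for d in devices]
    worst_weight = generation / max_generations          # ramp the bias up over time
    mean_score = sum(scores) / len(scores)
    return (1 - worst_weight) * mean_score + worst_weight * min(scores)
```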
Either it would generate a more robust (and likely more recognizable) solution, or it would fail to converge, really.
You may need to train on a smaller number of FPGAs and gradually increase the set. Genetic algorithms are finicky to get right, and you might find that more devices massively increase the iteration count.
The interesting thing about DMT is that it’s an ego-stripper. You have no sense of self. You are non-corporeal. Time and space are irrelevant.
People who have taken DMT find it very difficult to explain what the visions mean when they flash before their eyes. “Flash” in the sense that they are so fast, coming from every conceivable direction simultaneously, and you can see in all directions. And beautifully purple.
Since we are beings that have a conscious “self”, we attribute these moving images to “our lives flashing before our eyes”, but I believe that to be our egotistical selves applying that after the fact.
I now believe that the human brain acts as a filter to a raw stream of collective human shared consciousness, normally out of our grasp.
What people see there is a short temporary window into everyone else’s exact same moment in time.
It’s like a back door hack into god’s admin console and you get to watch the interconnected consciousness of human existence in real time for a few minutes.
However our brains aren’t meant to run unfiltered. Our brains usually optimize and filter as much as they can to conserve energy. We notice the differences and not the usual. Our brains fill in gaps. Eventually the brain overloads as the trip runs to an end and everything goes black. A complete void overwhelms you.
The brain finally reboots and coming back is like watching an old Linux machine reboot, loading its kernel and drivers before adding the OS layers.
First you question what you are, before then discovering who you are. It’s like a process of birth but coming out of hibernation mode for fast boot.
Maybe death is the same. Returning to the collective consciousness.
Like the ant that cannot comprehend the existence of the universe, or the neuron that only understands its nearest neighbours, maybe there exists a plane above human individuals, to which we are as the neuron or the ant, that we too cannot perceive or understand because our brains are too small to comprehend it. Only for those fleeting moments when we overclock the system.
Not sure how it compares, but we did some trials with Azure AI Document Intelligence and were very surprised at how good it was. One example was a poor photograph of a document with quite a skew, and to our surprise it also detected the customer’s handwritten signature and extracted their name from it.
I wouldn’t be surprised if people connected to the Chinese government were also taking advantage of the fact that Trump is extremely easy to trigger, manipulate, and predict.
I was caught out in the same way. I now take out a BahnCard subscription online and immediately cancel it. I know I have the BahnCard for one year, and I have the security that it will not automatically roll over without giving me the choice.
Back in the late ’90s we were at a tipping point over how to monetize the World Wide Web. It turned out that selling advertising was way easier than figuring out micropayments, and advertising turned into a multi-billion-dollar business. Facebook then turned the World Wide Web into a snooping platform, and we ended up with a global propaganda engine operating at mass scale.
I wonder whether, if micropayments had been solved before we took the easy route, we would be in a much healthier global situation today.