
Before invoking parallel universes, how about comparing the system to nature's mind-boggling number of particles in the macroscopic world? A single gram contains about 10^23 ≈ 2^76 particles. Google's random circuit sampling experiment used only 67 qubits, which is still well short of 76. I wonder why; the chip had 105 qubits and the error correction experiment used 101 qubits.

Did Google's experiment encounter problems when trying to run RCS on the full 105 qubits device?

Before saying that the computation invoked parallel universes, first I'd like to see that the computation couldn't be explained by the state being encoded classically by the state of the particles in the system.
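
For anyone who wants to check the arithmetic, here's a quick Python sketch (the ~10^23-particles-per-gram figure is the rough one from above, not a precise count):

    import math

    # Back-of-the-envelope only: ~1e23 particles per gram is the rough
    # figure quoted above; the exact count depends on the substance via
    # Avogadro's number and molar mass.
    particles_per_gram = 1e23
    print(math.log2(particles_per_gram))   # ~76.4, i.e. 10^23 ~ 2^76

    rcs_qubits = 67
    print(2 ** (math.log2(particles_per_gram) - rcs_qubits))
    # ~676: a gram holds a few hundred times more particles than the
    # 2^67 basis states of a 67-qubit register.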



Somehow the universe knows how to organise the sand in an egg timer to form an orderly pile. Simulating that with a classical computer seems impossible - yet the universe "computes" the correct result in real time. It feels like there is a huge gap between what actually happens and what can be done with a computer (even a quantum one).


The universe also computes Pi perfectly every time, and nobody is surprised or calling on parallel universes to help explain it.


The universe does not calculate the digits of Pi. We do.


I think they mean that Pi is part of many formulas in physics.


It's a good question why that is so. But I wouldn't draw from that the conclusion that the universe somehow "calculates Pi" and then puts it into all the forces it "has" so that it turns up in our formulas. That is a rather fantastical way of thinking, though I do see its poetic appeal. A bit like "God doesn't play dice, or does he?"

What is calculation anyway, we may ask. Isn't it just term-rewriting?


I think this shows how bad the definitions of computing are; there's a big rethink needed, but unfortunately it needs a galaxy brain to do it!


> It's a good question why that is so

Pi is just a description used for calculating perfect or near-perfect spheres. A sphere is nature's building block, since every point on its surface is the same distance from the centre.


> yet the universe "computes" the correct result in real time

Does it? In what sense is the result "correct"? It's not as though it's perfectly regular, or unique, or predictable, or reproducible. So what's "correct" about it?

Completely out of my depth here, but maybe there is a difference between the evolution of a physical system and useful computation: maybe there's much less useful computation that can be extracted from a physical system than the total amount of computation that would theoretically be needed to simulate it exactly. Maybe you can construct physical systems that perform vast, but measurable, amounts of computation, yet you can only extract a fixed maximum amount of useful information from them?

And then you have this strange phenomenon: you build controlled systems that perform an enormous amount of deterministic, measurable computation, but you can't make them do any useful work...


It does seem to, and can anyone credibly say they aren't out of their depth in these waters? (the sandpile thing is not original, it dates back many years). Taking the idea that the "universe is a simulation" [0], what sort of computer (or other device) could it be running on? (and how could we tell we're living in a VM?)

From the same school of thought: simulating the path of a single particle seems to require a device made of more than a single particle. Therefore, if the universe is a simulation, the simulator must have more than the number of particles in the universe.

[0] https://en.wikipedia.org/wiki/Simulation_hypothesis


If the universe is just the universe, it needs only the number of particles in the universe.


"In what sense is ground truth correct?"

In the tautological sense.


> Somehow the universe knows how to organise the sand in an egg timer to form an orderly pile. Simulating that with a classical computer seems impossible

Is it really?

There are only ~500,000 grains of sand in an egg timer.

I don't know anything here, but this seems like something that shouldn't be impossible.

So I'm curious. Why is this impossible?

What am I missing?


Maybe it's not that hard to simulate, but let's start by looking at just two of the sand grains that happen to hit each other. When they collide, how they rebound depends on angles, internal structure, and Young's modulus; they have electrostatic interactions, and even van der Waals forces come into play. Sand grains aren't regular, and determining the precise point at which two irregular objects collide is quite a challenge (and this isn't even a game; approximations to save compute time won't do what the real world does 'naturally').

So while we can - for something as simple and regular as an egg timer - come up with some workable approximations, they would surely fall short when it comes to the detail (an analytical solution for the path of every single grain).
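
Even a crude version is instructive. Here's a minimal soft-sphere (discrete-element-method style) sketch of just two grains colliding in one dimension - every parameter value is invented for illustration, and it ignores all the complications listed above (irregular shapes, friction, electrostatics, van der Waals):

    import numpy as np

    # Two "grains" approaching each other on a line. On overlap, a
    # Hookean spring pushes them apart and a dashpot drains energy.
    # All numbers are made up; real sand is far messier.
    k, c = 1e3, 0.01              # contact stiffness (N/m), damping (N*s/m)
    m, r = 1e-6, 5e-4             # grain mass (kg), grain radius (m)
    x = np.array([0.0, 2.5e-3])   # positions (m)
    v = np.array([0.1, -0.1])     # velocities (m/s), closing at 0.2 m/s
    dt = 1e-6

    for _ in range(20000):
        overlap = 2 * r - (x[1] - x[0])            # > 0 when touching
        f = max(0.0, k * overlap + c * (v[0] - v[1])) if overlap > 0 else 0.0
        v += np.array([-f, f]) / m * dt            # equal and opposite forces
        x += v * dt

    print(v)   # grains rebound, slower than they approached

The point isn't that this is right - it's that every one of those made-up constants hides physics the real grains "compute" for free.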


I guess I wasn't thinking of a PERFECT simulation.

Now it's obvious to me that you would have to simulate exactly what the universe is doing down to the smallest level to get a perfect simulation.

Thanks.

Is it really impossible to get a very close approximation without simulating down to the atomic level, though?


A close approximation should arguably include collapses/slides, which happen spontaneously because the pile organises itself to a critical angle; an incredibly small event can then trigger a large slide of salt, sand, rocks, or whatever else the pile is made of. Even working out something like "What are the biggest and smallest slides that could occur for a pile of some particular substance?" is non-trivial.

Every approximation will by definition deviate from what really happens - I suppose that's why we talk of "working approximations", i.e. they work well enough for a given purpose. So it probably comes down to what the approximation is being used for.
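
The usual toy model for that kind of self-organised criticality is the Bak-Tang-Wiesenfeld sandpile - my example, not something anyone above mentioned. Drop grains one at a time, topple any cell holding four or more, and record how many topplings each single dropped grain causes. Many drops do nothing; occasionally one grain sets off a huge slide:

    import random

    # Bak-Tang-Wiesenfeld sandpile on an N x N grid. A cell with 4+
    # grains topples, sending one grain to each neighbour; grains
    # falling off the edge are lost. Avalanche size = topplings caused
    # by one dropped grain.
    N = 20
    grid = [[0] * N for _ in range(N)]

    def drop(i, j):
        grid[i][j] += 1
        size, unstable = 0, [(i, j)]
        while unstable:
            a, b = unstable.pop()
            if grid[a][b] < 4:
                continue
            grid[a][b] -= 4
            size += 1
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                na, nb = a + da, b + db
                if 0 <= na < N and 0 <= nb < N:
                    grid[na][nb] += 1
                    unstable.append((na, nb))
        return size

    sizes = [drop(random.randrange(N), random.randrange(N)) for _ in range(50000)]
    print(max(sizes), sizes.count(0) / len(sizes))
    # largest avalanche vs. the fraction of drops that topple nothing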

There is the idea that we are all living in a simulation; if so, maybe if we look closely enough at the detail, all the way from the universe down to atoms, we'll start to see some fuzziness (well, of course there's quantum physics...).


When the output looks the same as the original we would say that the simulation was successful. That is how computer games do it. We're not asking for the exact position of each grain, just the general outline of the pile.


An image of something happening is likely the simplest model of it, and it has a lot less information than a 3D model of arbitrary resolution would have.


A simulation is never an "image". It may simulate each grain; it just doesn't need to simulate each one precisely, because the law of large numbers kicks in.

This is the basis of, for example, Monte Carlo simulation, which simulates the real world with the random numbers it generates.
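
A minimal illustration of that idea - estimating Pi, to tie it back to the earlier subthread; my example, nothing specific to sand:

    import random

    # Monte Carlo estimate of Pi: throw random points into the unit
    # square and count how many land inside the quarter circle. No
    # individual point is "precise", but the law of large numbers makes
    # the aggregate converge.
    n = 1_000_000
    inside = sum(1 for _ in range(n)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    print(4 * inside / n)   # ~3.14, tightening as n grows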


Every video game engine is a simulation, and many of them are very simplified models that render images of things happening rather than simulating the actual physics. Even "physics" in these engines is often just rendering an image.


The real issue is that the sand isn't sorted in any orderly way. At a micro level, it's billions and trillions of individual interactions between atoms that create the emergent behavior of solid grains of sand packing reasonably tightly but not phasing through each other.


> I wonder why, the chip had 105 qubits and the error correction experiment used 101 qubits.

I wonder why; a byte has 8 bits and the Hamming error correction code uses 7 bits.

Oh right - that's because *the scheme* only comes in block lengths of 3, 7, 15, ... bits [0], and 7 is the largest that fits.

Same with surface error correction - it's just the largest number in a list. No need for conspiracies. And no connection to manufacturing capabilities, which determine the number of qubits on a single chip.

[0] https://en.wikipedia.org/wiki/Hamming_code
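
To spell out the "largest number in a list" point with a quick sketch (the 2d^2 - 1 count for a distance-d rotated surface code is my assumption about the convention being used; the published 101 presumably includes a few extra helper qubits):

    # Hamming codes only come in block lengths 2^r - 1: 3, 7, 15, 31, ...
    hamming_lengths = [2 ** r - 1 for r in range(2, 6)]
    print(max(n for n in hamming_lengths if n <= 8))        # -> 7

    # Rotated distance-d surface code: d^2 data + d^2 - 1 measure qubits.
    surface = {d: 2 * d * d - 1 for d in (3, 5, 7, 9)}
    print(surface)                                          # {3: 17, 5: 49, 7: 97, 9: 161}
    print(max(d for d, q in surface.items() if q <= 105))   # -> 7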



