
> Of course you can model quantum measurements!

No, you can't. You can statistically model the results of many quantum measurements in the aggregate, but you cannot model the physical process of a single measurement, because there is a fundamental disconnect between the physics of quantum measurements and Turing machines (TMs): TMs are deterministic and quantum measurements are not. There is a reason the Measurement Problem is a thing.



A statistical model is still a model. Lots of physics works that way. Newtonian mechanics may be deterministic, but when you don't have perfect information about the initial state, a deterministic model isn't a useful one.

For example in statistical mechanics you work with ensembles of microstates. That doesn't mean thermodynamics is fake and only F=ma is real. Models are tools for understanding the behaviour of systems, not a glimpse into god's simulation source code where the hidden variables are.


For QM it can be shown that there are no hidden variables, at least no local ones [1]. Quantum randomness really cannot be modeled by a TM. You need a random oracle.
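A quick numerical sketch of what [1] shows (my own toy calculation, using the standard textbook CHSH angle settings): the quantum correlations of a singlet pair violate the bound that any local hidden-variable model must obey.

```python
import math

# Quantum correlation for measurements on a singlet state at analyzer
# angles a and b: E(a, b) = -cos(a - b).
def E(a, b):
    return -math.cos(a - b)

# Standard CHSH angle choices (radians) -- assumption: the usual textbook settings.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, -math.pi / 4

# CHSH combination: any local hidden-variable theory gives |S| <= 2.
S = E(a, b) + E(a, b2) + E(a2, b) - E(a2, b2)
print(abs(S))  # ~2.828 = 2*sqrt(2), exceeding the classical bound of 2
```
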

[1] https://en.wikipedia.org/wiki/Bell%27s_theorem


> For QM it can be shown that there are no hidden variables, at least no local ones

If we assume that the experimenter has free will in choosing the measurement settings, so that the hidden variables are not correlated with the measurement settings, then it can be shown.

https://en.wikipedia.org/wiki/Bell%27s_theorem#Superdetermin...

But if we are less strict about the free-will assumption, then even local hidden variables are back on the menu.


That's true, but "less strict" is understating the case pretty seriously. It's not enough for experimental physicists to lack free will. To rescue local hidden variables, nothing in the universe can have free will, not even God. That's a bridge too far for most people. (It's a bridge too far for me, and I'm an atheist! :-)

Note also that superdeterminism is unfalsifiable. Since we are finite beings living in a finite universe, we can only ever have access to a finite amount of data and so we can never experimentally rule out the possibility that all experimental results are being computed by some Cosmic Turing Machine churning out digits of pi (assuming pi is normal). But we also can't rule out the possibility that the moon landings were faked or that the 2020 election was stolen by Joe Biden. You gotta draw a line somewhere.

BTW, you might enjoy this: https://blog.rongarret.info/2018/01/a-multilogue-on-free-wil...


> Note also that superdeterminism is unfalsifiable.

I think the many worlds interpretation of quantum mechanics is also unfalsifiable. The annoying thing about quantum mechanics is that any one of the interpretations of quantum mechanics has deep philosophical problems. But you can't choose a better one because all of them have deep problems.


> many worlds interpretation of quantum mechanics is also unfalsifiable

Yes, that's true.

> all of them have deep problems

Some are deeper than others.


That is indeed what I was referring to. To clarify, plenty of classical physical models work only with distributions too. You don't need a random oracle because your model doesn't predict a single microstate. It wouldn't be possible or useful to do so. You can model the flow of heat without an oracle to tell you which atoms are vibrating.


Yes, all this is true, but I think you're still missing the point I'm trying to make. Classical mechanics can be treated statistically without any compromise in our ability to make reliable predictions with a TM. But quantum mechanics is fundamentally different: it produces macroscopic phenomena -- the results of quantum measurements -- that a TM cannot reproduce. At the most fundamental level, you can always make a copy of the state of a TM, and so you can always predict what a given TM is going to do by making such a copy and running the copy instead of the original. You can't make a copy of a quantum state, and so it is fundamentally impossible to predict the outcome of a quantum measurement. A TM cannot generate a truly random outcome, but a quantum measurement can.
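The no-cloning point can be sketched with plain arithmetic (my own toy example): if a linear map copied the basis states, linearity alone forces it to fail on superpositions.

```python
import math

# A classical TM's state can be copied exactly; a quantum state cannot.
# Sketch of why: suppose a *linear* map U clones the basis states,
#   U|0>|0> = |0>|0>   and   U|1>|0> = |1>|1>.
# Linearity then fixes what U does to |+> = (|0> + |1>)/sqrt(2):
s = 1 / math.sqrt(2)

# Linearity gives (|00> + |11>)/sqrt(2) -- a Bell state, written in the
# basis order |00>, |01>, |10>, |11>:
linear_result = [s, 0.0, 0.0, s]

# A true clone would instead be |+>|+>:
true_clone = [s * s, s * s, s * s, s * s]

# Inner product between the two outcomes:
overlap = sum(x * y for x, y in zip(linear_result, true_clone))
print(overlap)  # 1/sqrt(2) ~ 0.707, not 1 -- no linear map clones every state
```
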


Sure, the 'problem' is that while the Schrödinger equation is deterministic, we can only 'measure' the amplitudes of the solution. Is the wavefunction epistemic or ontological?


No, this has nothing to do with ontology vs epistemology. That's a philosophical problem. The problem here is that a measurement is a sample from a random distribution. A TM cannot emulate that. It can compute the distribution, but it cannot take a random sample from it. For that you need a random oracle (https://en.wikipedia.org/wiki/Random_oracle).
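The distinction in miniature (my own toy example): computing the distribution is a deterministic calculation, but producing an individual outcome requires appealing to some source of randomness outside the computation.

```python
import math
import random

# A deterministic machine can compute the Born-rule *distribution* exactly:
amplitudes = [1 / math.sqrt(2), 1 / math.sqrt(2)]  # equal-amplitude two-state system
probs = [a * a for a in amplitudes]                # [0.5, 0.5] -- fully deterministic

# ...but producing an *individual* outcome requires an external source of
# randomness. Here that source is Python's PRNG, which is only pseudorandom --
# which is exactly the gap being pointed at.
outcome = 0 if random.random() < probs[0] else 1
print(probs, outcome)
```
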


This is not really about quantum computing. A classical probabilistic Turing machine samples from a random distribution:

"probabilistic Turing machines can be defined as deterministic Turing machines having an additional "write" instruction where the value of the write is uniformly distributed"

As I recall, probabilistic Turing machines are no more powerful than deterministic Turing machines, though Wikipedia is more optimistic:

"suggests that randomness may add power."

https://en.wikipedia.org/wiki/Probabilistic_Turing_machine
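The quoted definition in miniature (a toy interpreter of my own invention, purely illustrative): a deterministic machine plus one extra instruction whose written value is a uniform random bit.

```python
import random

# Toy sketch: a deterministic machine extended with one instruction, RAND,
# whose written value is a uniform random bit. (The instruction set and
# names here are my own invention, not any standard formalization.)
def run(program, rng=random):
    acc = 0
    for instr in program:
        if instr == "RAND":
            acc = rng.getrandbits(1)  # the sole nondeterministic step
        elif instr == "INC":
            acc += 1
    return acc

# Fix the coin outcomes in advance and the very same machine is an
# ordinary deterministic TM -- every run is reproducible:
class FixedCoins:
    def __init__(self, bits):
        self.bits = iter(bits)
    def getrandbits(self, _):
        return next(self.bits)

print(run(["RAND", "INC"], rng=FixedCoins([1])))  # always 2
```
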


Power is not the point. The point is just that probabilistic TM's (i.e. TMs with a random oracle) are different. For example, the usual proof of the uncomputability of the halting problem does not apply to PTMs. The proof can be generalized to PTMs, but the point is that this generalization is necessary. You can't simply reduce a PTM to a DTM.


The problem is about physics, not Turing machines. You don't need to make a random choice as part of your physical model, the model only makes predictions about the distribution. You can't represent the continuous dynamical manifolds of classical or quantum mechanics on a TM either, but that's ok, because we have discrete models that work well.


I am asking myself:

Does a probabilistic Turing machine need aleatory uncertainty? (I would have called this ontological, but (1) disagrees.)

Epistemic uncertainty would mean here:

We don't know which deterministic Turing machine we are running. Right now, I see no way to use this in algorithms.

(1) https://dictionary.helmholtz-uq.de/content/types_of_uncertai...


The whole point of Turing Machines is to eliminate all of these different kinds of uncertainty. There is in point of actual physical fact no such thing as a Turing Machine. Digital computers are really analog under the hood, but they are constructed in such a way that their behavior corresponds to a deterministic model with extremely high fidelity. It turns out that this deterministic behavior can in turn be tweaked to correspond to the behavior of a wide range of real physical systems. Indeed, there is only one known exception: individual quantum measurements, which are non-deterministic at a very deep fundamental level. And that in turn also turns out to be useful in its own way, which is why quantum computing is a thing.


Right, the point is that we don't need a solution to the 'measurement problem' to have a quantum computer.


Well, yeah, obviously. But my point is that you do need a solution to the measurement problem in order to model measurements in any way other than simply punting and introducing randomness as a postulate.


And is that solution required to be deterministic? If so, that is another postulate.


You have to either postulate randomness or describe how it arises from determinism. I don't see any other logical possibility.

BTW, see this:

https://arxiv.org/abs/quant-ph/9906015

for a valiant effort to extract randomness from determinism, and this:

https://blog.rongarret.info/2019/07/the-trouble-with-many-wo...

for my critique.


> You don't need to make a random choice as part of your physical model

You do if you want to model individual quantum measurements.


Interaction in quantum physics is something that remains abstract at a certain level. So long as conservation principles are satisfied (including probability summing to one), interactions are permitted (i.e., what is permitted is required).


Yes. So? What does that have to do with modeling measurements, i.e. the macroscopic process of humans doing experiments and observing the results?


Would you agree that measurement is considered an interaction?


Sure. So?


Right, I did hijack the thread a bit, but for me the distribution is more than enough. The rest is just interpretation.


Well, no. The measurements are the things that actually happen, the events that comprise reality. The distribution may be part of the map, but it is definitely not the territory.


Isn't this just circling back to the original ontic vs epistemic though -> map vs territory?


No, because the original map-vs-territory discussion had to do with the wave function:

> Is the wavefunction epistemic or ontological?

https://news.ycombinator.com/item?id=42383854

Now we're talking about measurements which are indisputably a part of the territory.


Technically measurement devices are described by wavefunctions too.


Well, yeah, maybe. There's a reason that the Measurement Problem is called what it is.


I'm replying here since we appear to have reached the end.

Presumably measurement involves interaction among three or more degrees of freedom (e.g., an entangled pair of qubits and a measurement device). For most types of interactions (excluding exactly integrable systems for the moment), classical or quantum, we cannot analytically write down the solution. We can approximately solve these systems with computers. All that to say: any solution to any model of an 'individual' measurement will be approximate. (Of course, one of the key uses of quantum computing is improving upon these approximate solutions.)

So what type of interaction should you pick to describe your measurement? Well, there is a long list, and we can use a quantum computer to check! Part of the point I am trying to make is that when you open the box of a measurement device, you enter the world of many-body physics, where obtaining solutions to the many-body equations of motion IS the problem.


> We can approximately solve these systems with computers.

Yes, but with quantum measurements you cannot even approximate. Your predictions for e.g. a two-state system with equal amplitudes for the two states will be exactly right exactly half of the time, and exactly wrong the other half.
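The aggregate-vs-individual gap can be simulated (my own toy numbers, using a seeded PRNG, which a real quantum measurement of course is not): the distribution is predicted perfectly, while any per-shot prediction is right only about half the time.

```python
import random

# Simulate 10,000 "measurements" of an equal-amplitude two-state system
# with a seeded PRNG (pseudorandom stand-in for genuine quantum randomness).
rng = random.Random(0)
shots = [rng.randint(0, 1) for _ in range(10_000)]

# Aggregate prediction: the frequency of outcome 1 is close to 0.5.
frequency = sum(shots) / len(shots)

# Per-shot prediction: even the best fixed guess is right only ~half the time.
always_guess_zero_accuracy = shots.count(0) / len(shots)
print(frequency, always_guess_zero_accuracy)
```
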


I guess I don't have an issue with being wrong if we treat 'measurement' like a black box.


"God does not play dice with the universe" said Einstein.

But he hasn't met my Dungeon Master...



