
That's not really how MWI works. There isn't some fixed number of universes; in fact, there aren't really distinct universes (or timelines) at all. Attempting to count them is a bit like trying to measure the length of a coastline: they blend together once you zoom in far enough.

Running the quantum computer causes 'new timelines' to be created. Though so would ordinary atoms just sitting there; the tricky thing about quantum computers is making it so the split is temporary.

So the quantum computer gets split into multiple versions of itself, does some computation in each, and merges the results. This isn't map-reduce; there's a strictly limited set of ways in which you can do the merger, all of which are weird from a classical perspective.

You can argue for MWI based on this, because the computations that got merged still had to happen somewhere. It's incompatible with Copenhagen, more so the bigger and longer-lasting the computation gets. It's not, strictly speaking, incompatible with pilot wave theory; but pilot wave theory is MWI plus an additional declaration that "Here, you see this timeline here? That's the real one, all the others are fake. Yes, all the computation needed to instantiate them still happens, but they lack the attribute of being real."

Though that makes PWT incompatible with computationalism, and hence with concepts such as mind-uploading. Which is a bullet you can choose to bite, of course...



I don't think this is accurate. There is no classical computation happening here, and there is no reason you have to have "room" for the classical computation to happen. That seems to be assuming that the universe is a classical substrate and a classical algorithm has to be instantiated somewhere.

Quantum computing isn't classical computing. It isn't a Turing machine. It is a fundamentally different kind of information processing device, making use of non-classical physical phenomena.

I'm not a defender of Copenhagen, but the wave collapse interpretation has no difficulty explaining quantum computation. A quantum computer creates extremely large, complexly entangled wave functions that upon collapse result in states that can be interpreted as solutions to problems that were encoded in the sequence of operations that set up the entangled wave function. The Everett interpretation is easier to think about in my opinion, and I prefer thinking in terms of MWI when I try to make sense of these results. But it is not necessary.
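To make that concrete, here's a minimal numpy sketch (my own illustration, not the parent's): prepare an entangled state with a couple of gates, then read off the Born-rule probabilities and sample outcomes, no particular interpretation required.

    import numpy as np

    # Single-qubit basis state |0>
    zero = np.array([1.0, 0.0])

    # Gates
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    # Entangle: H on the first qubit, then CNOT -> (|00> + |11>)/sqrt(2)
    psi = CNOT @ np.kron(H @ zero, zero)

    # "Collapse": Born-rule probabilities over the basis {00, 01, 10, 11}
    probs = np.abs(psi) ** 2                          # [0.5, 0. , 0. , 0.5]

    # Repeated runs of the machine just sample from that distribution
    rng = np.random.default_rng(0)
    print(rng.choice(["00", "01", "10", "11"], p=probs, size=8))  # only '00' and '11' appear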

Computer science is the study of universal Turing machines and their application. But Turing machines are only “universal” in the sense that they can represent any statement of mathematical logic, and that can (we believe) be used to simulate anything we can dream up. But there are intrinsic performance limitations of Turing machines, studied by algorithmic theory, which are artifacts of the Turing machine itself, not physical limitations of the universe we live in. Searching an unordered list with a serial processor takes O(n) time, for example. Grover showed that there are non-Turing-machine quantum processes that could be used to perform the same computation in O(sqrt(n)) time. That doesn't mean we need to go looking for "where did that computation actually happen". That doesn't even make sense.
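For a sense of that scaling, a rough back-of-the-envelope comparison (mine, purely illustrative): a classical scan of an unordered list needs on the order of N lookups, while Grover's algorithm needs roughly (pi/4)*sqrt(N) oracle queries.

    import math

    # Rough query counts for unstructured search over N items (illustrative only)
    for N in (10**3, 10**6, 10**9):
        classical = N // 2                                  # expected lookups for a serial scan
        grover = math.ceil((math.pi / 4) * math.sqrt(N))    # Grover iterations, ~ (pi/4) * sqrt(N)
        print(f"N={N:>13,}  classical ~{classical:>12,}  Grover ~{grover:>8,}")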


> Turing machines are only “universal” in the sense that they can represent any statement of mathematical logic

That's not quite right. TM's in general are not universal. The "universality" of TMs has to do with the existence of a universal TM which is capable of emulating any other TM by putting a specification of the TM to be emulated on the universal TM's tape. The reason this matters is that once you've built a universal TM, you never have to build any more hardware. Any TM can then be emulated in software. That result is due to Turing.
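A toy illustration of that point (my own sketch; `run_tm` and `flipper` are made-up names, not anything from Turing): once you have one general-purpose interpreter, any particular machine is just a transition table handed to it as data.

    def run_tm(delta, tape, state="q0", head=0, blank="_", halt="halt", max_steps=10_000):
        """Interpret any Turing machine given as a transition table `delta`:
        (state, symbol) -> (new_state, symbol_to_write, head_move in {-1, +1})."""
        cells = dict(enumerate(tape))
        for _ in range(max_steps):
            if state == halt:
                break
            sym = cells.get(head, blank)
            state, cells[head], move = delta[(state, sym)]
            head += move
        return state, "".join(cells[i] for i in sorted(cells))

    # A particular machine, expressed purely as data: flip every bit, halt on blank.
    flipper = {
        ("q0", "0"): ("q0", "1", +1),
        ("q0", "1"): ("q0", "0", +1),
        ("q0", "_"): ("halt", "_", +1),
    }
    print(run_tm(flipper, "10110"))   # ('halt', '01001_')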

The relationship between TMs and mathematical logic is of a fundamentally different character. It turns out that any system of formal logic can be emulated by a TM (and hence by a UTM), but that is a different result, mainly due to Gödel, not Turing. There is also the empirical observation (famously noted by Eugene Wigner [1]) that all known physical phenomena (with the exception of quantum measurements) can be modeled mathematically, and hence can be emulated by a TM, and hence can be emulated by a UTM. But it is entirely possible that a new class of physical phenomena could be discovered tomorrow that cannot be modeled mathematically.

But here's the thing: no one has been able to come up with any idea of what such a phenomenon could possibly look like, and there is reason to believe that this is not just a failure of imagination but actually a fundamental truth about the universe: our brains are physical systems which can themselves be modeled by mathematics, and that is (empirical) evidence that our universe is in some sense "closed" under Turing-equivalence (with quantum randomness being the lone notable exception). That is the kind of universality embodied in the idea that TM's can emulate "anything we can dream up". It's called the Church-Turing thesis [2], and unlike the universality of universal TM's it cannot be proven, because a counterexample might be discovered at any time.

[1] https://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness...

[2] https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis


I appreciate the detail, but for the record I don’t think we disagree. There’s a lot that I had to oversimplify in my already long comment.


> all known physical phenomena (with the exception of quantum measurements) can be modeled mathematically

Of course you can model quantum measurements! And Wigner certainly knew so.


> Of course you can model quantum measurements!

No, you can't. You can statistically model the results of multiple quantum measurements in the aggregate but you cannot model the physical process of a single measurement because there is a fundamental disconnect between the physics of quantum measurements and TMs, namely, TMs are deterministic and quantum measurements are not. There is a reason that the Measurement Problem is a thing.


A statistical model is still a model. Lots of physics works that way. Newtonian mechanics might be deterministic, but because you often don't have perfect information about the initial state, that isn't always a useful model.

For example in statistical mechanics you work with ensembles of microstates. That doesn't mean thermodynamics is fake and only F=ma is real. Models are tools for understanding the behaviour of systems, not a glimpse into god's simulation source code where the hidden variables are.


For QM it can be shown that there are no hidden variables, at least no local ones [1]. Quantum randomness really cannot be modeled by a TM. You need a random oracle.

[1] https://en.wikipedia.org/wiki/Bell%27s_theorem
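To put a number on that, here's an illustrative sketch of the CHSH form of Bell's theorem (mine, not from the linked article): every local-hidden-variable model obeys |S| <= 2, while quantum mechanics predicts, and experiments observe, values up to 2*sqrt(2).

    import math

    # Quantum (singlet-state) correlation between analyzer angles a and b
    def E(a, b):
        return -math.cos(a - b)

    # Standard CHSH angle choices (radians)
    a, a2 = 0.0, math.pi / 2
    b, b2 = math.pi / 4, 3 * math.pi / 4

    S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
    print(abs(S))   # ~2.828..., i.e. 2*sqrt(2); exceeds the local-hidden-variable bound of 2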


> For QM it can be shown that there are no hidden variables, at least no local ones

If we assume that the experimenter has free will in choosing the measurement settings, so that the hidden variables are not correlated with the measurement settings, then it can be shown.

https://en.wikipedia.org/wiki/Bell%27s_theorem#Superdetermin...

But if we are less strict on the requirement of the free will assumption, then even local hidden variables are back on the menu.


That's true, but "less strict" is understating the case pretty seriously. It's not enough for experimental physicists to lack free will. To rescue local hidden variables, nothing in the universe can have free will, not even God. That's a bridge too far for most people. (It's a bridge too far for me, and I'm an atheist! :-)

Note also that superdeterminism is unfalsifiable. Since we are finite beings living in a finite universe, we can only ever have access to a finite amount of data and so we can never experimentally rule out the possibility that all experimental results are being computed by some Cosmic Turing Machine churning out digits of pi (assuming pi is normal). But we also can't rule out the possibility that the moon landings were faked or that the 2020 election was stolen by Joe Biden. You gotta draw a line somewhere.

BTW, you might enjoy this: https://blog.rongarret.info/2018/01/a-multilogue-on-free-wil...


> Note also that superdeterminism is unfalsifiable.

I think the many worlds interpretation of quantum mechanics is also unfalsifiable. The annoying thing about quantum mechanics is that any one of the interpretations of quantum mechanics has deep philosophical problems. But you can't choose a better one because all of them have deep problems.


> many worlds interpretation of quantum mechanics is also unfalsifiable

Yes, that's true.

> all of them have deep problems

Some are deeper than others.


That is indeed what I was referring to. To clarify, plenty of classical physical models work only with distributions too. You don't need a random oracle because your model doesn't predict a single microstate. It wouldn't be possible or useful to do so. You can model the flow of heat without an oracle to tell you which atoms are vibrating.


Yes, all this is true, but I think you're still missing the point I'm trying to make. Classical mechanics succumbs to statistics without any compromises in terms of being able to make reliable predictions using a TM. But quantum mechanics is fundamentally different in that it produces macroscopic phenomena -- the results of quantum measurements -- that a TM cannot reproduce. At the most fundamental level, you can always make a copy of the state of a TM, and so you can always predict what a given TM is going to do by making such a copy and running that instead of the original TM. You can't make a copy of a quantum state, and so it is fundamentally impossible to predict the outcome of a quantum measurement. So a TM cannot generate a random outcome, but a quantum measurement can.
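For reference, the "you can't make a copy of a quantum state" part is the no-cloning theorem; here's a compressed version of the standard linearity argument (my paraphrase, in LaTeX notation):

    Suppose some unitary $U$ could clone arbitrary states: $U(|\psi\rangle|0\rangle) = |\psi\rangle|\psi\rangle$.
    Apply it to $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$. Linearity of $U$ gives
        $\alpha\,|0\rangle|0\rangle + \beta\,|1\rangle|1\rangle$,
    while cloning demands
        $(\alpha|0\rangle + \beta|1\rangle)(\alpha|0\rangle + \beta|1\rangle)
            = \alpha^2|00\rangle + \alpha\beta|01\rangle + \alpha\beta|10\rangle + \beta^2|11\rangle$.
    These agree only when $\alpha\beta = 0$, i.e. only for the basis states themselves.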


Sure, the 'problem' is that while the Schrödinger equation is deterministic, we can only 'measure' the amplitudes of the solution. Is the wavefunction epistemic or ontological?


No, this has nothing to do with ontology vs epistemology. That's a philosophical problem. The problem here is that a measurement is a sample from a random distribution. A TM cannot emulate that. It can compute the distribution, but it cannot take a random sample from it. For that you need a random oracle (https://en.wikipedia.org/wiki/Random_oracle).
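A toy version of that distinction (my own example, with numpy's generator standing in for the oracle): a deterministic program can compute the Born distribution exactly, but producing a single outcome needs an entropy source that sits outside the deterministic rules.

    import numpy as np

    psi = np.array([1.0, 1.0]) / np.sqrt(2)   # equal superposition of |0> and |1>

    # Deterministic, TM-style computation: the Born distribution itself.
    distribution = np.abs(psi) ** 2           # [0.5, 0.5], same output on every run

    # A single measurement outcome: needs an entropy source outside the deterministic rules.
    rng = np.random.default_rng()             # stand-in for the "random oracle"
    outcome = rng.choice([0, 1], p=distribution)
    print(distribution, outcome)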


This is not really about quantum computing. A classical probabilistic Turing machine samples from a random distribution:

"probabilistic Turing machines can be defined as deterministic Turing machines having an additional "write" instruction where the value of the write is uniformly distributed"

I remember that probabilistic Turing machines are not more powerful than deterministic Turing machines, though Wikipedia is more optimistic:

"suggests that randomness may add power."

https://en.wikipedia.org/wiki/Probabilistic_Turing_machine


Power is not the point. The point is just that probabilistic TM's (i.e. TMs with a random oracle) are different. For example, the usual proof of the uncomputability of the halting problem does not apply to PTMs. The proof can be generalized to PTMs, but the point is that this generalization is necessary. You can't simply reduce a PTM to a DTM.


The problem is about physics, not Turing machines. You don't need to make a random choice as part of your physical model, the model only makes predictions about the distribution. You can't represent the continuous dynamical manifolds of classical or quantum mechanics on a TM either, but that's ok, because we have discrete models that work well.


I am asking myself:

Does a probabilistic Turing machine need aleatory uncertainty? (I would have called this ontological, but (1) disagrees.)

Epistemic uncertainty would mean here:

We don't know which deterministic Turing machine we are running. Right now, I see no way to use this in algorithms.

(1) https://dictionary.helmholtz-uq.de/content/types_of_uncertai...


The whole point of Turing Machines is to eliminate all of these different kinds of uncertainty. There is in point of actual physical fact no such thing as a Turing Machine. Digital computers are really analog under the hood, but they are constructed in such a way that their behavior corresponds to a deterministic model with extremely high fidelity. It turns out that this deterministic behavior can in turn be tweaked to correspond to the behavior of a wide range of real physical systems. Indeed, there is only one known exception: individual quantum measurements, which are non-deterministic at a very deep fundamental level. And that in turn also turns out to be useful in its own way, which is why quantum computing is a thing.


Right, the point is that we don't need a solution to the 'measurement problem' to have a quantum computer.


Well, yeah, obviously. But my point is that you do need a solution to the measurement problem in order to model measurements in any way other than simply punting and introducing randomness as a postulate.


And is that solution required to be deterministic? If so, that is another postulate.


You have to either postulate randomness or describe how it arises from determinism. I don't see any other logical possibility.

BTW, see this:

https://arxiv.org/abs/quant-ph/9906015

for a valiant effort to extract randomness from determinism, and this:

https://blog.rongarret.info/2019/07/the-trouble-with-many-wo...

for my critique.


> You don't need to make a random choice as part of your physical model

You do if you want to model individual quantum measurements.


Interaction in quantum physics is something that remains abstract at a certain level. So long as conservation principles are satisfied (including probability summing to one), interactions are permitted (i.e., what is permitted is required).


Yes. So? What does that have to do with modeling measurements, i.e. the macroscopic process of humans doing experiments and observing the results?


Would you agree that measurement is considered an interaction?


Sure. So?


Right I did hijack the thread a bit, but for me, the distribution is more than enough. The rest is just interpretation.


Well, no. The measurements are the things that actually happen, the events that comprise reality. The distribution may be part of the map, but it is definitely not the territory.


Isn't this just circling back to the original ontic vs epistemic though -> map vs territory?


No, because the original map-vs-territory discussion had to do with the wave function:

> Is the wavefunction epistemic or ontological?

https://news.ycombinator.com/item?id=42383854

Now we're talking about measurements which are indisputably a part of the territory.


Technically measurement devices are described by wavefunctions too.


Well, yeah, maybe. There's a reason that the Measurement Problem is called what it is.


I'm replying here since we appear to have reached the end.

Presumably measurement involves interaction with 3 or more degrees of freedom (i.e., an entangled pair of qubits and a measurement device). For most types of interactions (excluding exactly integrable systems for the moment), classical or quantum, we cannot analytically write down the solution to something like this. We can approximately solve these systems with computers. All that to say: any solution to any model of an 'individual' measurement will be approximate. (Of course, one of the key uses of quantum computing is improving upon these approximate solutions.) So what type of interaction should you pick to describe your measurement? Well, there is a long list, and we can use a quantum computer to check! I guess part of the point I am trying to make is that when you open the box of a measurement device, you enter the world of many-body physics, where obtaining solutions to the many-body equations of motion IS the problem.


> We can approximately solve these systems with computers.

Yes, but with quantum measurements you cannot even approximate. Your predictions for e.g. a two-state system with equal amplitudes for the two states will be exactly right exactly half of the time, and exactly wrong the other half.


I guess I don't have an issue with being wrong if we treat 'measurement' like a black box.


"God does not play dice with the universe" said Einstein.

But he hasn't met my Dungeon Master...


> … our brains are physical systems which themselves can be modeled by mathematics …

How do you know this? What is the model? Can an AI come up with the Incompleteness Theorems? It can be proven in ZFC that PA is consistent. Can an AI or Turing Machine or whatever do the same?

EDIT: I’m equating “our brains” with consciousness.


> How do you know this?

I don't. But it seems like a plausible hypothesis, and I see no compelling evidence to the contrary.

> Can an AI or Turing Machine or whatever do the same?

Can you?


Yes. It’s a basic result proven in any mathematical logic course at the senior or beginning graduate level.

If a Turing Machine can’t prove that it can’t prove the consistency of its axiomatic system from within that system (but that it could from within a larger system), but I can, then this is evidence against your belief. At least as I see it.

I have the minority view that the Incompleteness results (the proof of them) are a limitation of artificial intelligence.


> Yes. It’s a basic result proven in any mathematical logic course at the senior or beginning graduate level.

OK, but note that you've moved the goal posts here. Your original question was:

> Can an AI come up with the Incompleteness Theorems?

There is a difference between coming up with those theorems, and being able to reproduce them after having been shown how. There can be no doubt that an AI can do the latter, it's not even speculative any more. ChatGPT can surely recite the proof of the consistency of PA within ZFC.

> I have the minority view that the Incompleteness results (the proof of them) are a limitation of artificial intelligence.

Yeah, well, there's a reason this is the minority view. How do you know that the incompleteness results don't apply to you? Sure you can see that PA can be proven consistent in ZFC, but that is not the same thing as being able to see the consistency of the formal system that governs the behavior of your brain. You don't even know what that formal system is. It's not even clear that it's possible for you to know that. It's possible that your brain contains all kinds of ad-hoc axioms wired in by millions of years of evolution, and it's possible that these are stored in such a way that they cannot be easily compressed. Evolution tends to drive towards efficient use of resources. So even if you had the technology to produce a completely accurate model of your brain, your brain might not have the capacity to comprehend it.

History is full of people making predictions about how humans will ultimately prove to be superior to computers. Not a single one of those predictions has stood the test of time. Chess. Go. Jeopardy. Writing term papers. Generating proofs. Computers do all these things now, and they've come to do them in the span of a single human lifetime. I see absolutely no reason to believe that this trend will not continue.


Thanks for the response and perspective. I'll contemplate it more later.

While I personally may not have come up with the Incompleteness results humans did. The discussion is about human intelligence in general (particularly applied to bright people) not about my own intelligence and its limitations.

The second order Peano Axioms are categorical while the first order Peano Axioms are not. The first order axioms are used precisely because it was the dream of Hilbert and others to reduce mathematics to a computable system. The dream cannot be realized. We humans can prove things like Goodstein's theorem, a statement that is true in second order PA. How will a computer prove such a thing? There is no effective, computable means for determining if a given statement is an axiom in PA.

I don't know anything about the chess algorithms but my understanding is that they rely, essentially, on searching a vast number of possible outcomes. Can a computer beat Magnuson with the number of computations the computer can do limited to within one order of magnitude of what a human can do in the allotted time?

Thanks for the discussion. I'll contemplate what you've written and any response you care to make. I won't respond further since I'm delving into areas I know little about.

https://en.wikipedia.org/wiki/Chinese_room


> humans did

No. Not humans. One human.

> The discussion is about human intelligence in general (particularly applied to bright people) not about my own intelligence and its limitations.

OK, but if you're going to talk about human intelligence in general then you have to look at what humans do in general, and not what an extreme outlier like Kurt Gödel did as a singular event in human history.

> particularly applied to bright people

And how are you going to measure brightness?

> How will a computer prove such a thing?

I have no idea. (I was going to glibly say, "The same way that humans do", but one of the lessons of AI is that computers generally do not do things the same way that humans do. But that in no way stops them from doing the things that humans do.) But just because I don't know how they will do it in no way casts doubt on the near-certainty that they will do it, possibly even within my lifetime given current trends.

> I don't know anything about the chess algorithms but my understanding is that they rely, essentially, on searching a vast number of possible outcomes.

Yes, that's true. So?

> Can a computer beat Magnuson

I presume you meant Magnus Carlsen? Yes, of course. That experiment was done last year:

https://www.youtube.com/watch?v=dgH4389oTQY

> with the number of computations the computer can do limited to within one order of magnitude of what a human can do in the allotted time?

What difference does that make? But the answer is still clearly yes because the computer could simply emulate Carlsen's brain. A 10x speed advantage would surely be enough to win.


I don’t believe you are engaging in a good faith discussion. Your previous comment is worthy of further contemplation but not this one. A computer can not emulate a person’s brain. At least not now and there isn’t sufficient evidence to believe that is even theoretically or practically possible to do in the future.

Your response here implicitly admits there’s difference in human thinking and computer “thinking”. A chess program that just searches a vast number of possibilities and chooses the best one is not thinking like a human. It’s not even close.

> How will a computer prove such a thing? I have no idea

If you knew about these things you’d know that it isn’t possible to have an algorithm that halts in a finite number of steps that determines whether or not a given statement is an axiom in 2nd order PA. A computer is incapable of reasoning about such things.


> A computer can not emulate a person’s brain.

Earlier you wrote:

> Can a computer beat Magnuson with the number of computations the computer can do limited to within one order of magnitude of what a human can do in the allotted time?

If a computer can't emulate a person's brain then how are you going to assess whether or not the number of computations it's doing is "within one order of magnitude of what a human can do"?

> A computer is incapable of reasoning about such things.

You want to bet on that? Before you answer you'd better re-read your claim very carefully. When you realize your mistake and correct it, then my answer will be that humans aren't guaranteed to be able to determine these things in a finite number of steps either. There's a reason that there are unsolved problems in mathematics.


Unless you're a Cartesian dualist, the mind maps to the brain which is governed by physical laws that are, in principle, fully modeled by mathematical theory that can be simulated on a computer.


> ... physical laws that are, in principle, fully modeled by mathematical theory that can be simulated on a computer.

Consciousness so far appears to be something that can’t be modeled mathematically. Can a Turing Machine conclude that while it can’t prove the consistency of the axiomatic system it works under if one embeds that system in a larger system then it could be possible to prove consistency?

Aren’t you assuming superdeterminism? What if consciousness is not “computable” in any meaningful way? For example, suppose the theoretically most efficient mathematical model of consciousness requires more variables than the number of particles in the observable universe.


> Consciousness so far appears to be something that can’t be modeled mathematically.

That doesn't tell us anything though: almost every physical phenomenon we can now model mathematically looked impossible to model at some point in history. Just because something's hard is not a reason to expect it's magic.

If consciousness is somehow un-model-able, that will most likely be for a different reason, where the premise itself is somehow flawed, like how we can't bottle phlogiston or calculate the rate of air turning into maggots.


> ... something's hard is not a reason to expect it's magic.

Something that is impossible to accurately be modeled mathematically is not magic.


I think you are very confused about consciousness, which is nothing more than what an algorithm feels like from the inside. There is nothing mysterious about it.

But we are now way off topic.


You are claiming to be able to mathematically model consciousness, or that it is theoretically possible. So what's the model, or the proof that it can theoretically be modeled? Your statements regarding that were lacking.


Dennett’s Consciousness Explained would be a good resource if you want more worked out theory and explanation.



More than adequately covered in Dennett. There's a whole chapter devoted to the Chinese room IIRC.


It’s not a settled issue in the sense that the experts overwhelmingly agree on one side vs. the other. My link was posted to show a contrarian view. Lots of research papers arguing on either side have been written about the Chinese Room. Your argument is similar to a Christian telling an atheist it’s a settled issue since the book “The Case for a Creator” covers the issue.


Hmm... Turing machines are "universal" in the sense that they can be used to model the behavior of any finite process that can compute a computable function. So if a process exists that can compute something, it must be modelable as a Turing machine, and its operation must be analyzable as a finite series of elementary instructions that are analogous to those of a Turing machine. So I don't see how it doesn't make sense to ask the question "if this process has a better asymptotic behavior than is theoretically possible, where did the computation take place?" I would be more inclined to believe that someone is not counting properly, than it being some kind of hypercomputer. Or even more simply that the process is just physically impossible.


> if a process exists that can compute something, it must be modelable as a Turing machine

Yes.

> and its operation must be analyzable as a finite series of elementary instructions that are analogous to those of a Turing machine

Analyzable, sure. MWI is fine as an analytic tool in the same way e.g. virtual particles and holes are. Nothing here requires that any of these analytic tools are physically corresponded to.


I didn't say I favored the MWI. My point is simply that the question of where computation took place shouldn't be dismissed. Like I said, I would be more inclined to think someone is not counting properly. As a simple example, a parallel program may run in half the time, but that doesn't mean it executed half as many operations.


> the question of where computation took place shouldn't be dismissed

Sure. But it also shouldn’t be glorified as adding anything to an old debate—every QCD calculation has this dynamic of lots of classical “computations” happening seemingly simultaneously.


There isn’t hidden computation going on though.


Computation isn't hidden in a parallel program, either. But if you only count the running time, and not the total instructions retired or the computer's energy usage, you're not getting the full picture.


And there isn’t parallel computation going on either. I think there’s a fundamental misunderstanding of how quantum compute works here. There is no merge-reduce as another commenter is claiming. Even in MWI, the decoherence is one-way.


A Turing Machine operates serially one instruction at a time. But you could obviously multiply its output by multiplying the machines.

The question is then how could we efficiently multiply Turing machines? One way could be by using rays of light to do the computations. Rays of light operate in parallel. Light-based computers don't need to be based on quantum mechanics, just plain old laser-optics. They multiply their performance by using multiple rays of light, if I understand it correctly.

SEE: https://thequantuminsider.com/2024/03/19/lightsolver-laser-c...

So using multiple rays of light is a way to multiply Turing Machines economically, in practice, I would think.

I assume quantum computers similarly just multiply Turing Machine-like computations, performing many computations in parallel, similarly to light-based computers. That does not mean that such computations must happen "somewhere else" than where the quantum processes occur. And that does not require multiple universes, just good old quantum mechanics, from Copenhagen.


Parallel computers don't "multiply their performance" in computation theoretic terms. If you took half as long by computing on two Turing machines simultaneously you didn't decrease the time complexity of the algorithm. You still churned through about the same number of total operations as if you had done it serially.

The distinction between the total operations and the total time is important, because your energy requirements scale with the time complexity of what you try to compute, not with the total time it takes.

An optical computer, for example, has a limit on how densely it can pack parallel computation into the same space, because at some point your lenses overheat and melt from the sheer intensity of the light passing through them. It's possible that QCs are subject to similar limitations, and that, despite the math saying they're capable of computing certain functions polynomially, doing so requires pushing exponential amounts of energy into the same space.
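To put the work-versus-wall-clock point in toy terms (my own numbers, purely illustrative):

    # Summing N numbers: operation count vs. idealized wall-clock "steps"
    N = 1_000_000
    for workers in (1, 2, 8):
        total_ops = N - 1                            # additions performed, regardless of workers
        # each worker sums its own chunk in parallel, then the partial sums are combined
        elapsed = (N // workers - 1) + (workers - 1)
        print(f"workers={workers}  total ops={total_ops:,}  elapsed steps ~{elapsed:,}")

The operation count, and hence roughly the energy, stays the same; only the elapsed time shrinks.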


Good points. I just wanted to draw attention to the idea that Turing Machine is just one way of doing "computing". Whatever is proven about Turing Machines is proven about Turing Machines, not about all mechanical devices, like quantum computers, and "light-computers", in general.

I'm wondering: is there a universal definition of "computing"? Saying that "Computing is what Turing Machines do" somehow seems circular. :-)


Well, Turing machines are supposed to be that definition. The Church-Turing thesis still hasn't been shown to be false. Proving something of Turing machines that doesn't hold for one of those classes of devices would mean refuting it. Basically we'd need to find some problem that, memory not being an issue, one class of computer can solve in finite time while another can't solve in any finite amount of time.


“Any process can be modeled by a Turing machine” is not the same as “Every process must be isomorphic to a Turing machine, and therefore share the same limitations.”

If I have a magic box that will instantly calculate the Nth digit of the busy beaver number of any size, that can be modeled by a Turing machine. AFAIK there is no constant time algorithm though, which is what our magic box does. So a Turing machine can’t match the performance of our magic box. But nowhere is it written that a Turing machine puts an upper bound on performance!

That’s what quantum computers are. They solve certain problems faster than a classical computer can. Classical computers can solve the same problems, just more slowly.


They are in fact the same, since Turing machines are mathematical models. If you had such a magic box, capable of computing things without regard for their time complexity, we might need to revise our assumptions about whether Turing machines are in fact capable of universally modeling all forms of computation. But like I implied on the sibling comment, I'm expressing skepticism that quantum computation is actually physically realizable, to the extent that functions can be computed in fewer operations (i.e. more efficiently) than is predicted by classical computing.


Well you are free to believe whatever you want. But at this point disbelief in quantum computers is akin to flat earth denial. There are thousands of labs across the world who have performed entanglement experiments, and dozens of labs that have built working quantum computers specifically.


That's a bit extreme.

While QM and QC theory is well-established, there have been very few experiments that confirm that quantum computing actually works as theorized. There are quantum computers that are "working", but some of them (esp. the older ones) are just the kind of "quantum computers show that 15 = 3 * 5 (with high probability)". From what I read on Scott Aaronson's blog, very few of those experiments show "quantum supremacy" (i.e. classical computing physically cannot compute the results in reasonable time). This is why the Google Willow thing is considered a breakthrough.

So basically empirical "proof" that quantum computing actually works as predicted in theory is rather recent stuff.


Quantum computers are a direct consequence of quantum mechanics. Just like ropes and pulleys are a direct consequence of newtonian mechanics. If you think that quantum computers won't work, then either (1) all the theorists studying them for the past half century have somehow screwed up their basic maths, or (2) our understanding of quantum mechanics is wrong in ways that experiments have already ruled out.


They're not a "direct consequence". If they were, quantum computers would arise spontaneously in the environment. What you mean to say is that quantum mechanics permits quantum computers to exist. That doesn't mean something else can't forbid them from existing, or forbid them from ever reaching supremacy in practical applications.

>If you think that quantum computers won't work, then either (1) all the theorists studying them for the past half century have somehow screwed up their basic maths, or (2) our understanding of quantum mechanics is wrong in ways that experiments have already ruled out.

You know this thing called science? Its goal is not to know things, it's to learn things. If we already knew everything there is to know about quantum mechanics people would have built a perfectly functioning quantum computer at the first try (or they would have known from the start it was impossible). Physicists are trying to build quantum computers partly because in doing so they learn new things about quantum mechanics, and because they want to learn if it's possible to build them. Quantum computers are themselves also an experiment about quantum mechanics.


I don't know what it is that makes people think of science in an over-confident binary (yes or no) manner.

If you read the original article, you'd see that the idea of experiments about quantum supremacy is an important issue. The whole reason they want to conduct such experiments is to prove that quantum computing actually works empirically.

The question is not whether "I think" quantum computers won't work. I don't "think" anything; I'm not an expert in the field, and as such what I personally think is irrelevant. Scientifically, there aren't enough empirical experiments to conclusively prove whether they work. Whatever I "think" or you "think" is pure speculation. The chances of QC working and being scalable to a non-trivial number of qubits might be pretty good, but they haven't built a machine that can break RSA yet, for example.

And yes of course the theorists could be working on the "wrong" theory all the time. It happened with Newtonian physics. You can't build accurate GPS systems with Newtonian physics without taking into account relativity. Similarly, we already know QM does not take gravity into account. Is it possible that quantum-computers-as-we-know-it are not possible under a gravitational field? Unlikely(?), but it's possible. You can't just take your personal speculative belief as truth and call everyone else flat earthers.


Honestly, I'm skeptical that this even demonstrates quantum supremacy. Like I said in a different comment in this thread, all they did was let the device run for some time on whatever nonsense was in its memory and let it reach some meaningless state. Since it would take a classical computer a long time to simulate the qubits at the physical level to reach the same result, that shows that the quantum computer is much better at executing itself. But that's stupid. By that same logic, electronic computers are faster than themselves, because it would take a long time to simulate a computer at the physical level on another, identical computer.

If the QC isn't doing any meaningful computation, if it's not actually executing an algorithm, then it's not possible to compare their respective efficiencies. Or rather it is possible, but the answer you get is meaningless. Let's make it fair. How long would it take a ~100 qubit quantum computer to simulate at the physical level a smartphone's SoC running for 1 second?


I didn't say I don't believe in quantum computers. What I said is I don't believe in quantum computing as a practical technology. Right now "quantum computers" are just research devices to study QC itself. Not a single one has ever computed anything of use to anyone that couldn't have been computed classically.


According to this ( https://arxiv.org/pdf/2302.10778 ), there are no branches, just decoherence (minus the "collapse").


A paper proposing a radically different formulation of quantum mechanics, with only a half dozen non-author-citations, so heavy with jargon that it's hard to figure out what the author is claiming.

Sorry, my crank meter is dialed to 11 right now and I don't think this is worth spending further time on.


Jacob Barandes is a professor of physics at Harvard (not a crank).

He recently did a great podcast with Curt Jaimungal [1]. In the interview he explains that every year he searches for a good way to introduce his students to quantum mechanics, and every year he's ended up being unsatisfied with the approach and started again.

One year he decided to attempt describing systems with traditional mechanics, using probabilities (stochastic mechanics). He worked at it and worked at it, assuming that he would eventually have to take some leap to cross the chasm into the world of gauge theory, Lie Groups, and Hilbert spaces. What he found is, to his surprise, the math just seemed to fit together at some point. That is, he found a mapping, just as the path integral is a mapping into the math of the wave function, and it just kind of worked. He had been under the assumption that it shouldn't.

It turns out, that in doing his stochastic mechanics, he had used mathematical descriptions that were indivisible. That is once a given process began, it could not be sliced into smaller and smaller time slices. It had to complete and yield a result. This was what he called a non-Markov stochastic process. Apparently all previous attempts at this used Markov processes, which are divisible like Hilbert vector calculations or the path integral.
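If it helps, my rough gloss of the divisible/indivisible distinction (my notation, not necessarily the paper's): a divisible (Markov-style) process has transition matrices that factor through every intermediate time,

    \Gamma(t_3 \leftarrow t_1) \;=\; \Gamma(t_3 \leftarrow t_2)\,\Gamma(t_2 \leftarrow t_1)
    \qquad \text{for all } t_1 \le t_2 \le t_3,

whereas an indivisible process only comes with $\Gamma(t \leftarrow t_0)$ from the initial time, and no such factorization through intermediate times is assumed.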

It turns out that things like "collapse" of the wave function, and all the quantum weirdness arose from how the math worked with the wave function and Hilbert space, not from anything intrinsic to the mechanics of the universe (at least that's what his equivalent math was telling him). So in his stochastic non-Markov model, there is no collapse, just decoherence. There is always a result, and the intermediate states (where all the quantum oddities live) aren't real.

He mentions being really disappointed at seeing all the magic of quantum mechanics just kind of vanish. From what he could tell, it was just a trick of the wrong kind of math.

[1] https://www.youtube.com/live/7oWip00iXbo


Mathematicians, even from prestigious universities, have been wrong before. It's the danger of a discipline that doesn't yet have machine-checkable proofs.


I'm not saying he's correct because he's from Harvard, but he's not some crank outside the system to be lightly dismissed. He's firmly in the mainstream of physics and foundations of physics and seems to have gotten a new result. Time will tell.


Why does the computation have to "happen somewhere"? I understand the desire to push the boundary of the classical reasoning, but I don't think it's something to be taken as an axiom, and I haven't heard about any proof either.


There's no reason to think the computation is "happening" anywhere but our universe, or even "happening" at all in any real sense.

Light follows a geodesic. Atomic reactions tend to minimize energy. Rivers follow the most efficient path to the sea. Where does any of that computation "happen"? Why is it any stranger that an electron "knows how" to follow a gradient in an electric field than that a quantum computer "knows how" to perform a random circuit sampling?


The easy comparison for devs is quantum computing is like git branching.

The big question being the PR approval process.


no that's not a good comparison for MWI, it's more continuous


That sounds like you have a narrow understanding of git; there is no way it is not continuous.


git branches are not continuous


You mean as opposed to discrete? (And not in the usual sense of continuous)

There is nothing stopping you putting probability distributions in git and branching them.


Git operates on commits (branches are really nicknames for specific commits with some extra features), which are discrete steps.

That’s what’s stopping you from doing anything continuous in git.


i think we are just not connecting in what we are saying, nbd


Maybe our consciousness also branches, and all branches are approved.


Ah, so it's all pointers after all...


Thank you. This makes sense.


I’d prefer to say we just go back in time the moment it collapses. How can you disagree with me? No proof of anything and we can all contrive untestable solutions.

No wait, it’s actually God who exists in the wave collapse, and His divine intervention does the computation.


> I’d prefer to say we just go back in time the moment it collapses. How can you disagree with me? No proof of anything and we can all contrive untestable solutions.

That's close to retrocausality which is a serious theory/interpretation of quantum mechanics.


I'd like to see some serious attempts to prove any of these theories. If not, we can just wildly imagine all sorts of possibilities.



