
> but no one other than a quantum researcher cares about that problem.

The better question may be why the rest of us don’t care about the same things. If we sat here in 2014 and pondered why neural net folks cared about particular problems, I’m not sure where we’d be. Faith and Vision are truly spiritual things, even in tech.



By 2014, NN folks, of whom I had been one for a decade by that point, were solving real-world problems at above-human capability.

We'd been performing at around human capability on certain tasks since around 2004.

There were actual industrial applications in 1994.

You'd have to go back to the 1950s to find neural network research that had no practical application to anyone but NN researchers.


So go back to the 1950s and the point stands. What does the actual start date matter in terms of whether the point is right or wrong?


> So go back to the 1950s and the point stands.

Yes. "Quantum computers will revolutionize computing by year 2100" is a claim I can take seriously. "Quantum computers will revolutionize computing real soon now" is a claim I am not taking seriously.

And yes, the 1950s AI researchers also boasted that what ended up taking many decades would take only months: https://en.wikipedia.org/wiki/Dartmouth_workshop


So quantum computers will be relevant sometime around 2075.

How much money should we be putting into something with a time horizon longer than the working careers of everyone here?


Nobody wants your money


I'd like a rebate on my taxes then.


Well, I'll be dead by that timeline, so it matters a little.


> I’m not sure where we’d be.

They didn't solve any new problems. They took old ideas and just cranked the dial all the way up to 11. Then an entire market formed around building the hardware necessary to crank it even further. This all represents a revolution in computing power, not in what neural net folks were doing at all.

> Faith and Vision are truly spiritual things, even in tech.

Yea, but money isn't.


Because it's all just a scaled-up perceptron, isn't it?

The reality is that we had to solve the vanishing/exploding gradient problem. That wasn't accomplished by faster compute, but by devising better architectures and using activation functions with the properties we needed.

That's how ANNs got out of the "somewhat useful" category and into the "really useful" one. Hardware pushed it even further, but without the work of computer scientists and mathematicians we would never have gotten to the point where the hardware matters.
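To make the vanishing-gradient point concrete, here is a minimal toy sketch (my own illustration, not from the thread): backpropagating through a deep stack of layers, a sigmoid's derivative (never above 0.25) shrinks the gradient toward zero, while ReLU's derivative (0 or 1) lets it survive. The depth, width, and random setup are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    depth, width = 50, 64

    def sigmoid_grad(x):
        s = 1.0 / (1.0 + np.exp(-x))
        return s * (1.0 - s)          # never larger than 0.25

    def relu_grad(x):
        return (x > 0).astype(float)  # exactly 0 or 1

    def backprop_norm(act_grad):
        """Norm of a gradient pushed back through `depth` random linear+activation layers."""
        g = np.ones(width)                                             # gradient at the top layer
        for _ in range(depth):
            W = rng.normal(0.0, np.sqrt(2.0 / width), (width, width))  # He-style init
            pre = rng.normal(size=width)                               # stand-in pre-activations
            g = W.T @ (g * act_grad(pre))                              # chain rule for one layer
        return np.linalg.norm(g)

    print("sigmoid:", backprop_norm(sigmoid_grad))  # collapses by dozens of orders of magnitude
    print("relu:   ", backprop_norm(relu_grad))     # stays around its starting order of magnitude

That, in caricature, is why better activations and architectures mattered before the hardware did.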


Even in the bad old days you got your golden tickets every so often.

I was working on an alternative approach: much smoother activation functions at much higher precision, 64-bit in production on GPUs and 128-bit in testing on CPUs. It worked well enough that I don't think the field would have slowed down much. The only issue is that we'd be working on networks ten times smaller.


Because when a new technology is demonstrated with a really esoteric use case, it comes off as useless for anything more practical.


You are engaging in some well-known logical fallacies, particularly hindsight bias. Linking a past event to an outcome we now know about ignores the many other past events that had no clear future or came to nothing. It is also important to highlight that this comment is not against quantum computing.


Well, yes, obviously. Since we can't predict the future, we can't know which wild bets are going to pay off; the idea is to expose yourself to as many as possible, rather than cut yourself off and say "aha, but that's the hindsight bias fallacy!". And this one in particular has a proven use case: Shor's algorithm, given a large enough number of logical qubits, can break all existing RSA-based encryption.

Also, I'm really struggling to identify with the naysaying around this thread. Every year we improve error correction (bringing down the number of physical qubits needed to represent a logical qubit) and increase the number of qubits. So what exactly is the worry here? That the secrets the government has dragnetted won't be relevant anymore by the time we can crack them, due to the prevalence of Kyber?
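For anyone unfamiliar with why Shor's algorithm threatens RSA, here is a purely classical toy sketch (my own illustration, with a made-up toy modulus): if you can find the order r of a random a modulo N, which is the step a quantum computer would speed up, the factors of N usually fall out of gcd(a^(r/2) ± 1, N).

    from math import gcd

    def order(a, n):
        """Smallest r > 0 with a^r = 1 (mod n), by brute force.
        Shor replaces this exponential search with quantum period finding."""
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def shor_classical_postprocess(n, a):
        if gcd(a, n) != 1:
            return gcd(a, n)       # lucky: a already shares a factor with n
        r = order(a, n)
        if r % 2 == 1:
            return None            # odd order: try another a
        y = pow(a, r // 2, n)
        if y == n - 1:
            return None            # trivial square root of 1: try another a
        return gcd(y - 1, n)       # nontrivial square root gives a proper factor

    n = 3233                       # 61 * 53, a toy RSA-style modulus
    for a in range(2, n):
        f = shor_classical_postprocess(n, a)
        if f and 1 < f < n:
            print(f"a = {a}: found factor {f}, so {n} = {f} * {n // f}")
            break

The classical part is easy; the whole bet is on quantum period finding making the order-finding step feasible for 2048-bit moduli.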


Because I have shit to do and this doesn’t relate to any of that shit?


It’s a question of overpromising and underdelivering. I’m not anti-science. The scientists should definitely go forth and science. It’s the marketing they are bad at. The boy who cried wolf is not a training manual.


Scientists have to get good at marketing because they will not get funding otherwise. They have to pitch their research to an interested panel, though, whose members have background knowledge. This means they can do no more than bend the truth and exaggerate the potential impact of their research.

Popular science magazines are quite different. They publish for views, and nobody cares how accurate they are.

Presentations by a public company are somewhere in between. The target audience is not nearly as qualified as a scientific grant panel, but it can sell shares or even sue the company if it underdelivers on its promises.



