Forget the idea of an "AI" then, because the idea of "intelligence" makes the argument harder. Just think of a "new technology."

Is it possible that a new technology could destroy the world? Of course. It could've turned out that nuclear weapons would incinerate the atmosphere upon detonation, as some were worried they would. It could be that the next technological innovation will kill us; there's nothing preventing it in the laws of physics.

AGI is a specific technology we are worried about, because the whole premise is: once we build something that is extremely capable at a wide variety of things, one thing it will be capable of is destroying the world, even by accident.

We're already using AI techniques to help with problems in biology like protein folding. Take that a few dozen iterations forward, and these systems will be helping design medicines and vaccines that no human could design alone. At that point, what's to stop the system from creating a super-flu that kills everyone? Forget about intent here; what about a bug?

ChatGPT often misunderstands queries. Take something like ChatGPT but 100x more capable: do you really think people won't be using it to do things? And given that they will, it could easily have a bug that "oops, incinerates the atmosphere" as a side effect.
