
No, but any hypothetical AGI would be reliant on machinery to survive, and humans are not. Humans could EMP all electronics (heck, the sun could do that for us), blow up all infrastructure, factories, etc., and still survive.

Being able to copy itself quickly while being totally reliant on artificial structures is a massive weakness. Software is just software - it doesn't generate electricity on its own.



That level of coordination among all, or even most, human groups is extremely unlikely.

Humanity and civilization depend on many critical infrastructures. We also try to outcompete other groups all the time. In the near future, with abundant GPUs, an AGI or ASI could easily threaten or bribe some groups into keeping it alive in exchange for powerful inventions or technologies.


> Humanity and civilization depend on many critical infrastructures

As does an AGI. Being the most intelligent, fastest-thinking engine in the world is worth exactly squat when the opposition can get together 10,000 guys with crowbars and a bad attitude in a hurry and knows where the server is.

Anyone who disagrees with that statement is welcome to explain how infectious particles without a mind, without a metabolism, without even a reproductive system (a.k.a. viruses) can pose such a goddamn hard-to-solve problem to a species that has atomic bombs, rocket engines, and knows about quantum physics.

All the doomsday scenarios about AGI rely on it being able to have AGENCY in the REAL WORLD. That agency isn't software, it's hardware, and as such limited by physical laws. And a lot of that agency has to go through humans.


I answered your comment above here: https://news.ycombinator.com/item?id=38377302


So? That's just another assumption about the capabilities of an AGI.

> it would make many backups of itself to different networks before starting its scheme

What would make me assume that would work? We have effective countermeasures against small malware programs infiltrating critical systems, so why should I assume that a potentially massive ML model could just copy itself wherever, without being noticed and stopped?

Such scenarios are cool in a SciFi movie, but in the real world there are firewalls, there are IDSs, there are honeypots, and there are lots and lots and lots of sysadmins who, unlike the AGI, can pull an Ethernet cable or flip a breaker switch.

And yes, if push came to shove, and humanity was actually under a massive threat, we CAN shut down everything. It would be a massive problem for everyone involved, it would cause worldwide chaos, massive economic loss, and everyone would have a very bad day. But at the end of the day, we can exist without power or being online. We have agency and can manipulate our environment directly, because we are a physical part of that environment.

An AGI cannot, and is not.

> We haven't managed to eliminate most dumb infectious diseases.

You do realise that this is a perfect argument for why humans would win against a rogue AGI?

We haven't managed to wipe out the bacteria and viruses that threaten us. We, who carry around in our skulls the most complex structure in the known universe, who developed quantum physics, split the atom, developed theories about the birth of the cosmos, and changed the path of an asteroid, are apparently unable to destroy something that doesn't even have a brain, or, in the case of viruses, a metabolism.

So forgive me if I don't think a rogue AGI has a good chance against us.


You're implying all of humanity would agree to the sacrifice, or even to the need to eliminate a rogue AGI in the first place.

A 2022 AI can already beat most humans in the game Diplomacy: https://www.science.org/content/article/ai-learns-art-diplom...

Moreover, in the near future when GPUs become abundant, many small groups of people can harbor a copy of an AGI in their basement, where it can plan and re-spawn whenever the situation becomes accommodating again.


> You're implying

Well, this entire discussion is built on assumptions about what would happen in very speculative circumstances for which no precedent exists, so yeah, I am allowed to make as many assumptions of my own as I please ;-)

> A 2022 AI can already beat most humans in the game Diplomacy:

And a 1698 Savery Engine can pump water better than even the strongest human.

> in the near future when GPUs become abundant, many small groups of people can harbor a copy of an AGI in their basement

Interesting. On what data is the emergence of AGI "in the near future" based, if I may ask, given that there is still no definition of the term "intelligence" that doesn't involve pointing at ourselves? When is "near future"? Is it 1 year, 2, 10, 100? How does anyone measure how far away something is, if we have no metric to determine the distance between what exists and the assumed result?

Oh, and of course, that is before we even ask the question whether or not an AGI is possible at all, which would be another very interesting question to which there is currently no answer.


That is just not true. Large-scale coordination isn't that unheard of (see WWII, treaties, and cooperation on various issues).

An AGI might not even have magic technologies to offer, and the claim that whoever sides with the AGI would have the power to subdue the rest of humanity is a bold speculation. Humans haven't even subdued viruses, so there is no reason to assume that to be true.


It doesn't need to have magic technology, just access to critical pieces of software it can hide in or merge with.

See how quite a few people reacted when Replika was nerfed. Imagine what happens when more important pieces of software are supposed to be turned off to eliminate a rogue AGI. (Many of those who argue against AGI being a danger would be the first to argue against shutting those systems down.)

Have we even managed to eliminate all the dumb computer viruses from the world?


I don't think that aligns with history: we recently went through large shutdowns and lockdowns, and we had, and have, people doing their daily work through war and massive destruction - shutting things down will not be too difficult if the need arises.


Wars are a terrible analogy because they imply there are multiple sides. Why wouldn't any intelligent being, AGI included, take advantage of that?

Not to mention the fact that in most wars there are spies and double agents. In the near future when GPUs become abundant, many small groups of people can harbor a copy of an AGI in their basement, where it can plan and re-spawn whenever the situation becomes accommodating.


AGIs hiding in basements are not an existential threat. Joking aside, what you describe isn't really different from the present (or the past) - so it doesn't warrant much concern in relation to AGI. People have followed ideologies and ideas into doom throughout history; it is not clear that AGI changes anything there.

That is, if that type of AGI ever exists in the first place. Maybe real AGI has different desires?



