Obviously you could argue something about breadth of knowledge, but there's no way the current models, as they're set up, can be processing things the same way the human brain does.
Yes. But the exploitable vector in this case is still humans. AI is just a tool.
The non-deterministic nature of an LLM can also be used to catch a lot of attacks. I often use LLMs to look through code, libraries, etc. for security issues, vulnerabilities, and other problems as a second pair of eyes (rough sketch of that workflow below).
With that said, I agree with you. Anything can be exploited and LLMs are no exception.
As long as a human has control over a system AI can drive, it will be as exploitable as the human.
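To make the "second pair of eyes" idea concrete, here is a minimal sketch of that kind of review pass, assuming the OpenAI Python client; the model name and file path are placeholders, not anything from the comment above:

    # Rough sketch of an LLM second-pass security review.
    # Assumptions (illustrative only): the OpenAI Python client,
    # the "gpt-4o-mini" model name, and the file under review.
    from pathlib import Path

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    source = Path("app/auth.py").read_text()  # hypothetical file under review

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a security reviewer. List potential vulnerabilities "
                    "(injection, auth bypass, unsafe deserialization, secrets in "
                    "code) with line references and a short rationale for each."
                ),
            },
            {"role": "user", "content": source},
        ],
    )

    print(response.choices[0].message.content)

Because the model's output varies run to run, a couple of passes can surface different findings, which is exactly the non-determinism being leaned on here.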
Sure, this is about the same as positing P != NP, but the confidence that a language model will somehow become a secure, deterministic system betrays a fundamental lack of language comprehension skills.
The naive view is that it's supposed to create public-interest measures with real-valued results.
Unfortunately, it's pretty easy to see something like "X won't be seen in public after December 31st, 2026" eventually showing up, essentially creating an assassination market.
Guys, do you think AI is like new-age bloatware? Like, it's gigabytes of stuff you're never going to need, and now RAM prices are skyrocketing because they have no idea how to make it efficient.
In fact, they state the opposite: to really make it go, they need petawatts of energy and compute. It's like Windows incarnate.
The EU should start forking all these pseudo-open-source projects and preserve open access for the future. The tech world and politics are becoming toxic partners with the most anti-social planning.
Human brain is about 12 watts: https://www.scientificamerican.com/article/thinking-hard-cal...
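For scale, a rough back-of-envelope comparison; the GPU wattage and cluster size below are my own assumptions, not figures from the linked article:

    # Back-of-envelope power comparison (illustrative assumptions only).
    BRAIN_WATTS = 12         # figure from the Scientific American article above
    GPU_WATTS = 700          # approximate TDP of one high-end datacenter GPU (assumption)
    CLUSTER_GPUS = 10_000    # purely illustrative training-cluster size (assumption)

    print(f"one GPU  ~ {GPU_WATTS / BRAIN_WATTS:.0f}x the brain's power draw")
    print(f"cluster  ~ {CLUSTER_GPUS * GPU_WATTS / 1e6:.0f} MW, "
          f"~ {CLUSTER_GPUS * GPU_WATTS / BRAIN_WATTS:,.0f}x the brain")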