Hacker News | cyanydeez's comments

There's no way the human mind converges with current tech because there's a huge gap in wattage.

The human brain runs on about 12 watts: https://www.scientificamerican.com/article/thinking-hard-cal...

Obviously you could argue something about breadth of knowledge, but there's no way the current models, as set up, are doing the same processing as the human brain.
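The wattage gap is easy to put numbers on. A minimal back-of-envelope sketch, assuming the ~12 W brain figure above, a ~700 W TDP for a single NVIDIA H100 GPU, and a hypothetical 10,000-GPU training cluster (the GPU count is an illustrative assumption, not a figure from any lab):

```python
BRAIN_WATTS = 12          # Scientific American estimate cited above
GPU_WATTS = 700           # assumed H100 SXM TDP
GPUS_PER_RUN = 10_000     # hypothetical cluster size

cluster_watts = GPU_WATTS * GPUS_PER_RUN
ratio = cluster_watts / BRAIN_WATTS

# Cluster draws ~7 MW, roughly half a million times the brain's budget
print(f"Cluster: ~{cluster_watts / 1e6:.0f} MW, about {ratio:,.0f}x the brain")
```

Even with generous error bars on every number, the gap spans five to six orders of magnitude.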


You would think so, but you should read about how they bear-proof trash cans in Yellowstone.

They can't. Why? Because the smartest bear is smarter than the dumbest human.

So these AIs are supposed to interface with humans and use nondeterministic language.

That vector will always be exploitable, unless you're talking about AI that no human controls.


Yes. But the exploitable vector in this case is still humans. AI is just a tool.

The non-deterministic nature of an LLM can also be used to catch a lot of attacks. I often use LLMs to look through code, libraries, etc. for security issues and vulnerabilities, as a second pair of eyes.

With that said, I agree with you. Anything can be exploited, and LLMs are no exception.


As long as a human has control over a system AI can drive, it will be as exploitable as the human.

Sure, this is the same as positing P ≠ NP, but the confidence that a language model will somehow become a secure, deterministic system fundamentally lacks language-comprehension skills.


Yeah, actually. He's basically a spokesperson, and a non-credible one. Someone like Stephen Miller is more credible.

There is an arbiter, though. That's the point of the article. Unless you tell us who inputs the bits for a bet, you're just spewing ChatGPT babble.

Someone flips the bits to the bet.


The naive view is that it's supposed to create public-interest measures with real-valued results.

Unfortunately, it's pretty easy to see something, eventually, like "X won't be seen in public after December 31st, 2026" essentially creating an assassination market.

Basically, it boils finance bros down to sociopathy.


They need to spend money on actual experts to curate their data to improve.

Instead, finance bros are convinced by the argument that number goes up.


Is that not exactly what https://www.mercor.com/ does?

Wait, you know that frontier labs actually do this, right?

Sometimes it feels like:

    def is_it_true(question):
        return profit_if_true(question) > profit_if_false(question)

AI will make it cheaper, faster, better, no problem. You can eat the cake now and save it for later.

Random private equity newspaper isn't the most convincing source.

This has more to do with finance bros than anything else. America's regulatory system is defenseless.

Guys, do you think AI is like new-age bloatware? Like, it's gigabytes of things you're never going to need, and now RAM prices are skyrocketing because they have no idea how to make it efficient.

In fact, they state the opposite: to really make it go, they need petawatts of energy and compute. It's like Windows incarnate.


The EU should start forking all these pseudo-open-source projects and preserve open access for the future. The tech world and politics are becoming toxic partners with the most anti-social planning.

