> But decades of research have revealed a deeper truth
Truth is a strange thing in science. In normal language people would say “our latest interpretation”. Science would be more honest if it used language honestly.
I get what you’re saying, but the measurements are real. In some sense they are the truth.
In the article this refers to the finding that the proton is more complex than just three valence quarks.
The measurements indicating that the three-quark model is incomplete are overwhelmingly conclusive, so some degree of certainty in the language is warranted in my view.
Would a human perform very differently? A human who must obey orders (say, because they are paid to follow the prompt), with some "magnitude of work" enforced at each step.
I'm not sure there's much to learn here, besides that it's kinda fun, since no real human was forced to suffer through this exercise on the implementor side.
How useful is the comparison with the worst human results, which are often due to process rather than the people involved?
You can improve processes and teach the humans. The junior will become a senior, in time. If the processes and the company are bad, what's the point of using such a context to compare human and AI outputs? The context is too random and unpredictable. Even if you find out AI or some humans are better in such a bad context, what of it? The priority would be to improve the process first for best gains.
Just as enterprise software is proof positive of no intelligence under the hood.
I don't mean the code producers; I mean the enterprise itself is not intelligent, yet it (the enterprise) is described as developing the software. And it behaves exactly like this, right down to deeply enjoying inflicting bad development/software metrics (aka BD/SM) on itself, inevitably resulting in:
Well… it’s more a great example that great output comes from a good model with the right context at the right time.
Take away everything else and there’s a product that is really good at small tasks; that doesn’t mean chaining those small tasks together to make a big task should work.
Pretty special? They were making guns and selling computation tools to the Nazis for a bunch of those years.
I think they trade now mostly on legacy maintenance contracts (e.g. for mainframes) for banks who are terrified of rocking their technology-stack boat, and on selling off-shore consultants (which is at SIGNIFICANT risk of disruption: why would you pay IBM squillions to do some contract IT work when we have AI code agents? Probably why the CEO is out doing interviews saying you can't trust AI to be around forever).
I have not really seen anything from IBM that signals they are anything other than just milking their legacy - what have they done that is new or innovative in the past say 10 or 20 years?
I'm a former IBMer (I left 15-odd years ago), and it was obvious then that it was a total dinosaur on a downward spiral, a place where innovation happened somewhere else.
How about checking how many of the companies that existed in 1958 still exist today? If you look at it that way, then just surviving is an achievement in itself, and you might interpret their actions as extremely astute business acumen.
Given power and price constraints, it's not that you cannot run them in 5 years' time, it's that you won't want to run them in 5 years' time, and neither will anyone else who doesn't have free power.
I don't think it's controversial that these things are valuable; rather, the cost to produce them is what's up for discussion, and that's the real problem here. If the price is too high now, then there will be real losses people experience down the line, and real losses have real consequences.
I heard from a frontier coder at DeepMind that Gemini, whilst not great at novel frontier coding, is actually pretty good at debugging. I wonder if someone is going to automate a bug bounty pipeline with AI tools.
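For concreteness, a minimal sketch of what that kind of automated debugging loop could look like, under heavy assumptions: `ask_model()` is a placeholder for whatever LLM client you would wire in (Gemini or anything else), and the `pytest` command, `git apply` step, and target file path are all hypothetical, not anything described in the thread.

```python
# Hypothetical sketch of an automated debugging loop: run tests, feed the
# failure plus source to a model, apply the suggested patch, re-verify.
import subprocess
from pathlib import Path


def run_tests() -> subprocess.CompletedProcess:
    # Assumes the project has a pytest suite; swap in your own test runner.
    return subprocess.run(
        ["pytest", "-x", "--tb=short"], capture_output=True, text=True
    )


def ask_model(prompt: str) -> str:
    # Placeholder for a real LLM call; expected to return a unified diff.
    raise NotImplementedError("wire up your model client here")


def debug_once(source_file: Path) -> bool:
    result = run_tests()
    if result.returncode == 0:
        return True  # nothing to fix

    prompt = (
        "The following test run failed. Propose a minimal patch as a "
        f"unified diff.\n\n--- test output ---\n{result.stdout[-4000:]}\n"
        f"--- source ({source_file}) ---\n{source_file.read_text()}"
    )
    patch = ask_model(prompt)

    # Apply the model's suggested patch from stdin, then re-run the suite.
    subprocess.run(["git", "apply", "-"], input=patch, text=True, check=True)
    return run_tests().returncode == 0


if __name__ == "__main__":
    ok = debug_once(Path("src/app.py"))  # hypothetical target file
    print("fixed" if ok else "still failing")
```

The interesting design choice is the verification step: a model-proposed patch only "counts" if the test suite (or a crash reproducer) passes afterwards, which is what would make such a pipeline trustworthy enough to run unattended.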
If these AI coding things are so good, why is nobody capitalizing on them? Why are they selling this stuff to developers instead of taking contracts and making billions?
A couple of comments, having read the gist of the discussion here:
1) It's not about bad regulation either: it may be impossible to design good regulation
"The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design." - Friedrich Hayek
2) Everyone agrees that controlling bad externalities is good. The point is at what cost?
3) Regulation isn't the only answer to things. Perhaps the issue is that private property isn't properly enforced? Perhaps things could be solved through insurance schemes? Many complex problems have been solved without the use of government-mandated regulation.