m101's comments | Hacker News

> But decades of research have revealed a deeper truth

Truth is a strange thing in science. In normal language people would say “our latest interpretation”. Science would be more honest if it used language honestly.


I get what you’re saying, but the measurements are real. In some sense they are the truth.

In the article this refers to the finding that the proton is more complex than three valence quarks.

The measurements indicating that the three-quark model is incomplete are overwhelmingly conclusive, so some degree of certainty in the language is warranted in my view.


It's a pop sci magazine, of course they use language like that. Actual academic papers are different.

Not sure what this has to do with the article, it just seems like a nitpick. What did science do to you?

This is a great example of there being no intelligence under the hood.


Would a human perform very differently? A human who must obey orders (say, because they are paid to follow the prompt), with some "magnitude of work" enforced at each step.

I'm not sure there's much to learn here, besides that it's kinda fun, since no real human was forced to suffer through this exercise on the implementer side.


> A human who must obey orders (like maybe they are paid to follow the prompt). With some "magnitude of work" enforced at each step

Which describes a lot of outsourced development. And we all know how well that works.


Using outsourced coders is a skill like any other. There are cultural things you need to consider etc.

It's not hard, just different.


> Would a human perform very differently?

How useful is the comparison with the worst human results? Which are often due to process rather than the people involved.

You can improve processes and teach the humans. The junior will become a senior, in time. If the processes and the company are bad, what's the point of using such a context to compare human and AI outputs? The context is too random and unpredictable. Even if you find out AI or some humans are better in such a bad context, what of it? The priority would be to improve the process first for best gains.


> Would a human perform very differently?

Yes.


A human trained with 0.00000001% of the money OpenAI uses to train models will perform better.

A human with no training will perform worse.


I have seen codebases double their LoC count after "refactoring" done by humans, so I would say no.


No (human) developer would _add_ tests. /s


Just as enterprise software is proof positive of no intelligence under the hood.

I don't mean the code producers, I mean the enterprise itself is not intelligent yet it (the enterprise) is described as developing the software. And it behaves exactly like this, right down to deeply enjoying inflicting bad development/software metrics (aka BD/SM) on itself, inevitably resulting in:

https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...


Well… it’s more a great example that great output comes from a good model with the right context at the right time.

Take away everything else, and there’s a product that is really good at small tasks; that doesn’t mean that chaining those small tasks together to make a big task should work.


Prove it beats models of different architectures trained under identical limited resources?


To all those people who think IBM doesn't know anything: calculate this number:

(# of companies 100+ years old) / (# of companies that ever existed over those 100+ years)

Then you will see why IBM is pretty special and probably knows what they are doing.
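The ratio above can be sketched numerically. The counts below are purely hypothetical placeholders for illustration (no real statistics are implied); the point is only that the survivors are a vanishingly small fraction of all companies ever founded.

```python
# Back-of-envelope sketch of the survival ratio described above.
# Both counts are assumed, illustrative numbers, not real data.
companies_100_plus_years_old = 1_000          # hypothetical: survivors founded 100+ years ago
companies_founded_in_that_span = 10_000_000   # hypothetical: all companies started since then

survival_ratio = companies_100_plus_years_old / companies_founded_in_that_span
print(f"{survival_ratio:.4%}")  # a tiny fraction, which is the commenter's point
```

Whatever the real counts are, the numerator is dwarfed by the denominator, which is the argument being made.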


Pretty special? They were making guns and selling computation tools to the Nazis for a bunch of those years.

I think they now trade mostly on legacy maintenance contracts (e.g. for mainframes) for customers like banks who are terrified of rocking their technology-stack boat, and on selling off-shore consultants (which is at SIGNIFICANT risk of disruption: why would you pay IBM squillions to do some contract IT work when we have AI code agents? Probably why the CEO is out doing interviews saying you can't trust AI to be around forever).

I have not really seen anything from IBM that signals they are anything other than just milking their legacy. What have they done that is new or innovative in the past, say, 10 or 20 years?

I'm a former IBMer (15-odd years ago), and it was obvious then that it was a total dinosaur on a downward spiral, a place where innovation happened somewhere else.


I don't think selling things to nazis was worse than selling things to israel today.


Yep, both equally bad and morally repugnant.


As evidenced by the fact that they are a 100+ year old company that still exists. People forget that.


How about checking how many companies exist today vs how many existed in 1958? Looked at that way, just surviving is an achievement in itself, and you might interpret their actions as extremely astute business acumen.


Given power and price constraints, it's not that you cannot run them in 5 years' time; it's that you won't want to, and neither will anyone else who doesn't have free power.


There's a big difference between:

"creating goods of actual value"

and

"creating goods of actual value for any price"

I don't think it's controversial that these things are valuable; rather, the cost to produce these things is up for discussion, and that's the real problem here. If the price is too high now, then people will experience real losses down the line, and real losses have real consequences.


I heard from a frontier coder at DeepMind that Gemini, whilst not great at novel frontier coding, is actually pretty good at debugging. I wonder if someone is going to automate a bug bounty pipeline with AI tools.


If these AI coding things are so good, why is nobody capitalizing on them? Why are they selling this stuff to developers instead of taking contracts and making billions?

Are they stupid, or is it all a big lie?


A couple of comments, having read the gist of the comments here:

1) It's not about bad regulation either: it may be impossible to design good regulation

"The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design." - Friedrich Hayek

2) Everyone agrees that controlling bad externalities is good. The question is at what cost.

3) Regulation isn't the only answer to things. Perhaps the issue is that private property isn't properly enforced? Perhaps things could be solved through insurance schemes? Many problems in complex systems have been solved without government-mandated regulation.

