Hacker News | computerphage's comments

Why do you think they have not trained a new model since 4o? You think the GPT-5 release is /just/ routing to differently sized 4o models?

they're incorrect about the routing statement but it is not a newly trained model

I think "blockbuster movie" is a moving target, so it's a bit hard to know


It's a relatively well defined measure of success though: a movie which is popular and high-grossing.


Yep. Totally agreed that it's well defined. I'm only pointing out that the technical execution required will shift, which seems relevant because it's likely to make this take much longer than it would without that effect.


Because before buybacks there were dividends. Did the difference between buybacks and dividends really make the difference between doing basic research and not?


It's likely. Buybacks provide higher long-term exponential growth for an otherwise steady-state company, which makes the stock more compelling than many other long-term investments.

Convert X% of a stock's value into a dividend and you pay taxes on it before you can buy more stock, whereas someone who keeps buying stock sees an exponential return. (Higher percentage of the company = larger dividends.)

A company buying back X% of its stock functions like a dividend with automatic stock purchase, but without the tax on dividends, so you're effectively buying more stock. Adding a tax on stock buybacks could eliminate that bias, but it's unlikely to happen any time soon.
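As an illustration of the compounding gap (with hypothetical numbers: a 5% annual payout, a 15% dividend tax, and a 20-year horizon), the difference can be sketched as:

```python
# Compare long-run growth of $1 invested when a company returns 5% of its
# value per year either as a taxed, reinvested dividend or as a buyback.
# All numbers here are hypothetical, purely for illustration.
payout_rate = 0.05      # fraction of value returned to shareholders each year
dividend_tax = 0.15     # tax paid on dividends before reinvestment
years = 20

# Buyback: the full payout effectively buys more stock, untaxed.
buyback_value = (1 + payout_rate) ** years

# Dividend: the payout is taxed first, then reinvested.
dividend_value = (1 + payout_rate * (1 - dividend_tax)) ** years

print(f"buyback:  {buyback_value:.3f}")   # grows at the full 5%
print(f"dividend: {dividend_value:.3f}")  # grows at only 4.25% after tax
```

Even a modest tax drag compounds into a noticeably smaller final value over a couple of decades.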


Indeed. There are trillions of dollars /per year/ paid to workers in the US alone.


Like, there is an argument that can be made here, but "there's just not enough money in the world to justify this" definitely isn't it


Just because trillions are currently spent on employees does not mean that more trillions exist to spend on AI. And if, instead, one's position is that those trillions will be spent on AI instead of employees, then one is envisioning a level of mass unemployment and ensuing violence that will result in heads on pikes.


"due to privacy concerns about privacy"

This strikes me as a particularly funny typo


Probably wrote "due to concerns about privacy" then realized it should be "due to privacy concerns" and forgot to remove the original bit.

Many such cases.


I often do that frequently. I should do it, but forget to not fully proof read after a quick edit. I also regularly leave out n't a lot when changing where a negation happens (see above).


Definitely not using Apple's epic proofread feature.


"In hindsight, the ad slogan 'Sunshine on your privacy' was a little too obvious, even for modern consumers. Let's Dazzle them with the next shiny thing instead."


"Sometimes" and "sense" are both wrong. I don't think this library is very good


Why do you think systems need to be sentient to be risky?


OP isn't talking about systems at large, but specifically about LLMs and the pervasive idea that they will become AGI and go rogue. Pretty clear context given the thread and their comment.


I understood that from the context, but my question stands. I'm asking why OP thinks that sentience is necessary for risk in AI


So, how would you do it?


Why is it relative humidity that matters here instead of absolute humidity?


Relative humidity is an easier unit to work with because it automatically scales with temperature.
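To make that concrete, here is a rough sketch (using the Magnus approximation for saturation vapor pressure; the constants are standard approximations) showing that the same relative humidity corresponds to very different absolute water content at different temperatures:

```python
import math

def absolute_humidity(temp_c: float, rh_percent: float) -> float:
    """Approximate absolute humidity in g/m^3 from temperature (deg C)
    and relative humidity (%), via the Magnus formula for saturation
    vapor pressure. Constants are common textbook approximations."""
    # Saturation vapor pressure in hPa (Magnus formula)
    svp = 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))
    vp = svp * rh_percent / 100.0  # actual vapor pressure, hPa
    # Ideal-gas conversion: grams of water vapor per cubic meter of air
    return 216.7 * vp / (temp_c + 273.15)

# The same 60% RH holds roughly three times more water at 30 C than at 10 C:
print(absolute_humidity(30, 60))  # ~18 g/m^3
print(absolute_humidity(10, 60))  # ~5.6 g/m^3
```

So a rule of thumb stated in relative humidity implicitly tracks the temperature-dependent capacity of air, which is why it is the unit mould guidance tends to use.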


I think GP knows that and it's exactly their point: why do you need slightly less water to stop mould growth as the temperature slightly drops? (If anything, one might expect the opposite anecdotally - it's e.g. hot and humid bathrooms that are particularly prone.)


Wait, aren't they cancelling leases on non-AI data centers that aren't under Microsoft's control, while spending much more money to build new AI-focused data centers that they own? Do you have a source that says they're cancelling their own data centers?


https://www.datacenterfrontier.com/hyperscale/article/552705... might fit the bill of what you are looking for.

Microsoft itself hasn't said they're doing this because of oversupply in infrastructure for its AI offerings, but they very likely wouldn't say that publicly even if that were the reason.


Thank you!

