
Are you sure? Do you have a source for that? In this article[0], which was discussed here on HN this week, they claim:

> In fact, the O1 model used in OpenAI's ChatGPT Plus subscription for $20/month is basically the same model as the one used in the O1-Pro model featured in their new ChatGPT Pro subscription for 10x the price ($200/month, which raised plenty of eyebrows in the developer community); the main difference is that O1-Pro thinks for a lot longer before responding, generating vastly more COT logic tokens, and consuming a far larger amount of inference compute for every response.

Granted "basically" is pulling a lot of weight there, but that was the first time I'd seen anyone speculate either way.

[0] https://youtubetranscriptoptimizer.com/blog/05_the_short_cas...


