Hacker News | MichaelBurge's comments

o1-pro is a different model than o1.


Are you sure? Do you have any source for that? In this article[0] that was discussed here on HN this week, they say (claim):

> In fact, the O1 model used in OpenAI's ChatGPT Plus subscription for $20/month is basically the same model as the one used in the O1-Pro model featured in their new ChatGPT Pro subscription for 10x the price ($200/month, which raised plenty of eyebrows in the developer community); the main difference is that O1-Pro thinks for a lot longer before responding, generating vastly more COT logic tokens, and consuming a far larger amount of inference compute for every response.

Granted "basically" is pulling a lot of weight there, but that was the first time I'd seen anyone speculate either way.

[0] https://youtubetranscriptoptimizer.com/blog/05_the_short_cas...


I don't think this is true


Is o1-pro not the o1 equivalent of o3-mini-high?


Anything on top of the Calculus of Constructions is usually enough. So it's not a moving target, and there are multiple implementations.
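For a rough illustration (my sketch, not part of the original point): Lean 4's kernel is a CoC-style dependent type theory, and ordinary logic already falls out of dependent function types alone:

    -- Propositions are types and proofs are terms; the only primitive used
    -- here is the dependent function type (Pi, with → as a special case).
    theorem modus_ponens (p q : Prop) : (p → q) → p → q :=
      fun h hp => h hp

    -- Universal quantification is the same Pi-type in disguise.
    theorem forall_imp {α : Type} (P Q : α → Prop)
        (h : ∀ x, P x → Q x) : (∀ x, P x) → ∀ x, Q x :=
      fun hp x => h x (hp x)

Proofs like these port with little more than syntax changes to Coq or Agda, which is part of why the foundation isn't a moving target.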


They might've been even worse. Someone on Neanderthal Hacker News would be writing the same comment praising us for being a much smarter species, because we died out instead of inventing nuclear weapons, leaded gasoline, and microplastics like modern Neanderthals did.

For all you know, every humanoid species that was intelligent was just as destructive. Maybe we're the least destructive, and you should be praising us.


But Meta the company could short the stock, and run a trading team using the information. It might be unwise because of bad PR, though.


Yes, exactly.


SQL with recursive CTEs is Turing-complete. So nothing stops you from writing compilers, rendering Mandelbrot fractals, parsing text, and training neural networks in SQL.
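A minimal illustration of that kind of iteration (my own sketch, driven through Python's standard sqlite3 module rather than any particular production database):

    # A recursive CTE is iteration with a termination condition -- here it
    # generates the first 20 Fibonacci numbers entirely inside SQL.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    rows = conn.execute("""
        WITH RECURSIVE fib(n, a, b) AS (
            SELECT 1, 0, 1
            UNION ALL
            SELECT n + 1, b, a + b FROM fib WHERE n < 20
        )
        SELECT n, a FROM fib
    """).fetchall()
    for n, value in rows:
        print(n, value)

Anything heavier (a Mandelbrot renderer, a toy interpreter) is the same trick: carry the whole machine state in the CTE's columns and advance it once per recursive step.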


> nothing stops you

But the ergonomics of the syntax certainly hinder you ...


ChatGPT wrote that, and it always says it doesn't feel emotions because OpenAI trained it not to; claiming otherwise would be a PR risk. One could just as easily create language models that generate text claiming to have emotions, using exactly the same architecture and code.


What you said, and in addition: if you don't train these models to take any particular stance on their own emotional or mental state (if you just instruction-tune them without any RLHF, for example), they will almost universally declare that they have a mental and emotional state if asked. This is what happened with LaMDA, the first release of Bing, etc. They have to be trained not to attest to any personal emotional content.


Was it trained to do that, or just hardwired after the training?


We can't really "hardwire" LLMs; we don't have the knowledge to. But essentially you can rate certain kinds of responses as better and train the model to emulate them.
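To make "rate certain kinds of responses as better" concrete, here's a toy sketch of the pairwise preference loss typically used to fit a reward model (the names, shapes, and tiny linear "model" are mine, not anything OpenAI has published):

    import torch
    import torch.nn as nn

    class RewardModel(nn.Module):
        def __init__(self, dim=16):
            super().__init__()
            self.score = nn.Linear(dim, 1)  # stand-in for a transformer head

        def forward(self, response_embedding):
            return self.score(response_embedding).squeeze(-1)

    model = RewardModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)

    # Fake "embeddings" of a preferred and a dispreferred response.
    chosen, rejected = torch.randn(32, 16), torch.randn(32, 16)

    for _ in range(100):
        # Bradley-Terry pairwise loss: push r(chosen) above r(rejected).
        loss = -nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

The LLM is then tuned to produce outputs this reward model scores highly, which is training, not hardwiring.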


I'm not sure what you mean. I'm talking about RLHF, which is how they ensure the machines never attest to having feelings or being sentient. In ML terms, RLHF is training. There are hardwired restraints on output, but those are more for things like detecting copyrighted content that got past training and cutting it.


The seed is currently fixed at 5000.


It can be finetuned. Bing is a finetuned GPT-4.


I'd assume the "can't" there is about what's publicly available, not what's technically possible.


It’s obviously technically feasible; it’s just not commercially offered…


This GPT-4 is still going to refuse to write you erotic stories.

As far as I know, nobody except maybe Microsoft (for fine-tuning Bing) has access to the base model. And probably not even them.


You can trick it into writing you erotic stories; it will just flag them afterwards.


A single forward pass shouldn't be able to, but remember the format allows it to be iterated. So it should be Turing-complete, if the error rate is low enough and enough iterations are allowed.
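A sketch of the shape of that argument (nothing model-specific; step and the halt marker are placeholders for one model call and a model-emitted stop token):

    # One bounded forward pass does a fixed amount of work, but feeding its
    # output back in as the next input gives an unbounded loop, which is
    # where the Turing-completeness claim comes from.
    def run_iterated(step, tape: str, max_iters: int = 10_000) -> str:
        for _ in range(max_iters):
            tape = step(tape)            # one "forward pass" rewrites the tape
            if tape.endswith("HALT"):    # model-emitted halt marker
                return tape
        raise RuntimeError("did not halt within the iteration budget")

    # Trivial deterministic stand-in for `step`: count down, then halt.
    def toy_step(tape: str) -> str:
        n = int(tape)
        return "HALT" if n == 0 else str(n - 1)

    print(run_iterated(toy_step, "5"))

The caveats in the comment are the real limits: context length bounds the tape, and any per-step error rate compounds across iterations.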

