Hacker News

I've tried out gpt-oss:20b on a MacBook Air with 24GB of RAM (via Ollama). In my experience its output is comparable to what you'd get out of older models, and the OpenAI benchmarks seem accurate: https://openai.com/index/introducing-gpt-oss/. Definitely a usable speed: not instant, but ~5 tokens per second of output if I had to guess.
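If you want a real number instead of a guess, Ollama's `/api/generate` response reports `eval_count` (output tokens) and `eval_duration` (nanoseconds), so tokens/sec falls out directly. A minimal sketch; the sample values below are illustrative, not actual measurements:

```python
# Compute output tokens per second from the eval_count and eval_duration
# fields that Ollama's /api/generate endpoint returns with each response.
# Sample numbers below are made up for illustration.

def tokens_per_second(eval_count: int, eval_duration_ns: int) -> float:
    """Output tokens generated per second (eval_duration is in nanoseconds)."""
    return eval_count / (eval_duration_ns / 1e9)

# e.g. 256 tokens over 51.2 s of eval time works out to 5 tok/s
print(tokens_per_second(256, 51_200_000_000))
```

Handy for comparing quantizations or models on the same machine without eyeballing the stream.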


