
From this point of view I don't understand the gap between SOTA models in practice and the academic/open models. The former are all MoEs at this point, starting with GPT-4. But the open models, apart from DeepSeek V3 and Mixtral, are almost always dense models.


MoEs require less computation but more memory, so they're harder to set up in small labs.
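
To make the compute/memory trade-off concrete, here is a minimal sketch of top-k expert routing in PyTorch. The layer sizes, expert count, and top_k value are arbitrary illustrative choices, not anyone's production config: only top_k experts run per token, so FLOPs scale with top_k, but all num_experts weight matrices must stay resident in memory.

  # Minimal sketch of a Mixture-of-Experts layer with top-k routing.
  # Illustrative only: sizes, expert count, and top_k are made up.
  import torch
  import torch.nn as nn
  import torch.nn.functional as F

  class TinyMoE(nn.Module):
      def __init__(self, d_model=512, d_ff=2048, num_experts=8, top_k=2):
          super().__init__()
          self.top_k = top_k
          # All experts live in memory, even though only top_k run per token.
          self.experts = nn.ModuleList(
              nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                            nn.Linear(d_ff, d_model))
              for _ in range(num_experts)
          )
          self.router = nn.Linear(d_model, num_experts)

      def forward(self, x):                        # x: (tokens, d_model)
          scores = self.router(x)                  # (tokens, num_experts)
          weights, idx = scores.topk(self.top_k, dim=-1)
          weights = F.softmax(weights, dim=-1)
          out = torch.zeros_like(x)
          # Compute scales with top_k, not num_experts:
          # each token is processed by only its top_k selected experts.
          for k in range(self.top_k):
              for e, expert in enumerate(self.experts):
                  mask = idx[:, k] == e
                  if mask.any():
                      out[mask] += weights[mask, k:k+1] * expert(x[mask])
          return out

With num_experts=8 and top_k=2, roughly 2/8 of the expert FLOPs run per token, while all 8 experts' parameters sit in GPU memory; that is exactly the trade-off that makes MoEs awkward on a small lab's hardware.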


I assumed GPT-4o wasn't an MoE, being a smaller version of GPT-4, but I've never heard either way.



