
Why would it take a couple of days? Is it not just a matter of uploading the model to their registry, or are there more steps involved?


Ollama depends on llama.cpp as its backend, so if any changes are needed to support something new in this model's architecture or tokenizer, they have to land there first.

Then the model needs to be converted to GGUF (the model format llama.cpp uses), properly quantized, tested, and uploaded to the model registry.
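For reference, the conversion and quantization steps look roughly like this. This is a sketch, not the exact commands the Ollama maintainers run; the model path, output names, and quant type are placeholders, but the tools (`convert_hf_to_gguf.py` and `llama-quantize`) are llama.cpp's own:

```shell
# Build llama.cpp's quantization tool
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --target llama-quantize

# Convert the Hugging Face checkpoint to a full-precision GGUF file
# (/path/to/Mistral-Nemo is a placeholder for the downloaded weights)
python convert_hf_to_gguf.py /path/to/Mistral-Nemo \
    --outfile mistral-nemo-f16.gguf --outtype f16

# Quantize the f16 GGUF down to 4-bit (Q4_K_M is a common default)
./build/bin/llama-quantize mistral-nemo-f16.gguf \
    mistral-nemo-q4_k_m.gguf Q4_K_M
```

If the architecture isn't yet supported, the conversion script fails at the first step, which is why llama.cpp support has to be merged before any of this can happen.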

So there are several stages in the pipeline, but overall the devs on both projects keep things running pretty smoothly, and I'm regularly impressed at how quickly both get updated to support new models.


Issue to track Mistral NeMo support in llama.cpp: https://github.com/ggerganov/llama.cpp/issues/8577


> I'm regularly impressed at how quickly both projects get updated to support such things.

Same! Big kudos to all involved



