Hacker News

Would it be feasible to fine-tune a large, capable model (like the recent LIMA) on the source code (and maybe a few high-quality libraries) of a niche language, such that it's much better at helping you write and understand it?

Imagine how many doors it would open if you could fine-tune models capable of writing language bindings for you and keeping them up to date.
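To make this concrete, here's a rough sketch of what such a fine-tune could look like: LoRA-tuning a small open code model on a directory of the language's sources. Everything here is my own assumption, not an existing pipeline — the Hugging Face transformers/peft/datasets stack, the starcoderbase-1b base model, the corpus path and glob (I used .nim as a stand-in niche language), and all the hyperparameters.

    # Hypothetical setup: pip install transformers peft datasets accelerate
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (
        AutoModelForCausalLM,
        AutoTokenizer,
        DataCollatorForLanguageModeling,
        Trainer,
        TrainingArguments,
    )

    BASE_MODEL = "bigcode/starcoderbase-1b"  # stand-in; any causal code LM works

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token  # needed for batched padding
    model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

    # Train low-rank adapters instead of all the weights: cheap, and the result
    # is a few-MB file you can re-train whenever the language or libraries move.
    model = get_peft_model(
        model,
        LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"),
    )

    # Corpus: the language's compiler/stdlib sources plus a few good libraries.
    # The path and extension are illustrative; point this at your own checkout.
    dataset = load_dataset("text", data_files={"train": "corpus/**/*.nim"})["train"]
    dataset = dataset.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
        batched=True,
        remove_columns=["text"],
    )

    Trainer(
        model=model,
        args=TrainingArguments(
            output_dir="niche-lora",
            per_device_train_batch_size=4,
            num_train_epochs=1,
            learning_rate=2e-4,
        ),
        train_dataset=dataset,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    ).train()

    model.save_pretrained("niche-lora")  # saves adapter weights only

Because only the adapters train, "keeping the bindings up to date" amounts to re-running this over a fresh checkout, which is what makes the idea attractive for a fast-moving niche language.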



Totally. GPT-4 can already do this, untuned, on niche languages and libraries. One of the main problems is still that you don't know when it's hallucinating a function or a whole API, though.
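One cheap guardrail for that, at least when the target is a Python library, is to mechanically check every call the model suggests against the real module before trusting it. A minimal sketch below; suggested_calls is made-up example data, and the niche-language equivalent would be running the compiler or type-checker over the model's output instead.

    import importlib

    def call_exists(dotted: str) -> bool:
        """True if e.g. 'json.dumps' resolves to a real attribute."""
        module_name, _, attr_path = dotted.partition(".")
        try:
            obj = importlib.import_module(module_name)
        except ImportError:
            return False
        if not attr_path:
            return True  # bare module name that imported fine
        for part in attr_path.split("."):
            if not hasattr(obj, part):
                return False
            obj = getattr(obj, part)
        return True

    # Pretend these came out of a model completion; the second is invented.
    for call in ["json.dumps", "json.fast_dumps"]:
        print(call, "->", "exists" if call_exists(call) else "likely hallucinated")

This only catches invented names, not wrong signatures or semantics, but it turns the worst class of hallucination into a mechanical check.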



