Would it be feasible to fine-tune a large, capable model (like the recent LIMA) on the source code (and maybe a few high-quality libraries) of a niche language, such that it's much better at helping you write and understand that language?
Imagine how many doors it would open if you could fine-tune models capable of writing language bindings for you and keeping them up to date.
Totally. GPT-4 can already do this untuned on niche languages and libraries. The main remaining problem is that you can't tell when it's hallucinating a function or whatever, though.
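For what it's worth, one crude mitigation is to check that any API the model suggests actually resolves in the installed package before you trust it. A rough Python sketch (the dotted names being checked at the bottom are just illustrative examples, not anything the model produced):

```python
import importlib

def symbol_exists(dotted_name: str) -> bool:
    """Return True if a dotted name like 'numpy.linalg.pinv' resolves to a real object."""
    parts = dotted_name.split(".")
    obj, remainder = None, []
    # Find the longest importable module prefix, then walk the remaining attributes.
    for i in range(len(parts), 0, -1):
        try:
            obj = importlib.import_module(".".join(parts[:i]))
            remainder = parts[i:]
            break
        except ImportError:
            continue
    if obj is None:
        return False
    for attr in remainder:
        if not hasattr(obj, attr):
            return False
        obj = getattr(obj, attr)
    return True

# 'numpy.linalg.pinv' is real; 'numpy.linalg.pinverse' is the kind of plausible-sounding
# name a model might hallucinate.
for name in ["numpy.linalg.pinv", "numpy.linalg.pinverse"]:
    print(name, "->", symbol_exists(name))
```

It obviously only catches nonexistent names, not subtly wrong semantics, but it's cheap to run over a model's output.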