
Exactly, we humans can use specialized models and traditional tool APIs, orchestrating all of these without understanding how they work in detail.

To do accounting, GPT-4 (or a future model) doesn't have to know how to calculate. All it needs to know is how to interface with tools like calculators and spreadsheets and parse their outputs. Every script, program, etc. becomes a thing with such an API. A lot of what we humans do to solve problems is breaking big problems down into smaller problems we already know how to solve.
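A minimal sketch of that interfacing idea: the model never does the arithmetic itself; it emits a tool call, and a thin harness parses it, runs the tool, and hands the result back. The `CALL <tool>: <args>` format and the tool names here are hypothetical illustrations, not any real model's API.

```python
import ast
import operator

# A safe arithmetic evaluator standing in for a "calculator" tool.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def _calc(expr: str) -> float:
    def ev(node):
        if isinstance(node, ast.BinOp):
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

TOOLS = {"calculator": _calc}  # each tool: name -> callable taking a string

def dispatch(model_output: str) -> str:
    """Parse a (hypothetical) 'CALL <tool>: <args>' line and run the tool."""
    if model_output.startswith("CALL "):
        name, _, args = model_output[5:].partition(":")
        result = TOOLS[name.strip()](args.strip())
        return f"TOOL_RESULT {result}"   # this string goes back into the model's context
    return model_output                  # plain text, no tool needed

print(dispatch("CALL calculator: 2 + 3 * 4"))  # -> TOOL_RESULT 14
```

The model only needs to produce and parse short, regular strings; all the actual computation happens in ordinary code.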

Real-life tool interfaces are messy and optimized for humans with their limited language and cognitive skills. Ironically, that makes them relatively easy for AI language models to figure out: compared with human language, the grammar of these tool "languages" is more regular and the syntax less ambiguous and complicated. That's why GPT-3 and GPT-4 are reasonably proficient with even some fairly obscure programming languages, and with various frameworks, including some very obscure ones.

Given a lot of these tools with machine-accessible APIs and some sort of description or documentation, figuring out how to call them is relatively straightforward for a language model. The rest is coming up with a high-level plan and then executing it, which amounts to generating some sort of script. As soon as you have that script, it itself becomes a tool that can be reused later. So the system can get better over time, especially once it starts incorporating feedback about the quality of its results. It could run mini experiments and do its own QA on its own output as well.
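The "generated script becomes a tool" loop above can be sketched as a registry plus a QA gate: a candidate script is only kept for reuse if it passes its checks. Everything here is illustrative and assumed; in practice the model would write the script body and the QA cases would come from trusted references or experiments.

```python
# Registry of tools the system has built up over time.
TOOLS = {}

def register_if_passes(name, fn, qa_cases):
    """Run the candidate tool against QA cases; keep it only if all pass."""
    for args, expected in qa_cases:
        if fn(*args) != expected:
            return False          # failed QA: discard, maybe regenerate
    TOOLS[name] = fn              # passed: now reusable in later plans
    return True

# A hypothetical "script" the model generated for an accounting sub-problem:
def net_price(gross, vat_rate):
    return round(gross / (1 + vat_rate), 2)

# One QA case acting as a mini experiment on the tool's own output.
register_if_passes("net_price", net_price, [((119.0, 0.19), 100.0)])
print(TOOLS["net_price"](238.0, 0.19))  # -> 200.0
```

Each tool that survives QA enlarges the set of solved sub-problems the next plan can lean on, which is where the improvement over time comes from.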


