
When ChatGPT first came out I was asking it to draw pictures of goblins and wizards using Python's turtle module - it's pretty bloody good at it. You can ask it to make the goblin angry or happy or whatever.

There is some sort of magic in which the LLM is imagining what a goblin looks like, converting those thoughts into turtle commands, and then finally wrapping them in Python. It's quite impressive.

For example, the eyes, head, hair, horns etc. are all in the correct place... how? Using imagination?
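A minimal sketch of what that kind of generated code tends to look like (hypothetical, not ChatGPT's actual output): the features land in the right place because they are computed as offsets from a head center, so eyes end up symmetric and horns end up above them.

```python
HEAD_RADIUS = 100

def feature_positions(cx=0, cy=0, r=HEAD_RADIUS):
    """Anchor points for goblin features, as offsets from the head center."""
    return {
        "left_eye":   (cx - r * 0.4, cy + r * 0.3),
        "right_eye":  (cx + r * 0.4, cy + r * 0.3),
        "mouth":      (cx,           cy - r * 0.4),
        "left_horn":  (cx - r * 0.6, cy + r * 0.9),
        "right_horn": (cx + r * 0.6, cy + r * 0.9),
    }

def draw_goblin(t, cx=0, cy=0):
    """Draw using a turtle.Turtle instance `t` (needs a display to run)."""
    pos = feature_positions(cx, cy)
    t.penup(); t.goto(cx, cy - HEAD_RADIUS); t.pendown()
    t.circle(HEAD_RADIUS)              # head outline
    for name in ("left_eye", "right_eye"):
        x, y = pos[name]
        t.penup(); t.goto(x, y); t.pendown()
        t.dot(15)                      # eyes as filled dots
```

The geometry helper is separate from the drawing calls, which is roughly why the layout stays coherent: everything is relative, so "make the goblin angry" only has to tweak a few offsets.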



I think somewhere down the rabbit hole we lost perspective on what neural nets are doing and how we got to the post-GPT world. Remember deep neural networks? The idea is that each layer _deeper_ into the network encodes more _hidden_ relationships from the training data. That's where this "magic" is embedded. At certain depths the neural net stores relationships that go beyond our human reasoning and conscious understanding, and the results generated from those depths are unsettling and tend to startle us. Interestingly enough, these are things we are perfectly capable of doing ourselves but unable to actually query in our heads, since we, just like ChatGPT, do not possess the ability to physically examine our own neural-net content.
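The "each layer deeper" idea can be sketched in a few lines (illustrative only, not any particular model): each dense layer re-represents the previous layer's output, so depth N's activations are a function of all the relationships captured at depths 1..N-1.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    # one dense layer: linear map followed by a ReLU nonlinearity
    return np.maximum(0.0, x @ w + b)

def deep_net(x, depth=4, width=8):
    # stack `depth` layers; each layer's output becomes the next
    # layer's input, i.e. a progressively more "hidden" representation
    for _ in range(depth):
        w = rng.normal(size=(x.shape[-1], width))
        b = rng.normal(size=width)
        x = layer(x, w, b)
    return x

h = deep_net(np.ones((1, 3)))   # final-layer representation, shape (1, 8)
```

The point of the stacking is exactly the comment's claim: you can read the final activations, but there is no direct way to "query" which training-data relationship any individual deep unit has come to encode.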



