
One can use https://platform.openai.com/tokenizer to confirm directly that the tokenization of "b l u e b e r r y" is not significantly different from simply breaking the string down into its individual letters. The excuse often given, that "it cannot count letters in words because it cannot see the individual letters," would not apply here.
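The same check can be done locally. Here is a minimal sketch, assuming the tiktoken library is installed and using the o200k_base encoding (the choice of encoding is an assumption; the web tokenizer lets you pick the model instead):

  import tiktoken

  # Inspect how the spaced-out word tokenizes.
  enc = tiktoken.get_encoding("o200k_base")
  text = "b l u e b e r r y"
  token_ids = enc.encode(text)

  # Decode each token individually to see whether the letters
  # end up as (roughly) one token per letter.
  pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in token_ids]
  print(pieces)  # likely something close to ['b', ' l', ' u', ' e', ' b', ' e', ' r', ' r', ' y']

If the output is essentially one token per letter, the model does "see" the individual letters in this prompt, which is the point of the comment above.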

