I worked at Amazon and we were taught exactly the opposite. Say what you want about the company, but the writing culture there is superb. I wish other large firms valued clarity and precision as much.
This happened to me a few times with my reviews in Germany. My 1-star reviews were flagged by the businesses as "defamation" even though they contained only facts and personal opinions. I provided additional proof, like screenshots of their documents (one of them was a language school), but they deleted my reviews in the end.
I was so frustrated, I even considered deleting all of my two-hundred-something reviews from Google Maps.
You just made me check a business where I left a negative review and was threatened with a lawsuit. I didn't remove it, but Google did, automatically. Looks like I'm still algo-banned from leaving a review there (I even tried a 5-star review with no text and was told their AI found it a violation of content policy, lol). But now, above most of the obviously bought 5-star reviews with generic text, sits a 6-month-old negative review with a lot of "likes", stating that the owner files criminal complaints against negative reviewers, that they appealed to Google twice, that they defended themselves in court, and that they saw other negative reviews get removed (possibly mine).
Of course, it also has a reply from the owner, stating that this review claiming he files criminal complaints against reviewers is a complete lie, and that therefore he's filing a criminal complaint against the reviewer.
I already deleted all my reviews from Google Maps. We spent all that money and effort installing a wheelchair elevator in a listed building, and then, when I updated the info to say, basically, "it's still not exactly wheelchair-friendly, being a 120-year-old building, but there is a wheelchair elevator and an HC toilet now", Google algorithmically accused me of lying.
LLMs aren't world models; they are language models. It will be interesting to see which LLM implementation techniques turn out to be useful for building world models, but that's not what we have now.
When you ask an LLM a question about cars, it needs an inner representation of what a car is (however imperfect it may be) to answer your question. A model of "language" as you want to define it would output a grammatically correct wall of text that goes nowhere.
A map of how concepts relate in language is not a model of the world, except in the extremely limited sense that languages are part of the world.
And yeah, that wasn't clear before people created those machines that can speak but can't think. But it should be completely obvious to anybody who interacts with them for a little while.
"How concepts relate" is called a model. That it uses language to be interacted with is irrelevant to the fact that it's a model of of a worldly concept.
And what, according to you, are multimodal models? "Models of eyesight", "models of sound", or of pixels or wavelengths... C'mon.
https://danluu.com/empirical-pl/