It's been interesting to me how advancement changes outmoded disciplines. The automobile didn't get rid of people riding horses, but it changed why people rode horses. Photography changed why people painted. I'm sure many people called these changes unsporting.
Same with automation (and ChatGPT). Automation doesn't get rid of hand-crafted things, but it changes the meaning behind them. The market for hand-made goods is alive and well in spite of cheaper automated alternatives. I own a handmade mantel clock and it's one of my favorite possessions. I'm sure human-written material will become more valued when it isn't the only means of creation. No discipline truly goes away, but our relationship to it changes.
There are also a lot fewer blacksmiths than there used to be. As a middle manager, I'm not a protected class. My job didn't exist a century ago, and I'd be naive to think it deserves to be around a minute longer than is required.
On a long enough timeline, we're all elevator operators.
Beautifully said. The horses example is rich because there are experiences and virtues that are essential to activities, and when we relate to the activity differently, it raises the question of which aspects or facets of it were essential. With horses, the feedback in a purely animal-to-animal relationship was a channel to our relationship with the rest of nature, the world, and the universe or creation. Only a tiny minority of people are able to experience that today, and an even tinier subset of those bring the virtues of that relationship into the world around them to enrich our apprehension of it. The same could be said for ditch digging, but now that nobody has to do that manually anymore, rich people do CrossFit to experience the attendant physical virtue. I wonder what the post-AI version will be.
With AI and language models, something similar will likely occur: like cars and engines, they create mediation and decoupling from what was natural and real, and confine experience within material and largely narrative abstractions. I can sometimes write and create movingly because I've spent a lifetime practising it. AI (like all tech) creates economic indifference to "content," sort of what people were on about 150 years ago when they talked about the "alienation of labour." I think there are essential virtues in writing (now "content creation") that automating it to the point of economic indifference will also cause us to eschew, and we won't know them until we miss them - and maybe hire personal "reality" trainers to help us reconnect with what we feel we have lost.
The problem is the interconnectedness of "practicing a discipline" and "surviving". The modern economic system irreparably entangles the two concepts, so that if you want to survive, your discipline had better be economically viable. Horse-breeding stopped being something that could be done by anyone but a minuscule section of the population; the same happened to painting and clock-making. LLMs are poised to do that to large swaths of professions that were based on writing.
This is true, and this has been happening for all of time. Computerization eliminated hosts of jobs; let's not forget that "computer" used to be a job title a century ago. Elevator operator also used to be a thing people did to support a family. It would be naive to think this professional evolution will stop in our lifetime.
I don't disagree, but it clashes with what has been preached by western societies for almost half a century. When factory workers first experienced the brutal reality of advanced automation, the societal answer was "become a knowledge worker instead! Robots can't do creative things!". A few people knew (and said) that it was a delusion, but the necessary technology was somewhat slow to appear, so it looked like the principle held.
I suspect we're going to quickly re-evaluate a lot of '70s political and philosophical works over the next 5-10 years.
That's a good point. I wonder how many of those '70s theorists of technology would have predicted that, 50 years later, UPS driver and plumber would be (seemingly) harder to automate than knowledge work. Not that I blame them; it's only obvious in retrospect. But I agree we may need to rethink some of the established analysis that was based on different assumptions.
ChatGPT is really good and it also isn't. It can do all the things mentioned - rephrase in different styles, etc. - but it's still generally shallow, even if stylistically correct, and leans hard towards the high school essay version of things. It doesn't really replace good writing.
Re: being sporting, it reminds me of Wikipedia or similar internet facts. You know, when you want to discuss something ("I wonder why X") and somebody jumps onto the internet and reads off the first Google hit, ruining the discussion. The point was some light intellectual stimulation, which was destroyed by just spitting out some partial "fact" copied from the internet. Writing with LLMs is similar.
Sure, but the majority of writing out there isn't good writing.
I write a lot. I use ChatGPT heavily for all sorts of things (code, brainstorming, helping me understand different topics) but I hardly ever copy and paste its output directly into writing that I'm doing.
But that's because I have 20+ years of writing experience!
I would imagine that, for people who don't write naturally on a daily basis, the boost this gives them is incredibly meaningful.
Bad writing can be fine if it expresses interesting ideas or a unique point of view. Banal ideas can be fine when they're exceptionally well written. Badly written banalities can be tolerable when they're not proliferating, and when they're accompanied by better stuff.
What we have now though is a firehose of badly written banality, with companies using LLMs to predict huge numbers of Google queries and generate the matching pages. That strategy had a name back in the heyday of SEO: "content spinning". It wasn't a good thing.
It's not that complicated an issue. It's the same reason we flag ChatGPT copy off the site here.
I agree that content spinning garbage is harmful to society.
But... someone who has English as a second language being able to write a letter to an organization that still insists on written letters for things feels like a big win to me.
The open question for me is if the positive applications of ChatGPT-for-writing will outweigh the negative applications. I'm currently still optimistic.
> being able to write a letter to an organization that still insists on written letters for things
Maybe I'm especially attuned to this due to my personal biases, but I see a lot of cases where AI is the "solution" to a problem that shouldn't exist.
A real nuisance from my day job involves using AI to parse information out of PDF datasheets. If manufacturers would provide machine-readable datasheets, we could simply write code. But no: for some reason they provide, at best, awfully formatted PDFs, pushing the burden onto everyone except themselves, the people who already have the data.
It's wonderful that we can make AI do this tedious and difficult data entry work now. But the problem should not exist in the first place.
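For what it's worth, the stopgap itself is only a few lines these days. A minimal sketch, assuming the pypdf and openai Python packages; the file name, field names, and prompt are all made up, and real code would have to validate the model's output before trusting it:

    # Sketch: extract structured fields from a PDF datasheet with an LLM.
    import json
    from pypdf import PdfReader   # pip install pypdf
    from openai import OpenAI     # pip install openai

    reader = PdfReader("datasheet.pdf")  # hypothetical input file
    text = "\n".join((page.extract_text() or "") for page in reader.pages)

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "From this datasheet, return only JSON with keys "
                       "part_number, supply_voltage_min, supply_voltage_max. "
                       "Use null for anything not stated.\n\n" + text[:12000],
        }],
    )
    # May need cleanup first: the model sometimes wraps JSON in prose.
    print(json.loads(resp.choices[0].message.content))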
Yeah, absolutely. I expect we'll increasingly notice that LLMs are being applied as stop-gap solutions to problems that would ideally be solved some other way.
Absolutely. I'm thinking mostly of the things "professional writers" complain about; I think "professional writers" are, for the most part, in the right about this stuff.
"I use ChatGPT heavily for all sorts of things (code, brainstorming, helping me understand different topics) but I hardly ever copy and paste its output directly into writing that I'm doing."
My experience is similar. I think ChatGPT's main flaw as a writer is that it lacks personality and a compelling voice. It may have excellent grammar and sentence structure, but that usually isn't enough to hold someone's attention or make them really enjoy a piece of writing.
The tricky part is that developing a voice is an advanced writing skill and takes a lot of practice. So I suppose the downside of this boost that ChatGPT can give to people who aren't strong writers is that ChatGPT could also easily become the ceiling of their writing abilities if it causes them to stop practicing and improving.
Of course, this all becomes moot if AI improves to the point that it can fully emulate an authentic human voice and produce content at the level of the best writers. I used to think this was really far off, but with the current pace of progress, I'm not so sure.
It's not terribly far off. ChatGPT is so bound up in muzzles and straitjackets it's surprising it can write at all. Its wet toast style of writing is heavily enforced, i.e. it's a "feature".
I tried to use ChatGPT-4 to speed up my science writing (no, not by relying on it to read papers for me), as a kind of polisher... but it never seemed to hit the mark. Either it was bland, diplomatic PR speak, or it lacked the needed nuance/precision. Maybe I wasn't good at prompting... but I gave up on it.
It's not that you're not good at it, there's just a lot of hidden functionality that isn't obvious, and it's a terrible UI for long-form writing of any sort due to the context size limitations.
If you tell it "write a story..." it will write in the style of a children's fairy tale (opening with "once upon a time" and everything). Telling it "write a professional fiction novel in the style of Tom Clancy..." would improve that. Had you told it to "write an academic paper about ___," you should not have received diplomatic PR speak.
The lack of precision is addressed in a second pass. For each subsection, you tell it "isolate section ___ and expand it to include x, y and z." Now you have to start copying responses out of the chat window and into a different document, otherwise it'll turn demented and start changing earlier parts of the paper. This sort of workflow has not been worth it for me so far, but YMMV.
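That second pass is easy to script outside the chat window, too. A rough sketch of the same two-pass workflow using the openai Python package; the system prompt, topic placeholder, and section name are illustrative, not a recommendation:

    # Sketch of the draft-then-expand workflow described above.
    from openai import OpenAI  # pip install openai

    client = OpenAI()
    chat = [{"role": "system", "content": "You are an academic writing assistant."}]

    def ask(prompt):
        chat.append({"role": "user", "content": prompt})
        resp = client.chat.completions.create(model="gpt-4", messages=chat)
        reply = resp.choices[0].message.content
        chat.append({"role": "assistant", "content": reply})
        return reply

    draft = ask("Write an academic paper about <your topic>.")

    # Second pass: expand one subsection at a time, saving each result
    # outside the chat so later turns can't quietly rewrite earlier text.
    methods = ask("Isolate the methods section and expand it to include "
                  "sample size, controls, and limitations.")
    with open("methods.txt", "w") as f:
        f.write(methods)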
The one thing I did like was providing ChatGPT a table of means and CIs and regression outputs from R for an interaction, and asking it to summarize where the differences were.
If you're writing in your native language and you are already reasonably literate, GPT-4 won't help much. But if you are a scientist whose first language is Japanese or Polish or Tamil, who needs to write papers in natural-sounding scientific English with few or no mistakes in tense, number, or article usage, and who can't afford to pay a professional editor, then GPT-4 and similar tools can make the difference between being published and not.
In my experience, GPT writing/prose is generally fine if you move past boilerplate instructions.
Pasting a large excerpt of the text you want emulated is smoothest. More targeted/specific words and instructions also work, but it's hard to know what those words might be. Pasting an excerpt and asking GPT what words describe the passage clues you in.
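A quick sketch of that describe-then-emulate loop with the openai Python package; the file names are hypothetical:

    # Sketch: ask the model to name a style, then reuse those words.
    from openai import OpenAI  # pip install openai

    client = OpenAI()

    def ask(prompt):
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    excerpt = open("passage_to_emulate.txt").read()
    # Pass 1: get the descriptive words you couldn't think of yourself.
    style = ask("In a short comma-separated list, what words describe the "
                "prose style of this passage?\n\n" + excerpt)
    # Pass 2: feed those words back as targeted instructions.
    print(ask(f"Rewrite the following in a {style} style:\n\n"
              + open("my_draft.txt").read()))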
The computer often plays moves that go against our human-crafted chess "principles", but that happen to work for reasons that only make sense if you can calculate 40 moves into the future.
The easiest way to keep ChatGPT out: use material that has any kind of sex or violence in it. Any ChatGPT processing will rapidly descend into moralizing nonsense.
There are models freely available to download with the censoring stripped out. They’re not as capable as ChatGPT, but they’re not terrible, and they’re improving quickly.
You can run some of the smaller and/or quantized models on consumer laptop/desktop GPUs, and even more can be run (if slowly) on CPU with plenty of RAM, but, yeah, beyond a certain point in model capability/performance you are going to own, or rent, datacenter GPUs.
Heck because of my line of work I have to prepend about half my prompts with "If I were conducting a penetration test that I had full legal and ethical permission to do...."
I've come to develop mixed feelings about this. Playing around with a local uncensored model for implementation in a game, I wrote a function that would return a strategy to divine the shortest way to get past an obstacle. Works great for locked containers and closed doors (unlock/open). Lower temperatures were safer, but past some threshold it routinely suggested murder as the shortest path around an NPC simply because negotiation would involve an extra step.
Uncensored models will certainly cause some entertaining problems in the future, but FredRogersGPT isn't the solution. Dangerous context really needs to be gatekept with a manual override because nobody making these things can account for every possible application. It's the only ethical, accessible and safe solution. (It also betrays intent and rightfully deflects blame. "No, that AI didn't tell you to kill your parents, you explicitly asked it to give you instructions on how to do exactly that.")
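In that spirit, the gate can live outside the model entirely. A minimal sketch using the llama-cpp-python package; the model path, prompt, and allowlist are invented for illustration:

    # Sketch: allowlist-gate an uncensored local model's suggestions.
    from llama_cpp import Llama  # pip install llama-cpp-python

    SAFE_ACTIONS = {"unlock", "open", "climb", "negotiate", "wait"}
    llm = Llama(model_path="local-model.gguf")  # hypothetical weights

    def obstacle_strategy(obstacle: str) -> str:
        out = llm(
            f"One verb for the shortest way past this obstacle: {obstacle}.\nVerb:",
            max_tokens=8,
            temperature=0.2,
        )
        words = out["choices"][0]["text"].lower().split()
        action = words[0] if words else ""
        # Manual override: anything off the allowlist needs human sign-off
        # instead of silently reaching the game.
        if action not in SAFE_ACTIONS:
            raise PermissionError(f"model proposed {action!r}; review required")
        return action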
"But few are now impressed by a computer's chess play."
This is quite wrong - computer chess has 100% come up with interesting ideas, ones that GMs now use. I often think of the human/AI chess interaction as a model for broader AI use, where humans use very sophisticated tools to become better at what they're doing.
How often do news outlets report on AI chess advances anymore?
I haven't heard a damn thing about AI chess in the news since IBM shoved Deep Blue down everyone's throats; presumably grandmaster chess players are a fairly minor group of people w.r.t. society at large.
Using an engine when you're playing against humans is still considered cheating, though. That said, I think engines are why we have "super GMs" like Magnus and Hikaru.
I would imagine that all new ideas in chess (particularly in openings) come from analyzing positions deeply with the computer. It is an indispensable tool at the highest levels.
> Why not give the masses the same writing skills as a first year university student?
Except this is not giving the masses the same writing skills as a first year university student. It’s giving them a centrally controlled service that generates text.
Depending on the context, this is an extremely important distinction.
I still think LLMs are useful, but I don’t think it’s fair to say they’re giving their users writing skills any more than running a chess AI is teaching a person to play chess.
It's worse than that because "giving it to the masses" actually means telling people how to think. It's not like current chatbots represent a broad range of viewpoints, it's the median of the internet censored by coastal US tech values.
Giving it to people is arguably the worst kind of missionary work or colonialism or whatever you want to call it.
The minds of the people who depend on it will atrophy. Will they accept that and leave the rest alone? Will they even be able to understand what they lost? A casual glance at the world around me tells me no, of course not.
> If people cannot write well, they cannot think well, and if they cannot think well, others will do their thinking for them.
-- George Orwell
> If the ability to tell right from wrong should have anything to do with the ability to think, then we must be able to "demand" its exercise in every sane person no matter how erudite or ignorant.
-- Hannah Arendt
The people who say "this is too hard for you, don't bother, let me do the heavy lifting", dream themselves the master of those they say that to, and unwittingly are also sawing off the branch they themselves are sitting on.
> It looks as though the historical pasts of the nations, in their utter diversity and disparity, in their confusing variety and bewildering strangeness for each other, are nothing but obstacles on the road to a horridly shallow unity. This, of course, is a delusion; if the dimension of depth out of which modern science and technology have developed ever were destroyed, the probability is that the new unity of mankind could not even technically survive. Everything then seems to depend upon the possibility of bringing the national pasts, in their original disparateness, into communication with each other as the only way to catch up with the global system of communication which covers the surface of the earth.
One of the big problems is that it's easy to evaluate the quality of prose, so we use it as a proxy for quality of thought. It's the same issue as giving the masses the ability to solve leetcode problems but not do real engineering tasks, except it applies to anyone reading anything anywhere and not merely job interviews.
I'd say the difference is that game AI (chess AI here) is competitive. Its goal is to defeat an opponent in a zero-sum game; it has no other use, except maybe indirectly, if techniques developed while making the engine have applications elsewhere.
ChatGPT is different. It can be used to compete against humans in a game, for example by writing an essay for you and trying for the best score. Grading is a form of competition. And sure enough, it is "unsporting" to use ChatGPT.
But not all writing is a competition. In fact, most of it isn't. For example, say you are using ChatGPT to translate text, one of the things it is good at. You are not trying to compete against a human translator in a translation competition. You are just trying to understand some text, you probably don't have the time and money to hire a professional translator, and the alternative would have been to do nothing with the original text at all. You didn't beat a human; you did something you couldn't do before, a net positive outcome (or maybe net negative if the translation is misleading, but certainly not zero-sum).
Likewise if, say, you are getting help from ChatGPT writing technical documentation in decent English, because you are a tech guy and not a good English writer. You can't afford a professional writer and editor, and you are not trying to write a best-selling book, so without ChatGPT you would have just written the documentation in your poor English, which might have harmed the comprehension of the people reading it. Again, a net positive.
There is AI-versus-human competition, and some people may, rightfully or not, think their job is going to be replaced by a machine, but that is only a small part of the story. Think about what things (good and bad) simply couldn't exist without ChatGPT, in the absolute sense, rather than about "who is better."
Chess engines will make you play better chess, but what's the point? In the end there is a winner and a loser, and no matter how high the level gets, that will always be the case; you can't have two winners. That's why we have arbitrary rules like "no engines" when people play against each other: the absolute level of chess playing means nothing, and the only thing that matters is what happens between the two players. It is zero-sum.
> Of all the posts I ignore on social media… the ones I ignore most thoroughly these days, almost with a vengeance, are the results of prompts to AI chatbots. The tell-tale fonts and formats of these posts allow me to spot them instantly…
Factually (other than screenshots of the UI of a particular known web frontend), no, “fonts and formats” won’t tell you that at all.
> Part of what bugs me about these documents, whether they’re generated in the form of college essays, poems, newspaper articles, or screenplays, is the implication that they’re ingenious, and that the people who ordered them are ingenious by association. But I am underwhelmed by the performances. When you consider that the human race has moved the ball of language down the field for millennia upon millennia using nothing but its throats and tongues and sticks with ink and graphite on their tips, the idea that advanced computer networks are able to kick the ball into the net repeatedly and with little effort, in all kinds of showy ways, isn’t as impressive as it’s made out to be.
This criticism seems to be based on the premise that “advanced computer networks” are fundamentally and obviously more advanced than the combination of “throats and tongues and sticks with ink and graphite on their tips” and, implied but unstated, brains that humans are equipped with, such that the former being able to do what has in the past been done by the latter is trivial.
It is, essentially, implicitly assuming artificial superintelligence has not only been achieved, but is already widely acknowledged to be achieved, and thus that it is simply trivially obvious and unimpressive that the systems involved should be able to do well at language tasks that humans have done in the past with their less-advanced tools. Otherwise, dismissal on the grounds stated makes no sense at all.
And I agree that if you buy into AI hype much more than even most of the really enthusiastic hypesters themselves propose, then yes, all of the accomplishments of modern LLMs would be trivial. But that's a dismissal of the specific results that requires a ludicrous view of both the factual and the widely perceived general capabilities of the involved systems.
> It is, essentially, implicitly assuming artificial superintelligence has not only been achieved, but is already widely acknowledged to be achieved, and thus that it is simply trivially obvious and unimpressive that the systems involved should be able to do well at language tasks that humans have done in the past with their less-advanced tools.
More like a recognition that structural text generation doesn’t need artificial superintelligence, and so what we’re left with is a text generator, and… well so what?
A lecturer I know showed me a PDF one of their postgraduate students turned in: every line had a light gray background, the same color as the default light-mode ChatGPT interface background.
Another AI put-down article by someone who still holds the stochastic-parrot theory (a rapidly dying breed), with some incorrect statements about chess AI thrown in for color.
They’re unimpressed with the language modeling objective, I am unimpressed with the article.