Intuitively I agree. In the long run, we’ll know better. But for now, nobody truly knows what the new equilibrium is.
That said: it’s one type of work that is getting dramatically cheaper. The debate is about the scope and quality of that labor, not whether it’s cheap or fast (it is). But if the negatives (errors, faults) compound, and the correction can NOT be done with the same tools, then you still need humans to triage the errors. In my experience, bad code can already have negative value: it costs more to fix than to rewrite.
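To make the compounding point concrete, here is a back-of-envelope sketch (every number below is made up; it only illustrates the shape of the argument, not real defect rates or costs):

    # Toy model: each change is cheap, but every defect sits on top of the
    # previous ones, so later fixes cost more. All numbers are invented.
    def net_value(changes, value_per_change, defect_rate, fix_cost, compounding=1.3):
        total = 0.0
        fix = fix_cost
        for _ in range(changes):
            total += value_per_change      # value of the change itself
            total -= defect_rate * fix     # expected cost of triaging/fixing it
            fix *= compounding             # later defects are tangled with earlier ones
        return total

    # Cheap per-change value, modest defect rate, compounding fix cost:
    print(net_value(changes=30, value_per_change=10, defect_rate=0.2, fix_cost=5))  # well below zero

Past some point the whole batch is worth less than nothing – which is exactly the "negative value" case where you'd rather rewrite than fix.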
In the medium term, the actual scope and ability across different tasks will remain unknown. It takes a long time to gather the experience needed to tell whether something was a bad idea – just look at the graveyard of design patterns, languages and software practices. Many of them enjoyed the spotlight for a decade before the fallout hit.
Anyway, while the abilities are unknown, AI will be used everywhere for everything – which is only wise if it’s truly better at every general task – even though all the available data shows vastly different ability across domains and problem types. Many of those uses will be both (a) worse than humans and (b) expensive to reverse, with compounding effects.
The funny thing is that I have already seen enthusiasts basically acknowledge this, but argue that accepting those compounding issues (think tech debt) is the right choice now because better AI will fix them in the future. To me, this feels like the early formation of a religion (not even metaphorically). And I have a feeling that the goalpost-moving from both sides will lead the debate into an unfalsifiability deadlock.