Feels like the typical extrapolation: "Look how fast we've done the first 80%! It's only a matter of time until we reach 100%."
Except that it never works like that, does it? The further we go, the harder it becomes. It's impressive how fast they went from nothing to almost-fully-autonomous cars, but actually-fully-autonomous cars may never happen, who knows?
As a developer, I honestly feel more threatened by the coming energy crisis (the end of fossil fuels in the next couple of decades) than by AI replacing my job.
When I was 17 (I’m now 50) I was at a careers fair at my school. I was told “don’t go into computer programming, there won’t be any jobs because the computers will program themselves”
Similar here. It was a computer store sales guy telling preteen me not to bother with the MS-DOS book I had in hand, because in a few years people would just instruct their computers verbally, in plain language.
> Fast forward to today, and I am willing to bet good money that 99% of people who are writing software have almost no clue how a CPU actually works
The fact that many people do not know exactly what they are doing shows in the results. The people whose goal is to write software that is as robust and efficient as possible still have to know and control the details. It's like driving a car: you do not have to be an engineer to drive one, but the more you want to push the limits of performance, the more you need to know about the details. And as far as AI is concerned, despite the predictions and grandiose promises, we are obviously still a long way from replacing humans as drivers. I see no reason why software development should be any different. There are so many very complex issues involved that are not mentioned in the article. Just understanding the requirements of software will stretch the capabilities of AI for a few more decades.
> we are obviously still a long way from replacing humans as drivers.
This is only because we as a society have an extremely low tolerance for errors in automated driving and essentially require by default superhuman performance (a self-driving car with an error rate of the median human would never be allowed to be set loose by itself). In scenarios where a 0.1% error rate, 1% error rate, or maybe even 10% error rate are acceptable, AI is making huge strides.
> Just understanding the requirements of software will stretch the capabilities of AI for a few more decades.
I hope so. I'm not sure. And for a variety of reasons that's scary. What gives you a timeline of a few more decades?
AI is making strides in comparatively easy environments, i.e. highways and grid-like suburbs – when it doesn't crash into white trucks.
Now, what tends to be forgotten in the AI-average vs. human-average comparison is that humans can also drive, e.g., in Turin or Paris at rush hour, on mountain roads in the snow, or on Cornwall's roads in a driving rainstorm.
It's not that I believe self-driving AI will never progress to this level, but let us be honest when comparing; they still drive themselves into fully visible barriers in broad daylight or run over cyclists at night.
> It's like driving a car: you do not have to be an engineer to drive one, but the more you want to push the limits of performance, the more you need to know about the details.
This is a potential argument for why users don't have to know the arcane details of how the internals of a CPU work, but on the other hand it's a good argument for why programmers had better have quite a good knowledge of exactly that.
I know exactly how a CPU works, but I can’t say I ever spend much time thinking about that while I’m writing code. Thinking about low level details is often useful, but I can’t say I sit there pondering logic gates while trying to solve programming problems very often.
Writing a program is still the most efficient way to explain a lot of things, even to another human - I've been in plenty of meetings where hours of explanation and examples only added confusion, whereas 20 minutes of pseudocode or 5 minutes of real code made it very clear what we were talking about.
If you just want to do the same thing you currently do but faster, AI will handle that. But modelling a business process properly and making it explicit will still bring huge value to those who care to put the effort in.
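To illustrate what I mean, here's a made-up example (the rule and every name in it are invented): the kind of pricing question that eats an hour of meeting time can be stated in a few lines of Python, at which point everyone argues about the actual rule instead of each other's vocabulary.

    # Illustrative only: a hypothetical discount rule, the kind of thing that
    # takes an hour to pin down in a meeting but minutes to state as code.
    from dataclasses import dataclass

    @dataclass
    class Order:
        total: float          # order value
        customer_years: int   # how long they have been a customer
        is_first_order: bool

    def discount_rate(order: Order) -> float:
        """Return the fraction to subtract from the order total."""
        if order.is_first_order:
            return 0.10                      # welcome discount
        if order.customer_years >= 5 and order.total > 1000:
            return 0.15                      # loyalty plus volume
        if order.total > 1000:
            return 0.05                      # volume only
        return 0.0

    assert discount_rate(Order(total=1200, customer_years=6, is_first_order=False)) == 0.15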
Right. My thoughts too. He said they are in "stealth mode." Yeah, so stealthy that he is publishing information on his company. Kinda anti-stealth if you ask me.
This keeps me awake at night I must admit. How do I best future proof my career?
I was sceptical about this until I started playing with GPT-3 and had it not only write code for me, but also "explain" code to me. Sure, it's kind of limited right now, but it can only be a matter of time before this all radically improves.
Maybe I should focus on system design and translating the messy real world into systems. That's the hardest bit of my job currently. I was also thinking of moving down the stack and getting deeply into security engineering or something like that (not that this is immune from AI either!!).
Just take a look at the generated code and explanations. A surprising amount of it is subtly but fundamentally wrong, because GPT is just a regurgitation engine. The issues may look superficial, but when you start looking at why they happen, you realize the truth. The ML tools are usually great at writing boilerplate that's the same every time. The instant you do anything else, they fall over. They're statistical autocomplete, not any kind of important change to the process of programming.
I don't see any reason to believe the current approaches can extend to something that actually changes programming. They're not based on understanding code; they're based on generating text that matches what they would expect to see given the context. They have no model of what code means, so they can't model why code is sometimes subtly different when there are no local contextual cues. And when there are, your prompt needs to reproduce those contextual cues for the model to key off of. In other words, you as the programmer are still directing the generation of the code. You're just doing it via an undocumented and somewhat unpredictable autocomplete.
This doesn't remove the need to have someone who knows what they're doing in the loop. Best case is that it reduces the amount of time you spend typing by a little bit. As long as your job is to know what you're doing rather than to generate text, the current systems are no threat to it.
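To give a flavour of the "subtly but fundamentally wrong" failure mode, here is a hypothetical illustration (not actual model output, just the shape of the problem): code that looks plausible, passes a casual glance and the easy cases, and is still wrong in a way only domain knowledge catches.

    # Hypothetical illustration, not actual GPT output: plausible-looking code
    # that is right for the common cases and wrong where it matters.

    def is_leap_year_plausible(year: int) -> bool:
        # Looks fine, and works for most years you would casually try...
        return year % 4 == 0

    def is_leap_year_correct(year: int) -> bool:
        # ...but the real rule has century exceptions (1900 no, 2000 yes),
        # and nothing in a short prompt forces the model to include them.
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    assert is_leap_year_plausible(2024) == is_leap_year_correct(2024)
    assert is_leap_year_plausible(1900) != is_leap_year_correct(1900)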
This was pretty much my take too until I played with GPT-3. I suspect that you're still basically right, but the code it was writing, whilst a bit quirky and sometimes full of errors, showed a simulacrum of creativity. I use that term because I know it's an illusion - it is what you suggest - but it's amazing to me that this trick can be pulled off at all. I got the model to write some fairly esoteric functional programming code. I then pasted some of my own code and got a convincing "explanation". If nothing else, if GPT-3 can simulate understanding in some narrow cases through what is essentially a giant search engine trained on a gazillion data points, then it's a good trick.
It's quite possible that this avenue doesn't scale to anything more broadly useful. We shouldn't mistake solving 20% of a problem for being on the right path. Maybe this remains autocomplete on steroids and it's a dead end. I was honestly just surprised by GPT-3's apparent abilities, smoke and mirrors though they may be!
Well, we absolutely have models in our heads. That's how we can understand what programs do.
It's possible our minds are also huge and complicated neural networks, though I suspect that description is incomplete at best.
But the point is that current tools are trained on text generation. Something that would change programming would have to train on the meaning of programs. It's a rather different task, as it's no longer statistical. Doing it properly requires metacognition as well, to avoid falling into the trivial inconsistencies in most programming languages. And connecting that with real-world tasks that it hasn't seen before would require an understanding of the real world.
I'd call something with all of those capabilities AGI. I honestly don't think any system short of that will ever be more than an autocomplete, because it can only ever fit things together based on some statistics.
It was in 1982 that I first heard about fourth-generation programming languages, in which you only had to specify the problem and you would get the system for free. But now, 40 years later, most programming is still done in third-generation programming languages.
I am rather skeptical about the idea that AI is going to do away with programming soon. Yes, ML has shown some impressive results and will definitely show further improvements in the coming decades, but I think it will still take some time before the efficiency of electronics-based ML systems surpasses that of organic ML systems.
Please note that this blog post is from a start-up that aims to build ML systems intended to replace programming. So this blog post is also, in a sense, a kind of job advertisement and/or investor pitch.
Even if this were possible, starting over in the problem domain would require considerable up-front investment. In the meantime, your competitors will continue to iterate and improve. Depending on the domain, it might take many years to get to feature parity, even with vastly increased productivity. Many companies or industries will also shy away from such an investment because the benefit might not actually be worth the cost. Also, I think Joel Spolsky's lesson will still apply -- it's human nature to think it will be better the second time through: https://www.joelonsoftware.com/2000/04/06/things-you-should-...
You can't guess the future, so future-proofing is next to impossible. Providing solutions to people's wants and needs is the way to go. There you focus on the problem, and then you look at what tools are available to provide the needed solution. People will always need food, recreation, sex, a place to live, and religion, to name a few. Figure out how to fill needs in those areas and you'll be OK.
> 99% of people who are writing software have almost no clue how a CPU actually works, let alone the physics underlying transistor design.
My undergraduate education was in the early 90s, and at no point in my life have I ever had much of a clue regarding the physics underlying transistor design.
EDIT: also, while at one time I probably did have a reasonably solid grasp of how CPUs work, there's been an awful lot of advancement in the field over the decades, and I won't describe my current understanding as anything more than a cartoon model.
For a CS person transistor design would be useless, but for EE it’s still core knowledge. But CPU design constraints are still the same now as they were in the 90s: speed of light, cache and coherency architecture, Amdahl’s law, pipelining. You’re way ahead of most coders if you understand even the basics of memory hierarchy.
I think the more realistic model of the future of programming given programs like Copilot is what’s happened in the SQL and compiler domain.
SQL and compilers changed the goal for many programmers from writing custom extractors, or serving as human-to-machine translators, to writing useful statements of intent about extraction and actions. You still need to know WHY you're extracting or adding your code in both cases, you often need to understand the substructure enough to dive in and debug when results are unexpected or "not optimal enough", and much of the work eliminated by the gained efficiency was ultimately boilerplate that, once gone, allowed programmers to take on more ambitious project scope than they did previously, because they weren't spending their time rewriting a new data storage system or translating actions into machine language for the umpteenth time.
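As a toy sketch of that extractor-to-intent shift (the data and names are invented; sqlite3 is used only because it ships with Python):

    import sqlite3

    # Invented toy data.
    orders = [("alice", 120.0), ("bob", 80.0), ("alice", 40.0)]

    # "Custom extractor" style: spell out how to walk the data and aggregate.
    totals = {}
    for customer, total in orders:
        totals[customer] = totals.get(customer, 0.0) + total
    big_spenders = sorted(c for c, t in totals.items() if t > 100)

    # "Intent" style: declare what you want; the engine decides how to get it.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (customer TEXT, total REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", orders)
    big_spenders_sql = sorted(row[0] for row in conn.execute(
        "SELECT customer FROM orders GROUP BY customer HAVING SUM(total) > 100"))

    assert big_spenders == big_spenders_sql == ["alice"]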
I feel we'll see a similar progression - these code generators, very optimistically assuming a world where they work deterministically "well enough" to trust with even core business logic, will be treated as black-box valid-action generators. But in a world where action generation is free yet under-specification or incorrect specification still yields the wrong result, we still have something curiously resembling programming - the art of programming becomes one of maintainably chaining assemblages of black boxes into cohesive, maintainable superstructures.
I suspect that, about code with a structure simple and safe enough to trust to black-box generators, we'll say the same thing we currently say about SQL:
> Thank god I don’t have to redo all that work every time I start a new project
And, as with SQL, the project requirement boundaries will move to match your increased output capacity.
Much like the old saying "What Andy giveth, Bill taketh away", perhaps it can be modernized: "What Copilot giveth, your PM taketh away".
I agree with the gist, and I actually just started doing a course on AI today as a result of not wanting to get left behind.
However, this bit reads as needlessly hyperbolic to me:
> The engineers of the future will, in a few keystrokes, fire up an instance of a four-quintillion-parameter model that already encodes the full extent of human knowledge (and then some), ready to be given any task required of the machine.
I mean okay, sure, eventually. But people were predicting hand-wavy everything-solutions like this sixty years ago in Star Trek. It's not very imaginative. Not to mention, this four-quintillion-parameter model will be hugely inefficient for simple tasks. I think it'll be a long time before we care that little about efficiency.
But here's a much more near-term scenario I'm imagining:
You need to stand up a new microservice. You have an off-the-shelf "learned microservice" web framework that you reach for. You write a small training set of example request/response JSON, not unlike unit-tests. Maybe the training set includes DB mutations too. You start testing out the service by hand, find corner-cases it doesn't handle correctly, add more training examples until it does everything you need.
Now, in addition to saved effort vs hand-coding (which may or may not be the case, depending on how simple the logic is), you've got what I've started to think of as a "squishy" system component.
Maybe, because AI is fuzzy, this service can handle malformed data. Maybe it won't choke on a missing JSON key, or a weird status code. Maybe it can be taught to detect fishy or malicious requests and reject them. Maybe it can tolerate typos. Maybe it can do reasonable things with unexpected upstream errors (log vs hard-fail vs etc).
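To make the scenario concrete, here's a sketch of what such a training set might look like. This is purely hypothetical - no such framework exists that I know of - the point is just the shape of the "spec": a handful of example request/response pairs that grow the way a test suite does.

    # Purely hypothetical sketch of the "learned microservice" spec above.
    training_examples = [
        {
            "request":  {"method": "POST", "path": "/invoices",
                         "body": {"customer_id": 42, "amount": 99.5}},
            "response": {"status": 201, "body": {"invoice_id": "<any uuid>"}},
            "db_mutation": ("insert", "invoices",
                            {"customer_id": 42, "amount": 99.5}),
        },
        {
            # Corner case found by poking the service: reject negative amounts.
            "request":  {"method": "POST", "path": "/invoices",
                         "body": {"customer_id": 42, "amount": -1}},
            "response": {"status": 422,
                         "body": {"error": "amount must be positive"}},
            "db_mutation": None,
        },
    ]
    # Workflow: poke the service, find a case it gets wrong, append another
    # example, retrain - much like adding a failing unit test and fixing it.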
This is the really compelling thing for me: so much of what makes software hard is its fragility. Things have to be just so or everything blows up. What if instead, the component pieces of our digital world were a little squishy? Like human beings?
Yeah, fair. Maybe we'll have to change the level of abstraction to where the neural net determines a math equation, instead of openly translating to and from different numbers?
> enforce GDPR compliance
I actually think this would be an excellent use-case because it's such a sprawling problem (and because law is already "squishy" in this sense, because it has to be, because it's all about the messy real-world). Imagine watchdogs being able to hand companies a neural-net that continuously audits them for compliance, which works across different companies and systems (you'd probably have a human make the final ruling once a company is flagged, but still)
When did you say the full car autopilot is getting released? Oh, and I forgot about foreign-language translation. As a person who speaks multiple languages: we still cannot translate from one language to another in a way that doesn't look ugly.
It's sort of non-shitty, which is a glaring indictment of the difficulty of solving problems with AI. Languages are a closed system; the cars have no chance.
I’ve used it a lot from German and Russian. I lived in Germany for a few years. Auto translation was absolutely essential for navigating official things like government websites, banking, flat renting, etc. Now, all of my in-laws are exclusive Russian speakers. When my partner isn’t around to translate, we use Google Translate’s conversation feature. It works great. My partner overhears a lot of our conversation. She’s never needed to clarify anything.
Machine:
===
One day in the spring, at the hour of an unprecedentedly hot sunset, two citizens appeared in Moscow, at the Patriarch's Ponds. The first of them, dressed in a summer gray pair, was short, well-fed, bald, carried his decent hat with a pie in his hand, and on his well-shaven face were glasses of supernatural size in black horn-rimmed. The second, a broad-shouldered, reddish, swirling young man in a checkered cap twisted at the back of his head, was in a cowboy shirt, chewed white trousers and black slippers.
===
Human:
===
At the sunset hour of one warm spring day two men were to be seen at Patriarch's Ponds. The first of them--aged about forty, dressed in a greyish summer suit--was short, dark-haired, well-fed and bald. He carried his decorous pork-pie hat by the brim and his neatly shaven face was embellished by black hornrimmed spectacles of preternatural dimensions. The other, a broad-shouldered young man with curly reddish hair and a check cap pushed back to the nape of his neck, was wearing a tartan shirt, chewed white trousers and black sneakers.
===
"шляпу пирожком" has been auto-translated to "hat with a pie" - ridiculous and inaccurate AI translation. It is only one of many, many examples.
The example above was from a random book. I knew AI was going to fail.
Machine translation isn’t for translating literature at the moment. Maybe that’s why you’re feeling it falls short. For conversational vernacular or straightforward instruction, it’s great. I can’t remember the last time I used pork-pie hat in a conversation, for instance.
Perhaps you could qualify your initial statement that we can’t translate literature in a way that isn’t ugly. That would be true. But machine translation is a huge asset every day to people in need of understanding important things in a foreign language. Quite a miracle really.
Yes, it is fine for trivial conversations. I could give you a few more examples from a technical book, and you would say "ah, it isn't for translating technical books either", and so on.
And this is the reason why I'm not buying "the end of classical Computer Science". AI doesn't work with text very well (in reference to your comment that "Machine translation works pretty well" - no, it doesn't), and often can't even translate/recognize conversations. For example, auto-generated YouTube CCs often suck.
> But look at Stable Diffusion. If you had taken a GAN a few years ago and looked at its generative art potential.
Art is a little bit different, since it's subjective, and an artist can say "oh, I just see things this way". Fluctuations in an artwork can always be seen as features, not bugs.
With translations you have to be more precise. The same for Computer Science, you often need to understand nuances to do the precise work.
You're saying "a few years", but I started using auto-translation software at least 15 years ago, maybe even earlier. We already had this level of progress 15 years ago: yes, we were able to auto-translate simple conversations.
It's constantly improving, but at the same time there is no breakthrough, and machine translation still sucks.
ML has shown more promise in MT than any classical algorithm. Unless you believe there is a fundamental limitation to ML, or a new frontier on the horizon in classical CS, I don’t see a path for classical CS to hold a candle to ML in the machine translation domain.
Also, I disagree that translations need to be precise. I read a collection of short stories recently called The Icarus Gland. I highly recommend it, especially if you can read it in the original language (Russian). The translation was simply comical; it was probably mostly translated via MT. Yet, it was an amazing book.
I'm not really sure how machine translation being less than perfect is related to whether or not the end of classical CS is near - unless your argument is that because ML-based translation is bad now, it will never be good without developments in classical CS. But look at Stable Diffusion. If you had taken a GAN a few years ago and looked at its generative-art potential, you could have made the same argument: state-of-the-art ML (at the time) is not good at generative art, therefore classical CS is still relevant. Of course, we now know that's not a true statement.
I think humans being marginalized by their own inventions might be a longer-term consequence. Short term, we're still dealing with growing demand for things where people with skills are more effective than any AI. And ironically, there's a lot of demand right now for people who can do productive but low-level work with AI.
Ten years ago, you needed a team of PhD propeller-heads to do anything with AI. These days, what you need is a lot of data engineers capable of moving data around efficiently via scripts, and people who can use the off-the-shelf stuff coming out of a handful of AI companies. It's like database technology: you don't have to have a deep understanding of databases in order to use them. I can get productive with this stuff pretty quickly, and I need a working knowledge of what's out there in order to lead others to do this stuff.
The consequences of a general AI, or even something close enough to that, coming online would be that, pretty soon after, we'd put that to use to do things currently done by really smart humans. Including programming things. The analogy is maybe that as an executive of a large tech company, you don't necessarily have to be a hard core techie yourself. You can delegate that stuff to "human resources". Adding AI resources to the mix is going to naturally happen. But it will be a while before that's cheap and good enough to replace everybody. For the foreseeable future, we'll have a growing number of AI resources but it will be relatively expensive to use them and we'll use them sparingly until that changes.
I totally disagree... Watching actual development, I instead think the era of end-user programming will come back. Really.
Look at ANY large enough project: no matter whether it's a kernel or a GUI desktop application, at a certain point ALL of them try to integrate this, that, and the other, becoming monsters. The original desktops were designed as a single OS-application-framework where "applications" were just code added to the core image. That's the missing level of integration we can't achieve in modern systems, and that's why all complex software becomes a monster, trying to circumvent the lack of integration by adding features directly.
Unix at first succeeded over the classic systems by claiming they were too complex and expensive, and that separating "the system" from "users" was the best cheap and quick solution. Then they backpedaled, violating the Unix KISS logic with X11 and GUIs, libraries, frameworks, etc., because the KISS principle does not scale. Widget-based GUIs were born and succeeded over document-oriented UIs by claiming those were too complex and expensive. The modern web proves they were wrong. In another ten years I think we will come back to Xerox...
And other things the author likes to tell themselves... or perhaps they enjoy building clout by saying outrageous things... yawn
Sure, there will be obsolete concepts, algorithms, and plenty of AI assistance, but programming is building a state machine - like a house in an abstract space that powers machinery to accomplish tasks of value using a general-purpose computation device. Computer science informs and is informed by a craft (programming), and that craft can only be replaced by another craft (whatever that is rests in the imagination of the author). You're still doing creative work, and you will only be as effective as your abilities in the practice of applying theory. The theory will not be "use AI" or "don't learn computer architecture lmao what a nerd".
> A time traveller from even 20 years ago would have a hard time making sense of the three sentences in the (75-page-long!) GPT-3 paper that describe the actual software that was built for the model
First off, the only one of those three sentences that a 2002 researcher would be stumped by is the first, and that solely due to the unfamiliar nouns. The other two sentences are perfectly classical, and the only difficulty one of the ancients would have is putting their eyes back in after they popped out on seeing the model sizes.
Second, isn't that good? It means the field has advanced, and there are new concepts being used, which I'd have thought is exactly what we want.
Third, how different is this than the past? Would a time traveller from 1982 be equally stymied by a paper from 2002? How about 1962 to 1982?
Meh. I read about neural networking in the 1990s, and it has been around as a concept from the 1940s.
In 1943, Warren McCulloch and Walter Pitts created a computational model for neural networks. One approach focused on biological processes, while the other focused on the application of neural networks to artificial intelligence.
The author stated that a three-sentence passage would not have been comprehensible 20 years ago, but that's not true. Anyone could have understood what they were saying from the context.
Code that writes code is what he is essentially talking about - heck, I wrote such a thing when I was a freshman back in 1980 - because code that writes code is what it all boils down to. Using fancy words might get someone a job promotion for using the current buzzwords, but they're still just buzzwords.
At best, you'll reduce the time spent writing code at the cost of greatly increasing the time spent writing tests to give you sufficient confidence that your autogenerated code actually does what it is supposed to.
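For instance (just a sketch, with a trivially simple stand-in for whatever the generator produced), the confidence ends up coming from property-style tests you write around the black box:

    from collections import Counter
    import random

    def generated_sort(xs):
        # Stand-in for an opaque, auto-generated function.
        return sorted(xs)

    def test_generated_sort(trials: int = 1000) -> None:
        for _ in range(trials):
            xs = [random.randint(-100, 100)
                  for _ in range(random.randint(0, 20))]
            out = generated_sort(xs)
            assert Counter(out) == Counter(xs)                # same elements
            assert all(a <= b for a, b in zip(out, out[1:]))  # in order

    test_generated_sort()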
Better and better code gen might mean you can review and give the OK on AI-created PRs. If something goes wrong, you can either intervene manually or turn to the AI again for a fix.
As a great example of what "classical" computer science can do, take a look at the driverless metro introduced in Lille in 1983 [1]. Researchers in France used formal systems to prove system correctness, and this shows in the reliability and safety of the metro. I like to think that this is a better way to handle complex problems than to just throw data and algorithms at them and hope they will work correctly on new data.
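For flavour, here is a toy version of that mindset (nothing like the actual techniques used for the Lille metro, just the idea): enumerate every state of a tiny, invented train/door model and check that a safety invariant holds across all allowed transitions, rather than hoping it holds for the inputs you happened to test.

    # Toy illustration of "prove it, don't hope": exhaustively check a safety
    # invariant over a tiny, invented train/door model.
    from itertools import product

    SPEEDS = range(0, 4)          # 0 means stopped
    DOORS = ("open", "closed")

    def allowed(speed, doors):
        # Invented controller rule: doors may only be open while stopped.
        return not (doors == "open" and speed > 0)

    def step(speed, doors):
        # All transitions the invented controller permits from a state.
        nexts = []
        if doors == "closed":
            nexts += [(s, "closed") for s in SPEEDS if abs(s - speed) <= 1]
        if speed == 0:
            nexts += [(0, "open"), (0, "closed")]
        return nexts

    def check_invariant():
        for state in product(SPEEDS, DOORS):
            if not allowed(*state):
                continue
            for nxt in step(*state):
                assert allowed(*nxt), f"unsafe transition: {state} -> {nxt}"

    check_invariant()  # every permitted transition keeps the doors safe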
The guy doesn't know much DL, has only recently learned it, and is simply hyped. It's that classic curve of how much you think you know vs. how much you actually know, and all the excitement that comes with knowing little.
There will always be a specialization of CS as other segments splinter from it. It might not be as large, but it will be there. The reference to programming being in a death spiral is a bit of a stretch. There will be common mechanisms for similar processes, but business always needs customization, so I wouldn't worry about the Linux kernel dying anytime soon. What I'd expect is that programming tools become even more forgiving than today, allowing almost anybody to do programming tasks with computer aid.
I'd have more faith that we can eventually automate programming if we had actually succeeded at automating a well-scoped problem domain such as accounting.
It'll be interesting to see who's right: Matt or Brooks. Matt essentially argues that AI can be trained to take the responsibility of specifying a system, which is the opposite of what Brooks argues in his essay No Silver Bullet.
Of course, I'm assuming that we are writing programs to specify what and how a system should work. It could be that AI (but not AGI) is so advanced that specifying a system can be compressed into training a model.
A computer program must produce correct outputs 100% of the time. Most AI-assisted things I know of are buggy black boxes. I wouldn't bet my business on that.
Progress in deep learning has been quite astounding over the last few years, but the output is still very dreamy, fuzzy, inexact, etc., as though, e.g. for image generation, the pixels represent individual neurons and you're viewing the "dream state" of the network.
I think actual programming requires something more concrete; the 'atoms' of a program are not text letters or pixels, but something more abstract, more exact. I think once deep learning incorporates a symbolic or logic system of some kind, that might be a solution, but then that will apply not only to programming. All IMHO.
There is a reason we are getting robot dogs with guns on their backs at the same time AI is advancing: once AI crosses a certain line, it is going to be powerful enough to nullify most jobs. It's not science fiction.
What happens when you have a NN that understands how to integrate new physical input and render usable actions for creating outputs without human intervention? That's where we get machines building machines.
What happens when we start using AI to find the best recreational drugs? How about recreational drugs designed for specific kinds of overdoses - like crumple zones on a car? Or using AI to find the best cocktail of psychedelics that allows us all to work stoned, to maximum benefit, all day long without diminishing returns?
Finally, what happens when these AI can layer themselves together through transfer protocols and problem solving distribution without us telling them to? A self-analyzing, self-correcting and self-improving system can be considered a kind of life.
> What happens when you have a NN that understands how to integrate new physical input and render usable actions for creating outputs without human intervention?
You're going to have to define "understands" and explain how to get there from where the technology is right now, because a model is a statistical artifact and doesn't "understand" anything, including its inputs and outputs.
> How about recreational drugs designed for specific kind of Overdoses - like crumple zones on a car?
Why would anyone want an overdose?
> Or using AI to find the best cocktail of psychedelics that allow us all to work stoned and to maximum benefit all day long without diminishing returns?
Who benefits from this? Because it doesn't sound like it would benefit the people doing the work.
> We really are very close now.
Just a few more puffs and I'm sure you'll have the solution.
>> Understand means to sense, interpret, analyze, articulate and affect the real world.
Why would anyone want a car accident? Not quite following me huh? Seems like maybe you just want to argue.
Who benefits from humans working on stimulants and psychedelic drugs? How about the entire 20th/21st century? Are you seriously pretending everything from caffeine to opium hasn't been a driving force of production?
The sensing machines we are building parallel, in their design and utility, the functional parts of our brains. We've networked them all together already, and all we need is a simple spark to set it all in motion.