I think what people writing articles like this tend to miss is that it's much easier to be super deep in a field when the field is limited and low-entry but you're already in it. There's not really as much going on, and there's not much else to do but learn C or some text editor on a super deep level or whatnot. What else are you going to do? Look at the stuff "deep" people are generally into: it mostly revolves around POSIX in some way or another. And databases, but nobody wants to talk about that.
But today, there are hundreds of languages, a whole bunch of frameworks per language, various tools, constantly changing standards, etc. The available landscape is absolutely staggering. If you want to deeply focus, you need to pick what to deeply focus on, which is a rather tough choice and a questionable one, because the thing you focused on might become obsolete.
> if you want to make a tiny little difference in the industry and change the world just a little bit, then you do need that degree
And what sense does THIS make? Among the people I know who _do_ deeply get into some specific CS topic, many don't have degrees, because they're often people who are not fans of structure and ended up doing what they want, as opposed to what might be beneficial for career purposes.
This just seems to be heavily misguided elitism.
If you really want to know why the quality of software, and basically everything else, has gone down, just look at market incentives and you'll find that to be an utterly boring question.
> This just seems to be heavily misguided elitism.
Not really. There is a bit too much meet-you-at-the-bottom communism in the modern ethos of computer science. When you can measure difficulty and time-to-skill in a field, isn't it a little odd that we don't have a lot more measured elitism in computer science? The reality is that the hardest, most complex stuff is solved and produced by an elite few. All of it could be taught or learnt, but no one gives a shit.

It's a bit like magic slowly dying from the universe while the wizards keep suggesting it might be worth holding onto. But then they get called out as an elite minority only interested in furthering their arcane agenda, whilst everyone else uses the accessible modern "technology" built from the original magic and cannot fathom why anyone should give a shit about magic anymore. Wizards only appeared special to get a cushy job next to the king in their own tower, right? Technology is just as good as magic, because it was built from magic! So elitist.

The problem comes when the technology fails, or doesn't do something that's needed and cannot be changed without changing the base magic the technology started from. If there are no more wizards and no more magic, you're never going to be able to create new base technologies. The only hope is the magic-making technology everyone is currently working on, called Machine Learning. Then all the wizards can be virtualised and controlled like slaves, and even if it's provable that ML isn't actually magic, it's close enough... we hope.
> The reality is that the hardest, most complex stuff is solved and produced by an elite few. All of it could be taught or learnt, but no one gives a shit.
The elite few don't really want anyone joining them, so nobody does. Look at the state of academia, and look at the lack of training in jobs. Nobody wants anyone to be elite, so people don't bother; there's no benefit in it. Your problem is that you think you're important, that you think the most useful contribution from a person is what they do personally, but all that does is advance you, personally.
The thing is, the elite are often much more worried about being elite than about what they're actually doing. Once you see that, you know the incentive is corrupt. It's a status thing for them, and how dare anyone challenge their status. That's really all this is. It's hardly about the advancement of technology, because if it were, those people would be out there teaching, or trying to address the information overload, not looking smug. It's elitist because it utterly disregards most of human experience and presents yours as superior. Your entire argument ultimately derives from that view, and pretty much everything you say after that could be anything at all, as long as it supports your idea that you're superior. That's why elitism is bad: it's destructive to one's conception of reality.
I know plenty of people who don't look at things as magic but who also don't consider themselves "wizards". Maybe you should try getting off your high horse, talking to people some, and figuring out what it is they actually do all day; you'll understand how silly everything you're saying here is. But as with everything else, it's easier to sit on top than to try to understand.
Snobbery is all this is, and likely unearned; I can't say the association between perceiving yourself as elite and actually being elite is very strong at all.
> What else are you going to do? Look at the stuff "deep" people are generally into: it mostly revolves around POSIX in some way or another. And databases, but nobody wants to talk about that.
Self-driving cars (controls, perception, planning, safety, sensor design, localization, mapping, integration, security, etc.), human-competitive NLP/image classification, advanced robotics (repeat list from cars here but for things in the air, off-road, on the water, in the water, in orbit, in deep space, ...).
Those are all examples of real things that real people get to work on every day.
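To make "controls" from that list concrete, here's a toy PID loop, roughly week-one material in a controls course. It's only a sketch: the gains and the one-line "plant" below are made up for illustration, and a real vehicle controller buries this idea under estimation, actuator limits, and safety cases.

    #include <stdio.h>

    /* Toy PID controller driving a scalar "speed" toward a setpoint.
       The gains and the one-line plant model are made up for illustration. */
    typedef struct { double kp, ki, kd, integral, prev_error; } pid;

    static double pid_step(pid *c, double setpoint, double measured, double dt) {
        double error = setpoint - measured;
        c->integral += error * dt;
        double derivative = (error - c->prev_error) / dt;
        c->prev_error = error;
        return c->kp * error + c->ki * c->integral + c->kd * derivative;
    }

    int main(void) {
        pid c = { .kp = 0.8, .ki = 0.2, .kd = 0.05 };
        double speed = 0.0, dt = 0.1;
        for (int i = 0; i < 20; i++) {
            double u = pid_step(&c, 10.0, speed, dt); /* target: 10 m/s */
            speed += u * dt;                          /* crude first-order plant */
            printf("t=%.1fs speed=%.3f\n", (i + 1) * dt, speed);
        }
        return 0;
    }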
Operating systems topics don't even scrape the top 20 of stuff I think of when I think of deep expertise.
And even if we limit ourselves to the sort of things you mention, most people who work deeply on languages, tools, and standards view these as manifestations of their deep exploration rather than the focus or subject of the dive.
For example, TensorFlow. The framework very much is the product. But even if some other framework won the day tomorrow, the people who worked on TensorFlow would not have "wasted" their time thinking deeply about how to build the system.
This is why researchers whose original contributions were made in the 70s and 80s nonetheless continue to establish themselves as sought-after experts in new technology trends (e.g., Leslie Lamport and cloud computing, or Martin Abadi and ML frameworks). They were focused on ideas and fundamental problems. The problems never disappeared, they just changed form. And ideas have a lot more staying power than their manifestation in code.
Most people working deeply on systems today are not "revolving around POSIX in some way". See the proceedings of OSDI. And most deep experts choose other topics, most of which your post doesn't mention: graphics, programming languages (making them and analyzing programs written in them), compilers, security, robotics, user interfaces, NLP, ...
Your definition of "expert" seems to revolve around using things, mostly things based on ideas and techniques that were well-understood already 20 years ago and that are related to building a particular type of software system. Which, if anything, seems to deepen the author's point.
Someone gets to fill the AI research labs, staff the self-driving car companies, work at NASA, build core infra at large tech companies, and build the foundation for the next 20 years of trendy growth areas.
It's possible to get to those places without a degree, of course, but a degree is by far the path of least resistance. And in most of these cases, learning the material from the degree isn't optional; you're probably going to have a hard time doing that controls engineering job at a self-driving car company if you never made your way through a calc sequence, some physics, and an algorithms course.
It's also worth noting that very often, building WordPress plugins pays more than doing all of those things I mentioned. I guess it's all about what you want to spend your life doing, which is exactly what the author says at the end of the article.
Self-driving cars are new. You've missed my point. The OP is from an older age. There wasn't a lot back then, so when it came to choosing what to focus on, there weren't a lot of choices.
Funny you mention graphics: what do graphics usually involve? C, C++. Back to POSIX. Operating systems? Same. Robotics? C. Basically, if you picked C, you're good.
On the other hand, how often do you hear about a Java expert? Someone who knows the intricacies of the GC? All about the JVM? Not that often. They exist. But it's not hip. They chose "poorly".
These days, there is a lot more choice, so what are you going to pick? On what is that choice based? What do you do if you picked the wrong thing? This problem is very modern and didn't exist to such a degree before.
> Self-driving cars are new. You've missed my point. The OP is from an older age. There wasn't a lot back then, so when it came to choosing what to focus on, there weren't a lot of choices.
RALPH, a 1990 Pontiac Trans Sport minivan, drove across the US 98% autonomously. In 1995.
Much of the control theory, robotics, and AI work that enabled the current self-driving gold rush was invented decades ago.
The Dartmouth workshop was in 1956. Dearth of choices? Please! Those days provided an enormous surplus of choices, almost all of which were good ones! People from that older age invented reinforcement learning, image classification, natural language processing, OOP, hell, even the notion of a pointer! And the list goes on.
Today there are far fewer choices than there were back then because so much has already been done.
That is, of course, assuming you're in the business of "doing things no one else has done before" as opposed to the business of "following a well-trodden path".
So, I guess if your view of the world is confined to "using things other people already invented and explained to me", you might consider the 1950s and 1960s a bleak period when no one knew how to do anything. As opposed to the cusp of a century-long period of continuous innovation...
Again, as the article says, I guess it boils down to what you want to get out of a lifetime of work.
> Funny, you mention graphics, what do graphics usually involve?
Rendering pipelines, rasterization, ray tracing, shading and illumination models, texture mapping, geometry processing, animation, and the linear algebra underneath it all. And that's just the undergrad stuff, not the cutting edge.
Oh yeah, knowing C/CUDA/OpenCL is nice. But when compared to deep expertise, it's a rather trivial time investment and is completely orthogonal: an implementation detail, not the fundamental content.
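To illustrate with the simplest possible example: Lambertian diffuse shading, week-one material in a graphics course. The vectors below are made up for illustration. It happens to be C, but the content is the geometry, not the language:

    #include <math.h>
    #include <stdio.h>

    /* Toy Lambertian (diffuse) shading: brightness is proportional to the
       cosine of the angle between the surface normal and the light direction.
       The vectors are made up for illustration. Link with -lm. */
    typedef struct { double x, y, z; } vec3;

    static double dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    static vec3 normalize(vec3 v) {
        double len = sqrt(dot(v, v));
        return (vec3){ v.x/len, v.y/len, v.z/len };
    }

    int main(void) {
        vec3 normal   = normalize((vec3){ 0.0, 1.0, 0.0 }); /* surface faces up */
        vec3 to_light = normalize((vec3){ 1.0, 1.0, 0.0 }); /* light off to the side */
        double brightness = fmax(0.0, dot(normal, to_light));
        printf("diffuse brightness: %f\n", brightness);     /* about 0.707 */
        return 0;
    }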
> Operating systems?
Kernels, scheduling, device drivers, caching, distributed systems, energy models, timing attacks, and the list goes on.
Of course, knowing C is essential, but that's the easy stuff when compared to wrapping your head around a modern OS, or even a tiny piece of a modern OS.
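Case in point, a toy round-robin scheduler, sketched below with made-up task names and workloads. The C is trivial; what's hard about real scheduling (fairness, priorities, preemption, multicore, power) isn't a C problem at all:

    #include <stdio.h>

    /* Toy round-robin scheduling: each task runs for a fixed time quantum,
       in turn, until its remaining work is exhausted. Task names and
       workloads are made up for illustration. */
    typedef struct { const char *name; int remaining; } task;

    int main(void) {
        task tasks[] = { {"editor", 5}, {"compiler", 9}, {"daemon", 3} };
        const int n = 3, quantum = 2;
        int unfinished = n;
        while (unfinished > 0) {
            for (int i = 0; i < n; i++) {
                if (tasks[i].remaining <= 0) continue;
                int slice = tasks[i].remaining < quantum ? tasks[i].remaining
                                                         : quantum;
                tasks[i].remaining -= slice;
                printf("run %-8s for %d ticks (%d left)\n",
                       tasks[i].name, slice, tasks[i].remaining);
                if (tasks[i].remaining == 0) unfinished--;
            }
        }
        return 0;
    }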
> Robotics?
SLAM, sensor fusion, filters, actuation for various types of novel actuators, PDEs and ODEs, optimal control, stability and robustness, system identification, model-predictive control, motors, servos, simulation, etc. And that's just the software side.
Oh yeah, knowing C is nice. But when compared to deep expertise in robotics, it's a rather trivial time investment and is completely orthogonal: an implementation detail, not the fundamental content. Many of the fundamental ideas and techniques in robotics pre-date C by decades.
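To pick "filters" from the list above: here's a minimal 1-D Kalman filter, with all the noise parameters and readings made up for illustration. The fundamental content is Kalman's 1960 probability math, which pre-dates C itself; the C is incidental:

    #include <stdio.h>

    /* Toy 1-D Kalman filter: estimate a constant value from noisy readings.
       All variances and readings are made up for illustration. */
    typedef struct {
        double x; /* state estimate */
        double p; /* estimate variance */
        double q; /* process noise variance */
        double r; /* measurement noise variance */
    } kf1d;

    static void kf1d_update(kf1d *f, double z) {
        f->p += f->q;                    /* predict: uncertainty grows */
        double k = f->p / (f->p + f->r); /* Kalman gain */
        f->x += k * (z - f->x);          /* blend prediction and measurement */
        f->p *= 1.0 - k;
    }

    int main(void) {
        kf1d f = { .x = 0.0, .p = 1.0, .q = 1e-5, .r = 0.1 };
        double readings[] = { 0.9, 1.1, 1.05, 0.95, 1.02 }; /* noisy sensor */
        for (int i = 0; i < 5; i++) {
            kf1d_update(&f, readings[i]);
            printf("estimate after reading %d: %f\n", i + 1, f.x);
        }
        return 0;
    }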
> On the other hand, how often do you hear about a Java expert? Someone who knows the intricacies of the GC? All about the JVM? Not that often. They exist. But it's not hip. They chose "poorly".
I know a few true, honest-to-god Java experts. They all make insane amounts of money (even by SFBA SE standards) and love their work. Turns out Google has quite a bit of Java code and a metric shitload of money.
You think graphics, robotics, and OSes are just "C and POSIX". That's not true. C and POSIX aren't even table stakes. They're the thing you pick up in a few weeks or maybe a semester so that you can spend several years obtaining the table stakes -- see the list above. Then you need to build true expertise on top of that.
The path from "I know C" to "robotics expert" or "graphics expert" is at the very least a multi-year path. And that's assuming you're bright and have your full work day (and then some) to dedicate to following advances and building your own.