I share the same view. Based on the jobs in my area, everyone is using Python to prototype and Java/Scala to productionize. I've also seen Node.js increasingly being used, probably for things that are I/O-intensive rather than compute-bound. Rust could change the landscape for Node/Deno: memory- or CPU-intensive critical code could be exported to a WASM library, or to an external dependency called through FFI (though that would be very slow, and you reintroduce the two-language problem that Julia is trying to fix for Python). Anyway, even if we are in a much better state today than, say, five years ago when it comes to picking tools for data engineering, I strongly believe the JVM is here to stay for a long, long time.
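As a rough sketch of what that Rust-to-WASM path might look like: a CPU-heavy kernel written in Rust, which Node/Deno could load as a WASM module instead of doing the math in JavaScript. The function name and shape are illustrative; with wasm-bindgen it would carry a `#[wasm_bindgen]` attribute, but the plain version below also compiles natively, which keeps the example runnable as-is.

```rust
// Hypothetical hot path you'd move out of JS and into Rust.
// Built with `wasm-pack build`, the same function would be callable
// from Node/Deno via the generated WASM glue.

/// CPU-intensive kernel: dot product over two slices.
pub fn dot(a: &[f64], b: &[f64]) -> f64 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

fn main() {
    let a = vec![1.0, 2.0, 3.0];
    let b = vec![4.0, 5.0, 6.0];
    println!("{}", dot(&a, &b)); // 1*4 + 2*5 + 3*6 = 32
}
```

The point of keeping the kernel free of JS-specific types is that the same code can be unit-tested natively and only compiled to the `wasm32` target at release time.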
but have found that only companies at the largest scale want to pursue the distsys approaches I'm most interested in (e.g., gossip protocols/CRDTs, consensus algorithms).
Can this be disproven? Are there companies of smaller scale that are attempting to solve these problems?
I would also say, having worked at a company of smaller scale trying to solve distsys problems, that they're hard to sell. But the distsys skill set does seem (to me) marketable at the large cloud providers.
There are definitely companies at smaller scale that work on these problems. I would say that the only time it really feels solved to me is if the scale is very large.
Online courses may be helpful, but I recommend applying for a junior development position and seeking immediate employment. Professional experience will impart far more technical knowledge than coursework.
I'm not in computational work as a dev, but I would argue that coursework and how things work in the professional world are two different ballgames. Jumping into the deep end isn't always bad advice =)
I also agree with you at the same time. Just playing devil's advocate!
1. Install Duolingo on your phone
2. Change your phone's microphone default language to Spanish
3. Practice by dictating Spanish directly to your phone without typing