The integration of the actual Twilio API into the lessons/levels was impressive, but otherwise I found TwilioQuest quite "meh."
It teaches programming OK, but it's not a very interesting or fun game. It's more like programming lessons with game elements tacked on, so you waste a lot of time doing boring "game" things like walking around a map looking for stuff. You collect items like weapons and armor, but 99% of them don't affect the game in any way.
I 100% completed the Twilio API levels and the API academy. It wasn't fun, so I didn't bother trying the JavaScript/Python modules. (It was also a little too basic for my level. TwilioQuest may be more engaging for kids who don't yet know how to program.)
---
This doesn't exist yet, but I'd like to create a VR experience that teaches programming. You can write "spells" that use the magic of loops and variables to generate objects in VR and automate them.
For example, a quest like building a forest of 10,000 trees could be done manually, but writing a VR "spell" would be the smarter, faster way. That spell could later be reused or modified for other tasks, or for general VR content generation. It's not just a game; it's a platform for VR content generation.
I'm asking because I want to make a game about learning programming and wanted to see what the market has come up with so far.
So far there doesn't seem to be a massively popular game on the topic, probably because most educational games feel forced.
That's the impression TwilioQuest gave me after watching Let's Plays on YouTube.
Anyway, your idea sounds interesting; however, to know whether people would actually like it, you'll need to prototype it somehow.
Have you tried https://codecombat.com? It's a YC startup, I believe, and probably the closest to what you're looking for. I tried it back in the day, and it was OK.
It's not exactly programming, but I think https://vim-adventures.com/ is the best example of a game that teaches a skill.
Just making a good game is difficult. Just making good teaching content is difficult. So trying to do both simultaneously will be at least twice as difficult.
---
As for my VR scripting idea, teaching programming isn't the main goal. When VR becomes the next social media, content creators will either need to do a lot of manual work constructing experiences or learn how to program. It's really hard and messy to create an interactive VR experience without knowing how to program.
So my VR scripting tool/platform will solve a painful problem. A lot of creators won't be programmers, so I think teaching them how to program/use the tool via a game would be helpful.
There are a few easier built-in ways to do some of the file handling you've implemented: instead of `open(f -> read(f, String), "input")` you can use `readchomp("input")`. There's also `readlines`, which is equivalent to `open(f -> read(f, String), "input") |> f -> split(f, "\n")`, and `eachline`, which is like `readlines` but returns an iterator instead of an array.
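A quick sketch of all three, assuming a plain text file named `input` in the working directory:

```julia
# Whole file as one String, trailing newline stripped:
data = readchomp("input")

# Vector of lines, newlines stripped:
lines = readlines("input")

# Lazy line-by-line iteration, handy for large files:
for line in eachline("input")
    println(line)
end
```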
There are also curried forms of the comparison operators that can make functional code easier to write: instead of `x -> x == "0"` you can write `==("0")`. Another useful trick is that you can broadcast pipes, so instead of `x |> xs -> f.(xs)` or `x |> Map(f)` you can write `x .|> f`.
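For example (the inputs here are just made-up toy data):

```julia
# ==("0") builds a predicate equivalent to x -> x == "0"
count(==("0"), ["0", "1", "0"])   # => 2

# Broadcast pipe applies the function elementwise,
# instead of ["a", "b"] |> xs -> uppercase.(xs)
["a", "b"] .|> uppercase          # => ["A", "B"]
```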
It's funny that the first section is about everything loading as a matrix and not knowing the easy way to load things as a vector; I had the opposite problem. For every puzzle involving a matrix, I've ended up building a giant vector and then reparsing the first line to figure out what dimensions to reshape it to. I'll have to try `readdlm`; I'm currently working through the older AoC years in Julia.
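A sketch of both approaches, assuming a comma-delimited numeric file for `readdlm` and a dense character grid for the reshape trick (the file names are placeholders):

```julia
using DelimitedFiles

# readdlm loads a delimited file straight into a matrix:
grid = readdlm("numbers.txt", ',', Int)

# The "giant vector, then reshape" approach for a character grid:
lines = readlines("grid.txt")
width = length(lines[1])                       # reparse the first line for dimensions
flat  = collect(Iterators.flatten(lines))      # Vector{Char}
grid2 = permutedims(reshape(flat, width, :))   # rows now match the file's lines
```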
I made a synth a few years back that lets you use a bytebeat formula as a signal generator. The `t` variable, which in a typical bytebeat represents time at an 8kHz sample rate, is instead proportional to the frequency of the pressed key's pitch. There's also a `tt` variable, "tempo time", whose rate is proportional to a global tempo, making it easy to write beat-synced rhythms. The bytebeat itself can be written in JavaScript, WebAssembly Text format, or a little stack-based language (writable as either RPN or S-expressions) that compiles to Wasm; it's basically a thin wrapper around the Wasm semantics, which is why it doesn't include the DUP and SWAP commands from the original Glitch machine. It works with WebMIDI too, so you can hook up a keyboard to it.
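For anyone unfamiliar with bytebeat: you evaluate an integer formula at each sample index `t` and keep the low 8 bits as audio. A minimal sketch in Julia, using one of the well-known formulas from the original bytebeat videos (not the formula from the synth described above):

```julia
# Classic bytebeat: evaluate an integer formula at sample index t,
# keep the low 8 bits, play back at 8kHz (one t per sample).
bytebeat(t) = (t * ((t >> 12 | t >> 8) & 63 & t >> 4)) & 0xff

# One second of 8kHz audio as unsigned bytes:
samples = UInt8[bytebeat(t) for t in 0:7999]

# In the synth above, t would instead advance in proportion to the
# pressed key's frequency, so the formula tracks the keyboard's pitch.
```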
Most of the Web Audio stuff has already been present in all major browsers for a little while; this mostly standardises what's already there. The main thing it brings that until now was Chrome-only is the AudioWorklet, so real-time, low-level audio processing in JS worklets will work cross-browser once it's implemented in the other browsers. At the moment it's very difficult to do low-level audio processing off the main thread in non-Chromium browsers.
AudioWorklet (which lets you process audio sample-by-sample in a dedicated, high-priority "thread") is available and works well in Firefox and the latest Safari. I haven't tried it in Edge, but I believe it's also working well there.
This is true for many web features, including basic PWA features like web push notifications, URL capture, etc.
As a mobile-first web developer, I find iOS is the thing that's really holding us back.
The very frustrating thing about this is that the end user tends to blame the app, and there's no good way to tell them, "Hey, we would totally do this, but mobile Safari has limited functionality and Apple won't allow browsers with other engines to be installed."