I met his dad, who was a professor emeritus at University of Michigan's nuclear engineering department. He wrote the classic textbook on radiation detection.
My undergrad math professor created one of the first fully online linear algebra texts: http://linear.ups.edu/html/fcla.html It's integrated with Sage, a Python-based computer algebra system used for (among other things) number theory. Another prof at the same university also wrote his own linear algebra book, using a lot more illustrations, but as a traditional textbook.
I see this book as a solid evolution in both directions. Nicely done!
I suggest you try newer devices for reading papers. The perception that paper is a better medium often comes from a lack of more convenient devices. Paper is certainly better than a 15'' screen, for many reasons including size and reading posture. But have you tried larger screens (> 27''), large tablets (>= A4), or the largest E-Ink readers you can find? Depending on your preferences, you might find that some of these actually work better than paper for you too :-)
There is no way I can perceive reading on an expensive device as more comfortable than paper. Paper is fairly cheap, lightweight and resilient; I can carry it around, fold it, toss it aside, sit on it by accident while thinking, annotate it with scribbles, and pour coffee on it with aplomb and finesse. I can flip it, half-tear it in anger, drool on it when I reach my brain capacity. I can take it hiking with me without fear of breaking or losing it. In other words, paper is a tool that gets out of my way.
I did try all the devices you listed above, even had my department pay serious money, and ended up barely using them for all those reasons. I am a mathematician, I am clumsy and I want to focus on my problem-solving; I want to think, and babysitting devices and tools is not what I want to spend my brainspace on.
In my experience those people don't even talk about ink; it's all about paper. They must think you get just a few sheets of paper from a single tree or something like that, when in reality you get something like 10 000 sheets from one average-ish tree. And those trees are not rare or anything, and the process of making paper is nowhere near as bad as the electronics industry. Using paper is about as ecological as it gets.
With the typical reading volume of an academic and the amount of plastic in my toner cartridges, I'm not sure paper comes out ahead in that comparison.
A high-yield toner cartridge can print between 3000 and 8000 pages of text [1]. The average scientific manuscript is about 10 pages [2], so it would take 300 to 800 printed papers to deplete one cartridge. I would assume a single toner cartridge is not the same amount of waste as a reading device, if only because cartridges are recyclable, but it is up to you how to count them. If I were to pull a number out of my ass, I would say 10 cartridges equal one reading device with its battery. Let's go low-end and pick 300 papers per cartridge, which means you would need to print 3000 full scientific manuscripts to equal the waste of one reading device. How many do you read in two years?
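To make the arithmetic explicit, here is the same back-of-envelope calculation in a few lines (every number is one of the assumptions above, nothing measured):

    pages_per_cartridge = 3000      # low end of the quoted 3000-8000 range [1]
    pages_per_paper = 10            # average manuscript length [2]
    cartridges_per_device = 10      # pure guess at the waste equivalence

    papers_per_cartridge = pages_per_cartridge // pages_per_paper          # 300
    papers_to_match_device = papers_per_cartridge * cartridges_per_device  # 3000
    print(papers_to_match_device)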
Paper enables random access to content that doesn't rely on vision (i.e. on a scroll bar): when you keep going back and forth between two pages, it is very annoying on all current devices except paper. Vision Pro/VR/AR or a particular multi-screen setup can achieve that, but so far none of the alternatives are as good.
I’m typing this comment on an iPad ;-) Which I love using for marking up papers and other documents for quick feedback. But doing research is another story.
Running Shor's algorithm requires essentially error-free logical qubits. Not a single logical qubit of that quality has ever been demonstrated. The Harvard results are impressive, but their logical qubits are worse than some physical qubits.
It is not enough to place qubits on a single chip or on a grid. We already know how to do that. The hard part is keeping them isolated while allowing arbitrary control of their interference.
The core issue that makes Shor's algorithm hard is that it requires exponential suppression of error with the number of qubits to produce meaningful results. We therefore don't actually see many results here, because error rates have not yet reached the thresholds needed to run it with more qubits. Error correction is not a panacea either, because it would have to be applied repeatedly to reach the threshold needed to run Shor's algorithm, and that would require an unreasonable number of physical qubits.
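For a sense of scale, here is a rough, textbook-style sketch of that scaling argument (the threshold value, the prefactor of 1 in the scaling formula, and the qubit-overhead estimate are illustrative assumptions, not numbers from any particular hardware):

    # Surface-code style estimate: logical error rate scales roughly as
    # (p_phys / p_threshold) ** ((d + 1) / 2) for code distance d.
    def required_distance(p_phys, p_target, p_threshold=1e-2):
        """Smallest odd distance d that pushes the logical error rate below p_target."""
        ratio = p_phys / p_threshold
        assert ratio < 1, "physical error rate must be below threshold"
        d = 3
        while ratio ** ((d + 1) / 2) > p_target:
            d += 2
        return d

    # Example: physical error rate 1e-3, target logical error rate 1e-12
    # (a ballpark often quoted for factoring-scale circuits).
    d = required_distance(1e-3, 1e-12)
    print(d, 2 * d * d)   # distance ~23, on the order of 1000 physical qubits per logical qubit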
This is what a minifier does, and those go even further to rename variables.
Another thing that should be pruned away entirely is data files, including all constant strings within the code, since humans should skip those when focusing on the algorithms.
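As a toy illustration of that kind of pruning (a minimal sketch with Python's ast module, not how any particular minifier actually works):

    import ast

    SOURCE = '''
    def gcd(a, b):
        """Euclid's algorithm."""
        banner = "constant string data"
        while b:
            a, b = b, a % b
        return a
    '''

    class StripData(ast.NodeTransformer):
        """Drop docstrings and assignments whose value is a constant string."""
        def visit_FunctionDef(self, node):
            self.generic_visit(node)
            body = node.body
            # remove a leading docstring expression
            if (body and isinstance(body[0], ast.Expr)
                    and isinstance(body[0].value, ast.Constant)
                    and isinstance(body[0].value.value, str)):
                body = body[1:]
            node.body = body or [ast.Pass()]
            return node

        def visit_Assign(self, node):
            if isinstance(node.value, ast.Constant) and isinstance(node.value.value, str):
                return None   # prune constant-string data entirely
            return node

    tree = StripData().visit(ast.parse(SOURCE))
    print(ast.unparse(ast.fix_missing_locations(tree)))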
At that point you pretty much have a highly compressed version of what you'd find in CLRS or any other algorithmic text.
This is why grants are really important. That usually means deliverables in a specific timeframe. To me, that elevates open source from a full-time hobby to a job.
Grants are great but are often not nearly enough, and they can vanish from one year to the next. You'd better secure other sources of income. Grants will also usually fund specific features of your product, not the whole thing.
Other kinds of income are also good ways to fund open source: services, consultancy, support, and even paid open source apps (which work particularly well for apps with enterprise-oriented features; it turns out it doesn't matter that the source code is available under a free software license if it's convenient enough to click and buy).
The behind-the-scenes documentaries for the prequels have aged well: https://youtu.be/da8s9m4zEpo?si=5y5gHUMxztwVzMny