If you liked Withnail & I, try "Naked" for the hard-edged & existential experience... you'll never forget "Naked" - it stays with you, for better or worse.
To support that last scenario, where the language service, debugger, simulator, and even package references can run entirely in the browser, we built the whole thing using Rust compiled to WebAssembly, and our VS Code extension runs as pure JavaScript and Wasm. If you're interested, you can dig into the implementation at https://github.com/microsoft/qsharp .
As another commenter mentioned, this is more like a physical limit on the efficiency of light capture. However, they also follow a scaling law relating angular resolution and diffraction. Smaller lenses mean more blur from diffraction, but smaller lenses also allow finer angular sampling. Many insects with compound eyes follow this scaling law, having the same (provably optimal) ratio of eye radius to lens size despite having very different eye sizes. William Bialek is a biophysicist who frequently uses this as an example.
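For a rough sense of the scaling law (a back-of-envelope sketch of my own, not taken from Bialek's work): the angular spacing between facets is roughly d/R and diffraction blurs each facet by roughly λ/d, so balancing the two gives an optimal facet diameter d ≈ √(λR).

    # Back-of-envelope sketch: balancing facet spacing d/R against diffraction
    # blur lambda/d gives an optimal facet diameter d ~ sqrt(lambda * R).
    import math

    wavelength = 0.5e-6   # ~500 nm, visible light
    eye_radius = 1.0e-3   # ~1 mm, roughly a bee-sized compound eye (my assumption)
    facet = math.sqrt(wavelength * eye_radius)
    print(f"optimal facet diameter ~ {facet * 1e6:.0f} um")  # ~22 um, close to real honeybee facets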
I put together a "High Performance Organizations Reading List" which includes a reference to "Reinventing Organizations: A Guide to Creating Organizations Inspired by the Next Stage of Human Consciousness" by Frédéric Laloux. You and "LeFantome" who wrote about Creo might find that book and other resources there of interest. Laloux's book provides examples of various companies with better management.
https://github.com/pdfernhout/High-Performance-Organizations...
Better and healthier organizations are possible, as Laloux writes about. They are rare though -- and difficult to sustain in our current Western society. As with Creo, you definitely need enlightened management or enlightened major shareholders to "hold the space" as Laloux writes.
Prosthaphaeresis works because an angle measure is inherently a type of logarithm. It’s the logarithm of a rotation, instead of the logarithm of a scale. Like other kinds of logarithms, this converts multiplication (composition of rotations) to addition (addition of angle measures).
If you want to multiply two numbers, you can treat them as rotations, take the logarithm of each (i.e. find the angle measure), add the two, then take the exponential.
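As a concrete (if toy) sketch of that idea: with numbers in [−1, 1] treated as cosines of angles, the product-to-sum identity cos(a)·cos(b) = ½[cos(a−b) + cos(a+b)] does the multiplication; historically, values were first scaled into that range.

    # Multiply two numbers by converting them to angles ("taking the logarithm of
    # the rotation"), combining the angles, and converting back.
    import math

    def prosthaphaeresis_multiply(x, y):
        # Only valid for |x|, |y| <= 1; old tables rescaled values to fit this range.
        a, b = math.acos(x), math.acos(y)                 # number -> angle
        return 0.5 * (math.cos(a - b) + math.cos(a + b))  # angle sums -> product

    print(prosthaphaeresis_multiply(0.3, 0.8))  # ~0.24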
Few people have tried Anki (a free alternative to SuperMemo). Even fewer have tried its add-ons, which is where it truly shines! Vanilla Anki is mostly good for text flashcarding, with the phone app helpful for making use of those little bits of downtime throughout the day.
Glutanimate has taken Anki to a whole new level. He created a painless UI for occluding any part of an image and making Anki flashcards out of it (Image Occlusion Enhanced). I used it to efficiently create and memorize thousands of anatomy flashcards. Ordered lists can be a real pain to memorize, but when it's crucial not to miss steps (as it is for medical OSCEs), Cloze Overlapper has worked very well.
Anki lets you share your decks with others, and medical students have collaboratively created very high-quality decks for studying for the various Step exams.
To be honest, I was always a fairly mediocre student. Big-picture concepts and figuring things out on the fly, no prob. But I've always struggled to nail down bits of information long-term or learn sequential information, meaning calc and ochem required inordinate amounts of time. No longer the case. Spaced repetition plus visual/text/auditory learning is a recipe for success.
The possibilities here should not be underestimated. Folks on Hacker News have talked about the value of information commons, and this right here is the next best example behind Wikipedia I have ever encountered. While quality collaborative decks exist for things like medical school, learning languages, and more, none exist specifically for the program I am in (PA school). I am currently creating my own with the goal of it being the core of a quality collaborative deck. I am using Gephi to spatially organize concept maps for diagnoses, complete with incidence and strength of associations when such information is available. I then export these maps to Anki, use Image Occlusion Enhanced to block out the information I want to recall, and voila: time-efficient first-time learning and long-term retention.
https://melscience.com/US-en/chemistry/ — chemical kits (with some very interesting reagents and experiments) by subscription, something new for your kids (and yourselves) for $30/month.
(not affiliated with them, just a happy customer)
…and, DIY genetic engineering kits: splice some genes and grow fluorescent yeast. https://www.the-odin.com/
…and, you can follow any of the numerous YouTube guides for building a DIY Wilson cloud chamber (with some isopropyl alcohol and dry ice), buy some uranium salts (perfectly legal in the US), and watch elementary particles with your own eyes!
…and, it is possible to open an actual Sigma-Aldrich account as a hobbyist researcher and buy nearly anything you want, after signing some papers stating that you won’t be making drugs, explosives and stuff! (their reagents are insanely expensive, though).
…and, you can build a fusor and do real nuclear fusion in your garage! (This is a big project with about $10,000 in the bill of materials, but it is still accessible to amateurs (some 14-year-old kids have built working fusors), and there is a great and supportive community of people who have already built their own neutron sources.)
…and you can construct a homemade nitrogen laser! It really works, and it is very dangerous. There are other amateur laser designs as well.
While trying to learn the latest in Deep Reinforcement Learning, I was able to take advantage of many excellent resources (see credits [1]), but I couldn't find one that provided the right balance between theory and practice for my personal experience. So I decided to create something myself, and open-source it for the community, in case it might be useful to someone else.
None of that would have been possible without all the resources listed in [1], but I rewrote all the algorithms in this series of Python notebooks from scratch, with a "pedagogical approach" in mind. It is a hands-on, step-by-step tutorial about Deep Reinforcement Learning techniques (up to ~2018/2019 SoTA), guiding you through theory and coding exercises on the most widely used algorithms (Q-Learning, DQN, SAC, PPO, etc.).
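To give a flavour of where the notebooks start, here is a minimal tabular Q-Learning sketch against a toy stand-in environment (illustrative only, not code from the repo):

    # Tabular Q-Learning: nudge Q(s, a) toward r + gamma * max_a' Q(s', a').
    import random
    import numpy as np

    n_states, n_actions = 16, 4
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, eps = 0.1, 0.99, 0.1

    def step(state, action):
        # Placeholder dynamics; the notebooks use proper environments.
        return random.randrange(n_states), random.random(), random.random() < 0.05

    state = 0
    for _ in range(10_000):
        # epsilon-greedy action selection
        action = random.randrange(n_actions) if random.random() < eps else int(Q[state].argmax())
        next_state, reward, done = step(state, action)
        target = reward + gamma * Q[next_state].max() * (not done)
        Q[state, action] += alpha * (target - Q[state, action])
        state = 0 if done else next_state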
I shamelessly stole the title from a hero of mine, Andrej Karpathy, and his "Neural Networks: Zero to Hero" [2] work. I also meant to work on a series of YouTube videos, but I haven't had the time yet. If this post gets any interest, I might go back to it. Thank you.
P.S.: A friend of mine suggested I post here, so I followed their advice: this is my first post, and I hope it properly abides by the rules of the community.
Thanks a lot, and another great suggestion for improvement. I also found that the common advice is "tweak hyperparameters until you find the right combination". That can definitely help. But usually issues hide in different "corners": the problem space and its formulation, the algorithm itself (e.g., different random seeds alone can produce large variance in performance), and more.
As you mentioned, in real applications of DRL things tend to go wrong more often than right: "it doesn't work just yet" [1]. And my short tutorial definitely lacks in the areas of troubleshooting, tuning, and "productionisation". If I carve out time to expand it, this will likely be at the top of the list. Thanks again.
One of my old registrars co-founded this company: https://tortus.ai. They are doing a trial at Great Ormond Street at the moment - I haven't tried what they're building but it's an AI assistant that reduces some of the admin burden.
I am really hopeful that systems like this will take off – the reality of being a junior doctor in the UK is that most of your time is spent on quite tedious admin tasks (documenting every patient interaction, filling in forms, booking clinics etc.) using very slow & outdated computer systems. I don't think anyone expects this when they apply to medical school, and it can be quite demoralising when you start your first job.
"Responses by GPT-4 and clinicians were collected and compared. Differential diagnoses were also generated using a medical diagnostic decision support systemIsabel DDx Companion; Isabel Healthcare)
Six patients 65 years or older (2 women and 4 men) were included in the analysis. The accuracy of the primary diagnoses made by GPT-4, clinicians, and Isabel DDx Companion was 4 of 6 patients (66.7%), 2 of 6 patients (33.3%), and 0 patients, respectively. If including differential diagnoses, the accuracy was 5 of 6 (83.3%) for GPT-4, 3 of 6 (50.0%) for clinicians, and 2 of 6 (33.3%) for Isabel DDx Companion"
Interestingly, Glenn Reid also escaped from making software (Touchtype.app, PasteUp.app; he wrote the "Green Book", _PostScript Language Program Design_) to making dovetail-joined furniture by hand.
That said, I've always described the "Maker" movement as "Geeks who missed shop class", and have argued that the world would be a better place if the Sloyd system of woodworking were prevalent as a basic constituent of education:
>Students may never pick up a tool again, but they will forever have the knowledge of how to make and evaluate things with your hand and your eye and appreciate the labor of others.
> It’s worth noting that many of the results in Thinking, Fast and Slow didn’t hold up to replication.
The irony is that Kahneman himself had written a paper warning about generalizing from studies with small sample sizes:
"Suppose you have run an experiment on 20 subjects, and have obtained a significant re-
sult which confirms your theory (z = 2.23, p < .05, two-tailed). You now have cause to run an additional group of 10 subjects. What do you think the probability is that the results will be significant, by a one-tailed test, separately for this group?"
"Apparently, most psychologists have an exaggerated belief in the likelihood of successfully replicating an obtained finding. The sources of such beliefs, and their consequences for the conduct of scientific inquiry, are what this paper is about."
Then 40 years later, he fell into the same trap. He became one of the "most psychologists".
Thatchaphol Saranurak is widely considered one of the greatest up-and-coming TCS talents and pumps out meaningful results like a monster. I expect him to leave Michigan for MIT/Berkeley/Stanford, tbh. I took this class and it was comparable to the UMich honors math sequence.
The interviews I had at Apple were surprisingly straightforward compared to those at many other tech companies. I didn’t need to memorize anything, partly because I knew the concepts they were asking about from putting them into practice over the last decade of work, but also because the questions were quite straightforward, almost general knowledge. For example:
• What is an SSL certificate? Follow-up question: What is inside one?
• How would you go about designing a modern NTP to sync a fleet of local-first systems?
• Do you see a security problem with this code? If yes, what is it and how would you fix it? (A hypothetical example of this kind of snippet is sketched after this list.)
• Same question as above but with twelve different pieces of code, each one more complicated than the previous one.
• What would you consider Personal Identifying Information (PII) in a food delivery app?
• Explain XSS, SQL injection, XSRF…
• Hashing vs. Encryption
• How to protect customers from phishing attacks?
• Say you discover a data leak, for example, a backup in a public S3 bucket. Communicate leak or not? If yes, who will you communicate with? How will you communicate?
• Improve event processing logic and performance of the following Java application. Our company revolves around processing events, quickly. Implement the alarm system to monitor incoming events for any delays in processing events by other event processors. (If you find it surprising that Apple uses Java, I share that sentiment. Interestingly, I opted for Go instead of Java to solve the problem during the interview, and the interviewers were pleased to explore a new programming language. I believe this aspect earned me some additional points.)
• Design a table booking system for a restaurant (database, API endpoints, etc.)
• A continuous integration (CI) build step fails (in tests), what do you do?
• What is your ideal Developer→Customer feedback loop? Realistically, what would a bad one look like?
• How would you design macOS Photos.app? (The best part of the interview was a discussion about what to do with content that is out of view and the performance implications of different solutions.)
• Third-party company wants Apple’s data to show ads. What do you do? (The obvious answer is you don’t give the raw data to them, but if you have to, how do you anonymize it?)
• so on and so forth.
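To illustrate the kind of "spot the security problem" snippet mentioned above, here is a hypothetical example of my own (not the actual interview code):

    # The unsafe version builds SQL by string formatting, so a name like
    # "' OR '1'='1" injects SQL; the safe version uses a parameterized query.
    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, name: str):
        return conn.execute(f"SELECT id, name FROM users WHERE name = '{name}'").fetchall()

    def find_user_safe(conn: sqlite3.Connection, name: str):
        return conn.execute("SELECT id, name FROM users WHERE name = ?", (name,)).fetchall()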
There was this interviewer who appeared to be committed to asking me one hundred questions in the span of an hour, and while I didn’t count, I think we got pretty close to a hundred. The questions started at an easy level, for example, what is HTML, and increased in complexity every five minutes or so. Fortunately, I realized what type of interview this was and proceeded to give one-sentence answers, sometimes just 2-3 words, before jumping to the next question. One of the funniest, craziest, most exhausting, but somehow rewarding interviews I have ever had in my whole career. At times I thought the interviewer was some sort of artificial intelligence and the webcam was fake. Later, I met the interviewer in person and we became friends.
Now, having become an insider, I can attest that it’s acceptable to some extent to memorize responses for coding exercises and rehearse answers for behavioral questions. As you rightly point out, many other candidates engage in these practices as well. However, if a candidate’s proficiency is limited to merely regurgitating code without a genuine understanding and, consequently, an ability to articulate the underlying concepts in both the programming and behavioral interviews, that candidate is likely to face failure.
Before landing the job at Apple, I was also in the interview process for a position at Microsoft. The nature of the work appeared to be significantly more fulfilling than what I currently have. The level of technical expertise required during the interviews was remarkable, with the two medium- and hard-level LeetCode-style questions being the relatively easier segment. Memorizing answers was practically impossible given the interviewing style. I would have unquestionably accepted that job if it weren’t for the fact that I needed additional income to continue providing for my extended family and covering my mortgage.
In conclusion, if you decide to memorize LeetCode problems, ensure that you also grasp the underlying concepts. Unless all your interviewers are gullible, it will become quite apparent if you're just mechanically writing code without a genuine understanding of the problem or the interview's objectives.
Any ideas where to start learning C++ as an embedded developer? I have written many lines of bare-metal C code and want to transition to higher-level jobs. I see many expensive or completely free courses, but I am not sure which one would be useful in my complicated situation.
If you have an application server then you still have RPCs coming from your user interface, even if you run the whole DB in process. And indeed POSIX has nothing to say about this. Instead people tend to abuse HTTP as a pseudo-RPC mechanism because that's what the browser understands, it tends to be unblocked by firewalls etc.
One trend in OS research (what little exists) is the idea of the database OS. Taking that as an inspiration I think there's a better way to structure things to get that same simplicity and in fact even more, but without many of the downsides. I'm planning to write about it more at some point on my company blog (https://hydraulic.software/blog.html) but here's a quick summary. See what you think.
---
In a traditional 3-tier CRUD web app you have the RDBMS, then stateless web servers, then JavaScript and HTML in the browser running a pseudo-stateless app. Because browsers don't understand load balancing you probably also have an LB in there so you can scale and upgrade the web server layer without user-visible downtime. The JS/HTML speaks an app specific ad-hoc RPC protocol that represents RPCs as document fetches, and your web server (mostly) translates back and forth between this protocol and whatever protocol your RDBMS speaks layering access control on top (because the RDBMS doesn't know who is logged in).
This approach is standard and lets people use web browsers which have some advantages, but creates numerous problems. It's complex, expensive, limiting for the end user, every app requires large amounts of boilerplate glue code, and it's extremely error prone. XSS, XSRF and SQL injection are all bugs that are created by this choice of architecture.
These problems can be fixed by using a "two-tier architecture". In a two-tier architecture you have your RDBMS cluster directly exposed to end users, and users log in directly to their RDBMS account using an app. The app ships the full database driver and uses it to obtain RPC services. Ordinary CRUD/ACL logic can be done with common SQL features like views, stored procedures and row-level security [1][2][3]. Any server-side code that isn't neatly expressible in SQL is implemented as RDBMS server plugins.
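To make that concrete, here is a minimal sketch of my own (hypothetical table, role and host names, nothing to do with any particular product) of what two-tier looks like with PostgreSQL row-level security: the desktop app connects straight to the database as the end user, and the database itself decides which rows that user may see.

    # One-time schema setup, run as the schema owner (plain SQL, shown as comments):
    #   CREATE TABLE orders (id serial PRIMARY KEY, owner text NOT NULL, total numeric);
    #   ALTER TABLE orders ENABLE ROW LEVEL SECURITY;
    #   CREATE POLICY orders_owner ON orders USING (owner = current_user);
    #   GRANT SELECT, INSERT ON orders TO app_users;
    #
    # Client side of the two-tier app (pip install psycopg2-binary):
    import psycopg2

    conn = psycopg2.connect(host="db.example.com", dbname="shop",
                            user="alice", password="...")   # the end user's own DB account
    with conn, conn.cursor() as cur:
        cur.execute("SELECT id, total FROM orders")          # the RLS policy filters to alice's rows
        for row in cur.fetchall():
            print(row)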
At a stroke this architecture solves the following problems:
1. SQL injection bugs disappear by design because the RDBMS enforces security, not a highly privileged web app. By implication you can happily give power users like business analysts direct SQL query access to do obscure/one-off things that might otherwise turn into abandoned backlog items.
2. XSS, XSRF and all the other escaping bugs go away, because you're not writing a web app anymore - data is pulled straight from the database's binary protocol into your UI toolkit's data structures. Buffer lengths are signalled OOB across the entire stack.
3. You don't need a hardware/DNS load balancer anymore because good DB drivers can do client-side load balancing.
4. You don't need to design ad-hoc JSON/REST protocols that e.g. frequently suck at pagination, because you can just invoke server-side procedures directly. The DB takes care of serialization, result streaming, type safety, access control, error reporting and more.
5. The protocol gives you batching for free, so if you have some server logic written in e.g. JavaScript, Python, Kotlin, Java etc., then it can easily use query results as input or output and you can control latency costs. With some databases like PostgreSQL you also get server push/notifications (a tiny sketch follows this list).
6. You can use whatever libraries and programming languages you want.
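For point 5, a tiny sketch of what the server-push side can look like from a client (psycopg2 here; the channel name is made up):

    # Subscribe to notifications the server sends with e.g. NOTIFY order_events, 'paid'.
    import select
    import psycopg2
    import psycopg2.extensions

    conn = psycopg2.connect("dbname=shop user=alice")
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    cur = conn.cursor()
    cur.execute("LISTEN order_events;")

    while True:
        if select.select([conn], [], [], 5) == ([], [], []):
            continue                      # timed out, nothing arrived
        conn.poll()
        while conn.notifies:
            n = conn.notifies.pop(0)
            print("push from server:", n.channel, n.payload)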
This architecture lacks popularity today because to make it viable you need a few things that weren't available until very recently (and a few useful things still aren't yet). At minimum:
1. You need a way to distribute and update GUI desktop apps that isn't incredibly painful, ideally one that works well with JVM apps because JDBC drivers tend to have lots of features. Enter my new company, stage left (yes! that's right! this whole comment is a giant ad for our product). Hydraulic Conveyor was launched in July and makes distributing and updating desktop apps as easy as with a web app [4].
2. You're more dependent on having a good RDBMS. PostgreSQL only got RLS recently and needs extra software to scale client connections well. MS SQL Server is better but some devs would feel "weird" buying a database (it's not that expensive though). Hosted DBs usually don't let you install arbitrary extensions.
3. You need solid UI toolkits with modern themes. JetBrains has ported the new Android UI toolkit to the desktop [5] allowing lots of code sharing. It's reactive and thus has a Kotlin language dependency. JavaFX is a more traditional OOP toolkit with CSS support, good business widgets and is accessible from more languages for those who prefer that; it also now has a modern GitHub-inspired SASS based style pack that looks great [6] (grab the sampler app here [7]). For Lispers there's a reactive layer over the top [8].
4. There are some smaller tools that would be useful, e.g. for letting you log into your DB with OAuth, or for ensuring DB traffic can get through proxies.
Downsides?
1. Migrating between DB vendors is maybe harder. Though, the moment you have >1 web server you have the problem of doing a 'live' migration anyway, so the issues aren't fundamentally different, it'd just take longer.
2. Users have to install your app. That's not hard, and in a managed IT environment the apps can be pushed out centrally. Developers often get hung up on this point, but the success of the installed app model on mobile, the popularity of Electron and the whole video game industry show users don't actually care much, as long as they plan to use the app regularly.
3. To do mobile/tablet you'd want to ship the DB driver as part of your app. There might be oddities involved, though in theory JDBC drivers could run on Android and be compiled to native for iOS using GraalVM.
4. Skills, hiring, etc. You'd want more senior devs to trailblaze this first before asking juniors to learn it.
Evgeny Morozov wrote a very interesting, but also quite long, critique of the "technofeudalism" concept from a Marxist perspective in the New Left Review back in 2022:
> In the case of well-known figures like Varoufakis and Mazzucato, tantalizing their audiences with invocations of feudal glamour may provide a media-friendly way to recycle arguments they have made before. In Varoufakis’s case, techno-feudalism seems to be mostly about the perverse macroeconomic effects of quantitative easing. For Mazzucato, ‘digital feudalism’ refers to the unearned income generated by tech platforms. Neo-feudalism is often proposed as a way to bring conceptual clarity to the most advanced sectors of the digital economy, where the left’s brightest minds still find themselves very much in the dark. Are Google and Amazon capitalists? Are they rentiers, as Brett Christophers’s Rentier Capitalism suggests? What about Uber? Is it just an intermediary, a rent-taking platform that has inserted itself between drivers and passengers? Or is it producing and selling a transportation service? These questions are not without consequences for how we think about contemporary capitalism itself, heavily dominated by technology companies.
> The idea that feudalism is making a comeback also coheres with left critiques condemning capitalism as extractivist. If today’s capitalists are mere lazy rentiers who contribute nothing to the production process, don’t they deserve to be downgraded to the status of feudal landlords? This embrace of feudal imagery by media- and meme-friendly figures of the left intelligentsia shows no signs of ceasing. Ultimately, though, the popularity of feudal-speak is a testament to intellectual weakness, rather than media savviness. It is as if the left’s theoretical framework can no longer make sense of capitalism without mobilizing the moral language of corruption and perversion. In what follows I delve into some landmark debates on the distinguishing features that differentiate capitalism from earlier economic forms—and those that define political-economic operations in the new digital economy—in hope that a critique of techno-feudal reason may throw fresh light on the world we’re in.
I really enjoyed reading the latter last year, because it does a great job slowly building up. By the end, I felt like I didn't just understand the models, but also how the author found them. I can recognize these patterns in nature, and figure out how they were generated.
One long-term project I’m planning is actually a fully open source desktop computing platform. While it was originally meant as a learning project, I realized a few years ago that Ben Eater[0] has done this in a way far superior to anything I could create myself, so I started focusing on very basic hardware, beginning with power supplies. My goal ultimately is to select a processor that is as close to open source as possible, design a motherboard around it, make it fast enough to be suitable for general-purpose use, and design a PSU, daughter boards (hot-swap SATA backplane, I/O ports), a few PCIe x16 lanes, and ultimately a custom graphics card.
Designing the motherboard is surprisingly easy; the way PCIe is set up makes routing high-speed connections fairly straightforward, and most I/O chips are just some sort of bus input and the interface output. The hardest part is finding ICs that actually have good documentation not locked behind an NDA, or that have good alternatives, as one of my criteria is that every chip I select must have a pin-for-pin drop-in replacement available.
6 Gb/s SATA is the hardest one to source. I suspect this problem will only compound if I ever get to creating a graphics card.
Microsoft IntelliTest (formerly Pex) [1] internally uses the Z3 constraint solver and traces the program's data and control flow well enough to generate the input values needed to reach a given statement of code. It can even run the program to figure out runtime values, hence the technique is called Dynamic Symbolic Execution [3]. We have the technology for this; it just hasn't been applied properly yet.
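As a hedged illustration of the underlying idea (not of IntelliTest's actual API): given a branch condition observed along a path, a solver like Z3 can produce an input that reaches the guarded statement.

    # Minimal sketch with the Z3 Python bindings (pip install z3-solver).
    # Hypothetical branch in the program: if x * 2 + 3 == 17 and x > 5: <target statement>
    from z3 import Int, Solver, sat

    x = Int("x")
    s = Solver()
    s.add(x * 2 + 3 == 17, x > 5)
    if s.check() == sat:
        print("input that reaches the target:", s.model()[x])   # x = 7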
I would also like to be able to point at any function in my IDE and ask it:
- "Can you show me the usual runtime input/output pairs for this function?"
- "Can you show me the preconditions and postconditions this function obeys?"
There are plenty of research prototypes doing this (Whyline [4], Daikon [5], ...) but sadly, no tool usable for the daily grind.
I have a list of some I'm keeping tabs on; I'll share it here. I do it out of a combination of personal obsession and because I want to source angel investments.
Note though that I'm very biased toward AI companies...
Most established, clear product-market fit:
- OpenAI
- Midjourney
- Character ai
- Runway ML
Ones that are interesting:
- Adept AI
- Modal, Banana.dev
- new.computer
- Magic.dev
- Modular (Mojo)
- tiny corp
- Galileo
- Hippo ML
- Tenstorrent
- contextual.ai
- Chroma
- e2b.dev
- Steamship
- Patterns.app
- GGML
Ones that I want to learn more about before deciding:
- Inflection AI
- GetLindy
- Embra
- Jam.dev
- Vocode.dev
That's about 50% of my list. Happy to clean up the rest and write a post if there's interest
It's somewhat silly to read about animation using a static doc. My preferred intro to quaternions in the context of motion is the pair of articles at https://acko.net/tag/quaternions/. They were critical in helping me learn quaternions for computer graphics. I also recommend https://acko.net/blog/how-to-fold-a-julia-fractal/ as a primer if you aren't comfortable with regular complex numbers.
I launched a baby book, Computer Engineering for Babies (https://computerengineeringforbabies.com/), back in September, and it has since surpassed my regular salary by 3x. It’s still a side project because I am having a hard time leaving my job. I’ll probably leave soon, but when you have a mortgage and kids, something about a regular paycheck is hard to leave behind.
I speak English natively, and have learned French, German, Sesotho and Japanese with a mixture of books and immersion. Obviously immersion is the best way.
I used Duolingo to help me learn Spanish, and I was struck by how artificial it is. It may teach you to understand a language, but not to speak it.
Far superior, in my experience, is https://www.languagetransfer.org, which has free audio lessons to learn French, Spanish, Italian, Greek, Turkish, Arabic, and Swahili (and English for Spanish speakers). This is the most natural method short of immersion I have ever experienced, and very effective. Amazingly, it is all done by one man, and runs on donations.
There is an app, which is delightfully clean and usable.
Mihalis also has an introduction to music theory, which gets excellent reviews!