*nasal voice* Actually, there's no difference between AOT compilation and interpreting either.
I agree with your point. I think the value of a JIT is that a program can best be sped up when there's a large amount of context - and there's a lot of it while the program is running. A powerful enough JIT compiler can even turn an "interpreter" into a "JIT compiler": see PyPy and Truffle/Graal.
Doesn't explain why Chrome beat IE. Or why macOS has a higher desktop market share than Linux.
Wine and Proton should have levelled the playing field. But they haven't. Also, if you've only just started using Linux, I recommend you wait a few years before forming an opinion.
This might be why OpenBSD looks attractive to some. Its kernel and all the different applications are fully integrated with each other -- no distros! It also tries to be simple, I believe, which makes it more secure and overall less buggy.
To be honest, I think OSes are boring, and should have been that way since maybe 1995. The basic notions haven't changed since 1970, and the more modern GUI stuff hasn't changed since at least the early '90s. Some design elements, like tree-like file systems, WIMP GUIs, per-user privileges, and the fuzziness of what an "operating system" even is and what its role should be, are perhaps even arbitrary, but they can serve as a mature foundation for better-conceived ideas, much as ZFS (which implements, in a very well-engineered manner, the tree-like data storage that's been standard since the '60s) can serve as a foundation for Postgres (which implements a better-conceived relational design).
I'm wondering why OSS - which, according to one of its acolytes, makes all bugs shallow - couldn't make its flagship OS more stable and boring. It's produced an anarchy of packaging systems, breaking upgrades and updates, an unstable glibc, desktop environments that are different and changing seemingly for the sake of it, sound that keeps breaking, power management iffiness, etc.
OpenBSD—all the BSDs, really—has an even more unstable ABI than Linux. The syscall interface, in particular, is subject to change at any time. Statically linked binaries built for one Linux version will generally Just Work with any subsequent version; this is not the case for BSD!
There's a lot to like about BSD, and many reasons to prefer OpenBSD to Linux, but ABI backward-compatibility is not one of them!
One of Linux's main problems is that it's difficult to supply and link versions of library dependencies local to a program. Janky workarounds such as containerization, AppImage, etc. have been developed to combat this. But in the Windows world, applications literally ship, and link against, the libc they were built with (msvcrt, now ucrt I guess).
Why should everything pretend to be a 1970s minicomputer shared by multiple users connected via teletypes?
If there's one good idea in Unix-like systems that should be preserved, IMHO it's independent processes, possibly written in different languages, communicating with each other through file handles. These processes should be isolated from each other, and from access to arbitrary files and devices. But there should be a single privileged process, the "shell" (whether command line, TUI, or GUI), that is responsible for coordinating it all, by launching and passing handles to files/pipes to any other process, under control of the user.
This could be done by typing file names, selecting from a drop-down list, or by drag-and-drop. Other program arguments should be defined in some standard format, so that e.g. a text-based shell could auto-complete them like in VMS, and a graphical one could build a dialog box from the definition.
I don't want to fiddle with permissions or user accounts, ever. It's my computer, and it should do what I tell it to, whether that's opening a text document in my home directory, or writing a disk image to the USB stick I just plugged in. Or even passing full control of some device to a VM running another operating system that has the appropriate drivers installed.
But it should all be controlled by the user. Normal programs of course shouldn't be able to open "/dev/sdb", but neither should they be able to open "/home/foo/bar.txt". Outside of the program's own private directory, the only way to access anything should be via handles passed from the launching process, or some other standard protocol.
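For illustration, today's POSIX primitives can already approximate that handle-passing model. A minimal C sketch (the path and the fork-based structure are just for brevity; a real design would exec a separate, sandboxed program and hand over the descriptor by inheritance or SCM_RIGHTS):

```c
/* The "shell" side opens the file itself; the worker only ever sees
 * the already-open descriptor, never the path. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd = open("/home/foo/bar.txt", O_RDONLY);   /* hypothetical path */
    if (fd < 0) { perror("open"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                                  /* the unprivileged worker */
        char buf[256];
        ssize_t n = read(fd, buf, sizeof buf);       /* uses the handle it was given */
        if (n > 0) write(STDOUT_FILENO, buf, (size_t)n);
        _exit(0);
    }
    close(fd);
    waitpid(pid, NULL, 0);
    return 0;
}
```

Capsicum on FreeBSD and fd-passing setups like systemd socket activation push the same idea further: the child gets handles, not ambient filesystem access.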
And get rid of "everything is text". For a computer, parsing text is like a human having a book read to them over the phone by an illiterate person who can only describe the shape of each letter, one by one. Every system-level language should support structs, and those are like telepathy in comparison. But no, that's scaaaary, hackers will overflow your buffers to turn your computer into a bomb and blow you to kingdom come! Yeah, not like there's ever been any vulnerability in text parsers, right? Making sure every special shell character is properly escaped is so easy! Sed and awk are the ideal way to manipulate structured data!
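On the structs-versus-text point, here is a minimal sketch of the alternative: two processes exchanging a fixed-layout record over a pipe, with no quoting, escaping, or parsing. The field names are made up, and sending raw structs assumes both ends share the same ABI, which holds for same-machine IPC:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

struct record {          /* fixed, explicit layout instead of a text format */
    uint32_t id;
    double   value;
    char     name[32];
};

int main(void) {
    int p[2];
    if (pipe(p) != 0) return 1;

    if (fork() == 0) {                       /* producer */
        struct record r = { .id = 42, .value = 3.5 };
        strncpy(r.name, "example", sizeof r.name - 1);
        write(p[1], &r, sizeof r);           /* one write, no serialization code */
        _exit(0);
    }

    close(p[1]);                             /* consumer */
    struct record r;
    if (read(p[0], &r, sizeof r) == (ssize_t)sizeof r)   /* one read, no parsing */
        printf("id=%u value=%g name=%s\n", (unsigned)r.id, r.value, r.name);
    wait(NULL);
    return 0;
}
```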
It would not solve the ABI problem, but it would at least give you an opinionated, end-to-end API that was at some point the official API of an OS. Its design has gotten some praise, too.
It was more about everything since the Amiga being a regression. BeOS was sometimes called a successor (in spirit) to the Amiga: a fun, snappy, single-user OS.
I regularly install HaikuOS in a VM to test it and I think I could probably use it as a daily driver, but ported software often does not feel completely right.
Because Linux is not an OS. The flagship OSS OS is Ubuntu, and it's mostly pretty stable. But OSS inherently implies the ability to make your own OS that's different from someone else's OS, so a bunch of people did just that.
Ubuntu still suffers the same kind of breakage, though. You can't take a moderately complex GUI application that was built on a 2014 Ubuntu release and run it on the latest version. Heck, there's a good chance you can't even build it on the newer version without needing to update it somehow. It's a property of the library ecosystem around Linux, not the behaviour of a given distro.
(OK, I have some experience with vendors whose latest, month-old release lists distro support where the most up-to-date option is still 6 months past EOL. I have managed to hack something together that gets them to work on the newer release, but it's extremely painful and very much not what either the distros or the software vendors want to support.)
Is it the flagship of Linux distros right now? I thought RHEL (the distro it's most common to see paid software packaged for) would be up there, alongside its offshoots, Rocky and Fedora.
This might be a silly thing to point out, but where do people draw the line between an allocation happening or not happening? You still need to track vacant/occupied memory even when there's no OS or other programs around. It's especially bewildering when people claim that some database program "doesn't allocate".
This is the fundamental question which motivated the post. :)
I think there are a few different ways to approach the answer, and it kind of depends on what you mean by "draw the line between an allocation happening or not happening." At the surface level, Zig makes this relatively easy, since you can grep for all instances of `std.mem.Allocator` and see where those allocations are occurring throughout the codebase. This only gets you so far though, because some of those Allocator instances could be backed by something like a FixedBufferAllocator, which uses already allocated memory either from the stack or the heap. So the usage of the Allocator instance at the interface level doesn't actually tell you "this is for sure allocating memory from the OS." You have to consider it in the larger context of the system.
And yes, we do still need to track vacant/occupied memory, we just do it at the application level. At that level, the OS sees it all as "occupied". For example, in kv, the connection buffer space is marked as vacant/occupied using a memory pool at runtime. But, that pool was allocated from the OS during initialization. As we use the pool we just have to do some very basic bookkeeping using a free-list. That determines if a new connection can actually be accepted or not.
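The bookkeeping really is that simple. This isn't kv's actual code, just a hypothetical C sketch of the same idea: one up-front allocation, then a free list decides whether a slot (and thus a new connection) is available:

```c
#include <stdio.h>
#include <stdlib.h>

#define SLOT_SIZE  4096
#define SLOT_COUNT 1024

struct slot {
    struct slot  *next_free;      /* meaningful only while the slot is vacant */
    unsigned char buf[SLOT_SIZE]; /* per-connection buffer space */
};

static struct slot *pool;         /* backing storage, allocated once at init */
static struct slot *free_head;    /* head of the free list */

static int pool_init(void) {
    pool = malloc(sizeof(struct slot) * SLOT_COUNT);   /* the only OS allocation */
    if (!pool) return -1;
    for (size_t i = 0; i < SLOT_COUNT; i++)            /* thread every slot onto the list */
        pool[i].next_free = (i + 1 < SLOT_COUNT) ? &pool[i + 1] : NULL;
    free_head = &pool[0];
    return 0;
}

static struct slot *pool_acquire(void) {   /* NULL => refuse the new connection */
    struct slot *s = free_head;
    if (s) free_head = s->next_free;
    return s;
}

static void pool_release(struct slot *s) { /* push the slot back when the connection closes */
    s->next_free = free_head;
    free_head = s;
}

int main(void) {
    if (pool_init() != 0) return 1;
    struct slot *conn = pool_acquire();
    printf("acquired slot at %p\n", (void *)conn);
    pool_release(conn);
    free(pool);
    return 0;
}
```

After pool_init, "allocation" is just popping a node off a list; the OS is never asked for memory again.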
Hopefully that helps. Ultimately, we do allocate, it just happens right away during initialization and that allocated space is reused throughout program execution. But, it doesn't have to be nearly as complicated as "reinventing garbage collection" as I've seen some other comments mention.
> This is sort of why I think software development might be the only real application of LLMs outside of entertainment.
Wow. What about also, I don't know, self-teaching*? In general, you have to be very arrogant to say that you've experienced all the "real" applications.
* - For instance, today and yesterday, I've been using LLMs to teach myself about RLC circuits and "inerters".
I would absolutely not trust an LLM to teach me anything on its own. It has introduced ideas I hadn't heard of, which I then looked up in actual sources to confirm they were valid solutions. Daily usage has shown it will happily lead you down the wrong path, and usually the only way to know it's the wrong path is if you already knew what the solution should be.
LLMs MAY be a version of office hours or asking the TA, if you only have the book and no actual teacher. I have seen nothing that convinces me they are anything more than the latest version of the hammer in our toolbox. Not every problem is a nail.
> LLMs MAY be a version of office hours or asking the TA
In my experience, most TAs are not great at explaining things to students. They were often the best student in their class, and they can't relate to students who don't grasp things as easily as they do--"this organic chemistry problem set is so easy; I don't know why you're not getting it."
But an LLM has infinite patience and can explain concepts in a variety of ways, in different languages and at different levels. Some bilingual students speak English just fine but still think and reason in their native language; that's not a problem for an LLM.
A teacher in an urban school system with 30 students, 20 of whom need customized lesson plans due to neurological divergence, can use LLMs to create those lesson plans.
Sometimes you need things explained to you like you're five years old and sometimes you need things explained to you as an expert.
On deeper topics, LLMs give their references, so a student can and should confirm what the LLM is telling them.
Self-teaching pretty much doesn't work. For many decades now, the barrier hasn't been access to information; it's been the "self" part. It turns out most people need regimen, accountability, and strictness, which AI just doesn't provide, because it's a yes-man.
It's not bogus at all. We've had access to 100,000x more information than we know what to do with for a while now. Right now, you can go online and learn disciplines you've never even heard of before.
So why aren't you a master of, I don't know, reupholstery? Because the barrier isn't information, it's you. You're the bottleneck; we all are, because we're human.
And AI really just does not help here. It's the same problem as with Professor Google: I can just turn off the computer, and I will. This is how it is for the vast majority of people.
Most people who claim to be self-taught aren't even self-taught. They did a course, or multiple courses. Sure, it's not traditional college, but that's not self-taught.
It’s somewhat delusional and potentially dangerous to assume that chatting with an LLM about a specific topic is self-teaching beyond the most surface-level understanding of a topic. No doubt you can learn some true things, but you’ll also learn some blatant falsehoods and a lot of incorrect theory. And you won’t know which is which.
One of the most important factors in actually learning something is humility. Unfortunately, LLM chatbots are designed to discourage this in their users. So many people think they’re experts because they asked a chatbot. They aren’t.
I think everything you said was true 1-2 years ago. But the current LLMs are very good about citing their sources, and hallucinations are exceedingly rare. Gemini, for example, frequently directs you to a website or video that backs up its answer.
> It’s somewhat delusional and potentially dangerous to assume that chatting with an LLM about a specific topic is self-teaching beyond the most surface-level understanding of a topic
It's delusional and very arrogant of you to confidently assert anything without proof: a topic like RLC circuits has a body of rigorous theorems and proofs underlying it*, and nothing stops you from piecing it together using an LLM.
* - See "Positive-Real Functions", "Schwarz-Pick Theorem", "Schur Class". These are things I've been mulling over.
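For anyone curious, the "Positive-Real Functions" keyword refers to Brune's classical condition on a driving-point impedance Z(s); this is standard textbook material, not LLM output:

```latex
\[
  Z \ \text{is positive-real} \iff
  \begin{cases}
    Z \ \text{is analytic on } \{\, s : \operatorname{Re} s > 0 \,\}, \\
    Z(s) \in \mathbb{R} \ \text{for real } s > 0, \\
    \operatorname{Re} Z(s) \ge 0 \ \text{whenever } \operatorname{Re} s > 0.
  \end{cases}
\]
```

Brune's theorem says these are exactly the impedances realizable with R, L, C (and ideal transformers), which is presumably where the Schwarz-Pick/Schur-class machinery enters, via the Cayley transform to a function bounded by 1 on the right half-plane.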
This doesn't have the distracting long "ſ"es of Oliver Byrne's editions. It was arguably a bonkers decision to have it in the first place, given that the long S was on the way out by then.
[EDIT: Corrected to use his actual first name, "Oliver"]