To be fair, Postgres still suffers from a poor choice of MVCC implementation (copy-on-write rather than an undo log). This one small choice has a huge number of negative knock-on effects once your load becomes non-trivial.
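One concrete knock-on effect, sketched roughly in Python (assuming psycopg2 and a scratch database; the DSN and table name are made up): every UPDATE writes a whole new row version, so the old version hangs around as a dead tuple until (auto)vacuum reclaims it.

```python
# Rough sketch: watch dead tuples pile up after an UPDATE.
# Assumes psycopg2 is installed and "scratch" is a throwaway database.
import psycopg2

conn = psycopg2.connect("dbname=scratch")  # hypothetical DSN
conn.autocommit = True
cur = conn.cursor()

cur.execute("DROP TABLE IF EXISTS mvcc_demo")
cur.execute("CREATE TABLE mvcc_demo (id int PRIMARY KEY, payload text)")
cur.execute("INSERT INTO mvcc_demo SELECT g, 'x' FROM generate_series(1, 10000) g")

# Copy-on-write MVCC: the UPDATE writes new row versions, and the old
# versions are left behind as dead tuples until (auto)vacuum reclaims them.
cur.execute("UPDATE mvcc_demo SET payload = 'y'")

cur.execute("""
    SELECT n_live_tup, n_dead_tup
    FROM pg_stat_user_tables
    WHERE relname = 'mvcc_demo'
""")
print(cur.fetchone())  # counters may lag briefly; stats are reported asynchronously
```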
I'm also curious about this, especially if anyone has operated Postgres at any kind of scale. At low scale, all databases are fine (assuming you understand what the particular database you're using does and doesn't guarantee).
Postgres has some really great features from a developer point of view, but my impression is that it is much tougher from an operations perspective. Not that other databases don't have ops requirements, but MySQL doesn't seem to suffer from a lot of the tricky issues, corner cases and footguns that Postgres has (e.g. the issues mentioned in a sibling thread around necessary maintenance having no suitable window to run at any point in the day). Again, I note this is about ops, not development. MySQL has well-known dev footguns. Personally I find dev footguns easier to countenance because they likely present less business risk than operational ones.
I would like to know if I am mistaken in this impression.
I'd say that the 'poor man's closures' aspect of OOP - that is, being able to package some context along with behaviour - is the most useful part for day-to-day code. Only occasionally is inheritance of anything other than an interface valuable.
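A toy sketch of that 'context + behaviour' packaging, with purely hypothetical names:

```python
# The same "context + behaviour" packaged two ways (hypothetical example).

# As a closure: the rate is the captured context, the function is the behaviour.
def make_taxer(rate):
    def apply_tax(amount):
        return amount * (1 + rate)
    return apply_tax

# As an object: the rate lives in instance state, the method is the behaviour.
class Taxer:
    def __init__(self, rate):
        self.rate = rate

    def apply_tax(self, amount):
        return amount * (1 + self.rate)

vat = make_taxer(0.20)
also_vat = Taxer(0.20)
assert vat(100) == also_vat.apply_tax(100)  # same packaging, different syntax
```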
Whether this is an endorsement of OOP or a criticism is open to interpretation.
> Your program must be really small and scoped for this to make sense.
to me suggests that it's not really global state if your program has to stay small and scoped. In a sense, your program has just become the context boundary for the state, instead of a function, or class, or database.
I realise that this line of argument effectively leads to the idea that no state is global, but perhaps that gives us a better way to understand the claim that 'global variables can work', which they undoubtedly can. It's fine for a program (or a thread, as in the original article) to be the context which bounds a variable's scope.
Well, it's still technically global state if it's a collection of scoped singletons; it's not much different from having a map of object names and their data as one big global variable, just formatted slightly more practically.
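A hypothetical Python sketch of how interchangeable the two shapes are:

```python
# Hypothetical sketch: "scoped singletons" vs. one big global map.

# Style 1: a handful of module-level singletons.
config = {"debug": True}
db_pool = object()   # stand-in for a connection pool
logger = object()    # stand-in for a logger

# Style 2: the same thing, folded into a single global registry.
REGISTRY = {
    "config": {"debug": True},
    "db_pool": object(),
    "logger": object(),
}

# Either way, any code anywhere in the program can reach in and mutate it;
# the second form just makes the "one global blob" nature explicit.
REGISTRY["config"]["debug"] = False
```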
Indeed, we also have technology that can read the written language. Seems like that would be the way to go, and you get to keep backwards compatibility with eyeballs.
I appreciate this survey for how thought-provoking it is. Ironically, I'd say that the survey is itself art, and not a piece of art that AI in its current state could ever pull off. Maybe that's when the AI art Turing test will truly be passed: when AI is capable of curating such a survey.
For me, what really distinguished the more obviously human art is that it had a story. It was saying something more than the image itself. This is why Meeting at Krizky stands out as obviously human, as does The Wounding of Christ, whereas the muscular man does not.
As with other commenters, I'm surprised the author liked the big gate so much. To me it was one of the easier AI pieces to spot, just by virtue of its composition. It's a big gate with no clear reason for being there, and there are no characters to whom the gate means something. It's just a big gate. Obvious slop. The Paris scene, on the other hand, did convince me. It does a pretty good job of capturing a mood; it feels a bit Lowry, but more French Impressionist.
I think this parallels good character writing. A few words of dialogue or action can reveal complex inner beliefs and goals, and the absence of those can feel hollow. It's why "have the lambs stopped screaming?" is more compelling than "somehow, Palpatine returned".
To some extent, we have already had this competition between human-made high art and human-made generic slop for hundreds of years. The slop has always been more popular, to the chagrin of those who consider high art superior. I don't blame anyone for consuming slop. I do. It's fun.
This is a bit of a ramble, but I honestly appreciate that this survey genuinely adds another perspective to the question of what art is. Sorry if that sounds extremely pretentious. But then again, I like slop.
To be fair, newer research is demonstrating that smaller, more power-efficient models with the same performance are possible, so the hope is that these giant LLMs are just a stepping stone to a less energy-hungry place. In contrast, proof of work fundamentally needs more energy the bigger the network gets. It's no guarantee, but we can at least see some hope that, as the energy impact drops and more value is found, 'AI' will cross the threshold of being worth the energy.
Edit: although yes I do agree that the 'value' part is tricky. If internet spam can generate more 'value' for some people than doing science, then when intelligence is cheap we are in for a rough time.
To be clear, I'm not against AI or LLMs as a technology in general. What I'm against is the unethical way these LLMs are trained, and how people are dismissive of the damage they're doing, saying "we're doing something amazing, we need no permission".
Also, I'm very aware that there are many smaller models in production which can run in real time with negligible power and memory requirements (e.g. the human/animal detection models in mirrorless cameras, especially Sony and Fuji).
However, to be honest, I haven't seen the same research on LLMs yet. Can you share any if you have it? I'd be glad to read it.
Lastly, I'm aware that AI is not something that only covers object detection, NLP, etc. You can create very useful and lightweight AI systems for many problems, but the way LLMs are pumped up by that unstopping hype machine bothers me a lot.
This is sort of the role that L3 cache plays already. Your proposal would effectively be an upgradable L4 cache. No idea if the economics of that are worth it vs. bigger DRAM so you have less pressure on the NVMe disk.
Coreboot and some other low-level stuff uses cache-as-RAM during early steps of the boot process.
There was briefly a product called vCage loading a whole secure hypervisor into cache-as-RAM, with a goal of being secure against DRAM-remanence ("cold-boot") attacks where the DIMMs are fast-chilled to slow charge leakage and removed from the target system to dump their contents. Since the whole secure perimeter was on-die in the CPU, it could use memory encryption to treat the DRAM as untrusted.
Yeah, you're basically betting that people will put a lot of effort into trying to out-optimize the hardware, and perhaps to some degree the OS. Not a good bet.
When SMP first came out we had one large customer that wanted to manually handle scheduling themselves. That didn’t last long.
Effort? It's not like it's hard to map an SRAM chip to whatever address you want and expose it raw or as a block device. That's a 100 LOC kernel module.
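Even without a kernel module, the "expose it raw" half is straightforward from userspace; here's a sketch via /dev/mem (the physical address and size below are made up, and this assumes the kernel hasn't locked down /dev/mem access):

```python
# Userspace sketch of poking a memory-mapped SRAM window via /dev/mem.
# The address and size are hypothetical; requires root and an unrestricted /dev/mem.
import mmap
import os

SRAM_PHYS_ADDR = 0xF000_0000   # hypothetical physical address (page-aligned)
SRAM_SIZE = 1 << 20            # hypothetical 1 MiB window

fd = os.open("/dev/mem", os.O_RDWR | os.O_SYNC)
try:
    mem = mmap.mmap(fd, SRAM_SIZE, mmap.MAP_SHARED,
                    mmap.PROT_READ | mmap.PROT_WRITE,
                    offset=SRAM_PHYS_ADDR)
    mem[0:4] = b"\xde\xad\xbe\xef"   # raw write into the mapped window
    print(mem[0:4].hex())            # read it back
    mem.close()
finally:
    os.close(fd)
```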
I'd say there is a fourth category too - things that would be perfectly fine as a simple, local program purchased once that grow over-complicated cloud features to justify a subscription model.
Examples of this would be Lens, Postman and now Insomnia. This sort of behaviour is why I use k9s and Bruno instead.
100% agree with this. And I’d put most of the software into this category.
Also, let me offer a different view on the software you use a lot and therefore want to support. The more important a piece of software or a service is to you, the more you should worry about having it as a subscription, because it can go away in a matter of hours without you being able to do anything about it.
Think about what would happen if Slack went bankrupt, or was acquired by someone who shut it down. What would all those people who rely heavily on Slack for their workflows do? Or what about GitHub?
Having spent an unhealthy amount of time thinking about this, I think it's even worse than an abstraction _level_.
I suspect that the fundamental problem with visual languages is that you have to reify _something_ as the objects/symbols of the language. The most widely used text languages tend to be multi-paradigm languages which have significant flexibility in developing and integrating new abstractions over the lifetime of projects and library ecosystems.
It's not clear to me how this can be overcome in visual languages without losing the advantages, and instead ending up with a text language that is just more spread out.
I think the solution is just to stop trying to allow Real Programming in visual languages.
Make the abstractions purely high level, and let people write Python plugins for new blocks with an app store to share them.
Visual can easily provide enough flexibility for gluing blocks together.
IFTTT, Excel, and lots of others do it perfectly well.
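A rough sketch of what "Python plugins for new blocks" could look like (the names and interface here are purely hypothetical):

```python
# Hypothetical sketch of a plugin-style "block" interface for a visual tool.
# The visual layer only wires blocks together; the logic lives in Python.

class Block:
    """A block takes named inputs and returns named outputs."""
    inputs: tuple = ()
    outputs: tuple = ()

    def run(self, **inputs):
        raise NotImplementedError


class Threshold(Block):
    inputs = ("value", "limit")
    outputs = ("above",)

    def run(self, value, limit):
        return {"above": value > limit}


class Notify(Block):
    inputs = ("above",)
    outputs = ()

    def run(self, above):
        if above:
            print("threshold exceeded")  # stand-in for a real notification
        return {}


# The "visual" part reduces to wiring outputs to inputs, in order.
def run_pipeline(blocks, seed):
    data = dict(seed)
    for block in blocks:
        data.update(block.run(**{k: data[k] for k in block.inputs}))
    return data


run_pipeline([Threshold(), Notify()], {"value": 42, "limit": 10})
```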
The issue is that programmers like to program. Mostly they like "programming in circles": making little apps to help make other little apps to explore ideas.
They see it mathematically, as a human-centered activity all about understanding, while users see machines as a box you put stuff in to make it not your problem anymore.
They're always talking about empowering users to create their own fully custom ways of using computers... But I like apps that have a canned, consistent workflow that I don't have to fuss with or maintain or reinstall.
Software like Excel has the appropriate amount of power for working on a project that uses computers but isn't really related to programming or done by developers.