I really like this way of introducing a new feature/service. Straight to the point: it explains what the feature does, which problem it solves, gives practical examples, and walks the reader through them. So often when I read about a new feature/service I'm left with more questions than I started with, but this was great!
One of the more interesting things about WL is that Stephen Wolfram is genuinely a daily user of the software his company makes, and he has the final say on what ships and in what form. They used to livestream his meetings reviewing potential new features on YouTube, which is an interesting watch. It didn't make me want to work there, but I did feel like he cared very much. Quite Jobsian, dare I say.
As a French person who spent part of my childhood at school in Belgium, I was quite amused by their numbers.
My thought as a 6 year old was "aw, are soixante-dix, quatre-vingt, and quatre-vingt-dix too complicated for you?"
Even now, while I think the French numbers objectively make no sense (even the countries that do count in twenties are at least more consistent than we are), I can't help but find the Swiss and Belgian numbers "cute". Like "Baby's first 70 to 99".
And for whatever reason, I don't have the same opinion about 70-99 in English, Portuguese or Spanish.
Edit: just to be clear, I think my feelings about this are absurd, but they're too deeply ingrained and decades old to shed completely.
It’s a well-known phenomenon that, with the internet and modern media, the large countries’ version of a language can affect the speech of the smaller countries using that language. Think of kids in Portugal today growing up using lots of Brazilian words, to their parents’ dismay, or Americanisms slipping into UK speech. This makes me wonder whether any young Walloon French speakers have started to pick up the standard French higher numerals.
This is already true for just UI vs. API. It’s incredible that we weren’t willing to put the effort into building good APIs, documentation, and code for our fellow programmers, but we are willing to do it for AI.
I think this can kinda be explained by the fact that agentic AI more or less has to be given documentation in order to be useful, whereas other humans working with you can just talk to you if they need something. There's a lack of incentive in the human direction (and in a business setting that means priority goes to other stuff, unfortunately).
In theory AI can talk to you too but with current interfaces that's quite painful (and LLMs are notoriously bad at admitting they need help).
> agentic AI more or less has to be given documentation in order to be useful, whereas other humans working with you can just talk to you if they need something. ... In theory AI can talk to you too but with current interfaces that's quite painful (and LLMs are notoriously bad at admitting they need help).
Another framing: documentation is talking to the AI, in a world where AI agents won't "admit they need help" but will read documentation. After all, they process documentation fundamentally the same way they process the user's request.
I also think it makes a difference that an AI agent can read the docs very quickly and doesn't typically care about formatting and other presentation-level details that humans have to care about, whereas a human isn't going to read it all, and may read very little of it. I've been at places where we invested substantial time documenting things, only to have the docs glanced at maybe a couple of times before becoming outdated.
The idea of writing docs for AI (but not humans) does feel a little reflexively gross, but, as Spock would say, it does seem logical.
The feedback loop from potential developer users of your API is excruciatingly slow, and it's typically not a process an API developer wants to engage in. Recruit a bunch of developers to read the docs and try it out? See how they used it after days or weeks? Ask them what they had trouble with? Organize a hackathon? Yuck. AI, on the other hand, gives you immediate feedback on the usability of your “UAI”. It builds something in under a minute, and you can see what mistakes it made. After you improve the docs or the API itself, you can effectively wipe its memory by clearing the context and see whether what you did helped. It’s the difference between debugging a punchcard-based computing system and one with a fully featured REPL.
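Something like this loop, roughly (a minimal sketch; `ask_llm()` and the docs path are hypothetical stand-ins for whatever model client and documentation you actually have, not any particular vendor's API):

```python
# Rough sketch of the fast feedback loop described above: feed the current
# docs to a model with a fresh context, ask it to write client code against
# the API, and inspect what it gets wrong.

from pathlib import Path

def ask_llm(prompt: str) -> str:
    """Stand-in for a real chat-completion call (fresh context every run)."""
    raise NotImplementedError("wire this up to your model of choice")

def try_docs(docs_path: str, task: str) -> str:
    docs = Path(docs_path).read_text()
    prompt = (
        "Here is the documentation for an API:\n\n"
        f"{docs}\n\n"
        f"Using only what is documented above, write code that {task}."
    )
    # Each call starts from a clean context, so edits to the docs are the
    # only variable between runs -- the "memory wipe" mentioned above.
    return ask_llm(prompt)

# Typical loop: tweak the docs or API, rerun, eyeball the mistakes.
# attempt = try_docs("docs/quickstart.md", "creates a user and lists their orders")
# print(attempt)
```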
Yeah, this is so true. Well-designed APIs are also already almost good enough for AI. There was always a ton of value in good API design, even before LLMs. Yet a lot of people still said, for varying reasons, let's just ship slop and focus elsewhere.
After reading the README, the only missing thing seems to be the equivalent of Dataview from Obsidian. Will wait for something like it before considering switching.
Wow, this is such a great idea. Also, the controls on mobile were top-notch. No issues with random zooming, text selection, weird scrolling, etc. Felt like a downloaded app.
In Clojure this isn't syntax per se: `defn-` and `defn` are both ordinary identifiers defined in the standard library. Still, I think it's useful precedent for understanding how other people have thought about the minus character.
personally, i like that raku goes the other way, with exported bits of the interface explicitly tagged using `is export` (which also allows for the creation of selectably importable subsets of the module through keyed export/import with `is export(:batteries)`/`use TheModule :batteries`, e.g. for a more featureful interface with a cost not every user of the module wants to pay).
it feels more natural to me to explicitly manage what gets exported and how at a different level than the keyword used to define something. i don't dislike rust's solution per se, but if you're someone like me who still instinctually does start-of-line relative searches for definitions, suddenly `fn` and `pub fn` are separate namespaces (possibly without clear indication which has the definition i'm looking for)
Actually, a module can implement any export heuristics by supplying an EXPORT subroutine, which takes positional arguments from the `use` statement, and is expected to return a Map with the items that should be exported. For example:
sub EXPORT() { Map.new: "&frobnicate" => &sum }
would import the core's "sum" routine, but call it "frobnicate" in the imported scope.
Note that the EXPORT sub can also be a multi, if you'd like different behaviour for different arguments.
neat! i've never needed more than i could get away with by just sneaking the base stuff into the mandatory exports and keying the rest off a single arg, but that'll be handy when i do.
For me, domain modeling means capturing as much information about the domain you are modeling as possible in the types and data structures you use. Most of the time that ends up meaning using unions to make illegal states unrepresentable. But I have not seen a database-native approach to saving union types to databases, for example, so in that case a separate domain layer becomes mandatory.
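To illustrate what I mean (a toy sketch; the shipment example and the column layout are invented for illustration, not from any particular codebase):

```python
# A union makes illegal states unrepresentable: you can't be "shipped"
# without a tracking number or "delivered" without a timestamp. Because the
# database has no native union type, you end up writing the mapping layer
# (to_row / from_row) by hand -- that's the extra domain layer.

from dataclasses import dataclass
from datetime import datetime
from typing import Union

@dataclass
class NotShipped:
    pass

@dataclass
class Shipped:
    tracking_number: str

@dataclass
class Delivered:
    tracking_number: str
    delivered_at: datetime

# Every value is exactly one of these shapes, nothing in between.
ShipmentState = Union[NotShipped, Shipped, Delivered]

def to_row(state: ShipmentState) -> dict:
    """Flatten the union into a tagged row the database can store."""
    if isinstance(state, NotShipped):
        return {"status": "not_shipped", "tracking_number": None, "delivered_at": None}
    if isinstance(state, Shipped):
        return {"status": "shipped", "tracking_number": state.tracking_number, "delivered_at": None}
    return {"status": "delivered", "tracking_number": state.tracking_number,
            "delivered_at": state.delivered_at.isoformat()}

def from_row(row: dict) -> ShipmentState:
    """Rebuild the union from the row; the status tag decides which shape applies."""
    if row["status"] == "not_shipped":
        return NotShipped()
    if row["status"] == "shipped":
        return Shipped(row["tracking_number"])
    return Delivered(row["tracking_number"], datetime.fromisoformat(row["delivered_at"]))
```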
I am not sure whether the videos are representative of real-life performance or a marketing stunt, but it sure looks impressive. Reminds me of the robot arm in Iron Man 1.
It's an impressive demo, but perhaps you are misremembering Jarvis from Iron Man, which is not only far faster but effectively a full AGI system even at that point.
Sorry if this feels pedantic, perhaps it is. But it seems like an analogy that invites pedantry from fans of that movie.
The robot arms in the movie are implied to have their own AIs driving them; Tony speaks to the malfunctioning one directly several times throughout the movie.
Jarvis is AGI, yes, but is not what's being referred to here.
Not specifically trained on it, but the vision models have most likely seen it. Vision models like Gemini Flash/Pro are already good at vision tasks on phones[1], like clicking on UI elements and scrolling to find things. The planning of which steps to perform is also quite good with the Pro model (slightly worse than GPT-4o, in my opinion).