Hacker News | metayrnc's comments

I really like this way of introducing a new feature/service: straight to the point, explains what it does and which problem it solves, gives practical examples, and walks the reader through them. So many times when I read about a new feature/service I am left with more questions than I started with, but this was great!


One of the more interesting things about WL is that Stephen Wolfram is a genuine daily user of the software his company makes, and he has the final say on what ships and in what form. They used to livestream his meetings reviewing potential new features on YouTube; an interesting watch. It didn't make me want to work there, but I did feel like he cared very much. Quite Jobsian, dare I say.


> Quite Jobsian, dare I say.

Good for product; not so good for people.

I am told that he gave a great deal of agency to people he trusted, though.

In my career, I ran into two [brilliant] individuals who had, at one time, worked with Jobs.

They both hated him.


“Pure numbers and French are not compatible”

Yep that checks out


Sixty-ten-eight! Sixty-ten-nine! Four-twenties!

1999 == One thousand, nine hundreds, four twenties, ten, nine.

I studied French in grade school for over ten years and I love it. But the way numbers convert into language is wild. I tease it with love.
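For anyone checking the math, the decomposition really does add up; a quick sketch (illustrative arithmetic only, not actual French grammar rules):

    # mille neuf cent quatre-vingt-dix-neuf, read literally:
    # thousand + nine hundreds + four twenties + ten + nine
    assert 1000 + 9 * 100 + 4 * 20 + 10 + 9 == 1999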


Switzerland and Belgium got them right!


As a French person who went to school in Belgium for a bit as a kid, I was quite amused by their numbers.

My thought as a 6 year old was "aw, are soixante-dix, quatre-vingt, and quatre-vingt-dix too complicated for you?"

Even now, while I think the French numbers make objectively no sense (even the countries that do count in 20s are at least more consistent than us), I can't help but find the Swiss and Belgian numbers "cute". Like "Baby's first 70 to 99".

And for whatever reason, I don't have the same opinion about 70-99 in English, Portuguese or Spanish.

Edit: just to be clear, I think my thoughts about it are absurd, but they're too deeply ingrained and decades old to shed completely.


It’s a well-known phenomenon that, with the internet and modern media, a large country’s version of a language can affect the speech of smaller countries using that language. Think kids in Portugal today growing up using lots of Brazilian words to their parents’ dismay, or Americanisms slipping into UK speech. This makes me wonder if any young Walloon French speakers have started to pick up standard French higher numerals.


> Sixty-ten-eight! Sixty-ten-nine! Four-twenties!

https://www.youtube.com/watch?v=8Ze6ZMkT2Z4 :-)


US time: a quarter till 8.


= 775¢


"Four twenties and ten" is better than the Danish "five minus a half, times twenty".


But the history of why we stuck to four-twenties sort of makes it worse.

We were allegedly headed toward sanity, but l'Académie was like "actually, let's stick to soixante-dix, quatre-vingt, and quatre-vingt-dix".


Good grief, it gets worse. It's "half-third" [an ordinal meaning 2½, as in "half past two"] times twenty: 2½ × 20 = 50.


That is so cursed.

I love it!


This is already true for just UI vs. API. It’s incredible that we weren’t willing to put the effort into building good APIs, documentation, and code for our fellow programmers, but we are willing to do it for AI.


I think this can kinda be explained by the fact that agentic AI more or less has to be given documentation in order to be useful, whereas other humans working with you can just talk to you if they need something. There's a lack of incentive in the human direction (and in a business setting that means priority goes to other stuff, unfortunately).

In theory AI can talk to you too but with current interfaces that's quite painful (and LLMs are notoriously bad at admitting they need help).


> agentic AI more or less has to be given documentation in order to be useful, whereas other humans working with you can just talk to you if they need something. ... In theory AI can talk to you too but with current interfaces that's quite painful (and LLMs are notoriously bad at admitting they need help).

Another framing: documentation is talking to the AI, in a world where AI agents won't "admit they need help" but will read documentation. After all, they process documentation fundamentally the same way they process the user's request.


I also think it makes a difference that an AI agent can read the docs very quickly and doesn't typically care about formatting and other presentation-level things that humans have to care about, whereas a human isn't going to read it all, and may read very little of it. I've been at places where we invested substantial time documenting things, only to have it be glanced at maybe a couple of times before becoming outdated.

The idea of writing docs for AI (but not humans) does feel a little reflexively gross, but, as Spock would say, it does seem logical.


The feedback loop from potential developer users of your API is excruciatingly slow, and typically not a process an API developer would want to engage in. Recruit a bunch of developers to read the docs and try it out? See how they used it after days or weeks? Ask them what they had trouble with? Organize a hackathon? Yuck. AI, on the other hand, gives you immediate feedback on the usability of your “UAI”. It makes something in under a minute, and you can see what mistakes it made. After you make improvements to the docs or the API itself, you can effectively wipe its memory by clearing out the context and see if what you did helped. It’s the difference between debugging a punchcard-based computing system and one with a fully featured REPL.


Yeah, this is so true. Well-designed APIs are also already almost good enough for AI. There really was always a ton of value in good API design before LLMs. Yet a lot of people still said, for varying reasons, "let's just ship slop and focus elsewhere."


We are only willing to have the LLM generate it for AI. Don't worry, people are writing and editing less.

And all those tenets of building good APIs, documentation, and code run counter to the incentives that produce enshittified APIs, documentation, and code.


Is there a link showing the email with the prompt?


After reading the README, the only missing thing seems to be the equivalent of Dataview from Obsidian. Will wait for something like it before considering switching.


Speaking of which, have you seen the new Bases feature in Obsidian? https://help.obsidian.md/bases

Reminiscent of Dataview.


That looks awesome!


Highly recommend this YouTube channel for anyone interested in the problem-solving capabilities of these birds.

https://youtu.be/A5YyTHyaNpo?si=cLj4e4heV7kiXq5v


Wow, this is such a great idea. Also, the controls on mobile were top-notch: no issues with random zooming, text selection, weird scrolling, etc. It felt like a downloaded app.


I like the final approach. What about

    -def sayHi()

or

    def- sayHi()

I feel like having a minus communicates the intent of taking the declaration out of the public exports of a module.


There's some prior art here from Clojure, where defn- creates private definitions and defn public ones:

https://clojuredocs.org/clojure.core/defn-

In Clojure this isn't syntax per se: defn- and defn are both normal identifiers defined in the standard library, but, still, I think it's a useful precedent for understanding how other people have thought about the minus character.


That's a clever way to think of "-". :) I'll think about that.


It might be from me being so used to it, but I do like Elixir’s `def`/`defp` second best to Rust’s `pub`


personally, i like that raku goes the other way, with exported bits of the interface explicitly tagged using `is export` (which also allows for the creation of selectably importable subsets of the module through keyed export/import with `is export(:batteries)`/`use TheModule :batteries`, e.g. for a more featureful interface with a cost not every user of the module wants to pay).

it feels more natural to me to explicitly manage what gets exported and how at a different level than the keyword used to define something. i don't dislike rust's solution per se, but if you're someone like me who still instinctually does start-of-line relative searches for definitions, suddenly `fn` and `pub fn` are separate namespaces (possibly without clear indication which has the definition i'm looking for)


Actually, a module can implement any export heuristics by supplying an EXPORT subroutine, which takes positional arguments from the `use` statement, and is expected to return a Map with the items that should be exported. For example:

    sub EXPORT() { Map.new: "&frobnicate" => &sum }
would import the core's "sum" routine, but call it "frobnicate" in the imported scope.

Note that the EXPORT sub can also be a multi, if you'd like different behaviour for different arguments.


neat! i've never needed more than i could get away with by just sneaking the base stuff into the mandatory exports and keying the rest off a single arg, but that'll be handy when i do.


For me, domain modeling means capturing as much information as possible about the domain you are modeling in the types and data structures you use. Most of the time that ends up meaning using unions to make illegal states unrepresentable. For example, I have not seen a database-native approach to saving union types. In that case, a separate domain layer becomes mandatory.

For context: https://fsharpforfunandprofit.com/posts/designing-with-types...
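To make the idea concrete, here is a minimal sketch in Python (the linked article uses F#; the Unshipped/Shipped/Delivered types below are invented purely for illustration):

    from dataclasses import dataclass
    from typing import Union

    # Each state carries only the fields that are valid for it, so an
    # "unshipped order with a tracking number" simply cannot be expressed.
    @dataclass
    class Unshipped:
        pass

    @dataclass
    class Shipped:
        tracking_number: str

    @dataclass
    class Delivered:
        tracking_number: str
        signed_by: str

    ShippingState = Union[Unshipped, Shipped, Delivered]

Persisting a value like this in a relational database usually means hand-rolling a mapping (e.g. a tag column plus nullable fields), which is why the extra domain layer ends up being mandatory.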


I am not sure whether the videos are representative of real-life performance or a marketing stunt, but it sure looks impressive. Reminds me of the robot arm in Iron Man 1.


AI demos and even live presentations have exacerbated my trust issues. The tech has great uses, but there is no modesty from the proprietors.


Google in particular has had some egregiously fake AI demos in the past.


> Reminds of the robot arm in Iron Man 1.

It's an impressive demo, but perhaps you are misremembering Jarvis from Iron Man, which is not only far faster but effectively a full AGI system even at that point.

Sorry if this feels pedantic, perhaps it is. But it seems like an analogy that invites pedantry from fans of that movie.


The robot arms in the movie are implied to have their own AIs driving them; Tony speaks to the malfunctioning one directly several times throughout the movie.

Jarvis is AGI, yes, but is not what's being referred to here.


Ah good point!


i thought it was really cool when it picked up the grapes by the vine

edit: it didn't.


Here it looks like it's squeezing a grape instead: https://www.youtube.com/watch?v=HyQs2OAIf-I&t=43s A bit hard to tell whether it remained intact.


The leaf on the darker grapes looks like a fabric leaf, I'd kinda bet they're all fake for these demos / testing.

Don't need the robot to smash a grape when we can use a fake grape that won't smash.


The bananas are clearly plastic and make a "doink" noise when dropped into the bowl.


Haha show the whole room and work either on a concrete floor or a transparent table.

This video reeks of the same shenanigans as perpetual motion machine videos.


welp i guess i should get my sight checked


And how it just dropped the grapes, as well as the banana. If they were real fruits, you wouldn't want that to happen.


I remember a cartoon where a quality inspection guy smashes bananas with a "certified quality" stamp before they go into packaging.


[flagged]


This is, nearly exactly, like saying you've seen screens slowly display text before, so you're not impressed with LLMs.

How it's doing it is the impressive part.


The difference is the dynamic nature of things here.

Current arms and their workspaces are calibrated to the millimeter; here it's messier.

Older algorithms are also more brittle than having a model do it.


For the most part that's been on known objects; these are objects it has not seen.


Not specifically trained on them, but the vision models have most likely seen them. Vision models like Gemini Flash/Pro are already good at vision tasks on phones[1], like clicking on UI elements and scrolling to find things. The planning of what steps to perform is also quite good with the Pro model (slightly worse than GPT-4o, in my opinion).

1. A framework to control your phone using Gemini - https://github.com/BandarLabs/clickclickclick


That's a really cool framework you've linked.

