Hacker News | 7thpower's comments

Langfuse has been my favorite LLM observability solution so far. Hopefully this acquisition makes it better, not worse.

I love Langfuse, it is my go-to.

By forcing the poorest to disclose their personal health issues?

Those are… actually some very good questions.


This is essentially what the claude code and codex teams have been preaching, right?


This is great news for mice who have something vaguely similar to Alzheimer’s.


Of which there are none except for the few we tried to genetically modify so they kinda get something maybe sorta similar


But there’s good news. They’re all going to be fine, except for the ones who aren’t.

Merry Christmas!


My main beef with Mistral is that they don't bother to respond to customer inquiries for products they hide behind "reach out for pricing" terms, so even if they were better than SoTA it wouldn't really matter.


I absolutely loathe dealing with sales people.

I will pay a premium for an inferior product or service if it means I don't have to deal with sales people.


Agreed. In this case the offering just fit neatly into a non-core stack we had designed and displaced a bunch of stuff we didn't want to build ourselves.

I also hate dealing with sales people and am not going to reach out to them via another avenue as they will try and posture as if they’re doing us a huge favor (in contrast to me begging gdb for gpt4 api access).


What are you talking about? 5.2 literally just came out.


5.2-codex just came out. You've been able to use Codex with regular 5.2 for a week or so.


I have a family license and am more or less stuck with it, but for my business I will be moving things over to gsuite so I can be price gouged by them instead. It will cost more, but I’ll have Gemini, which is actually useful.

The last straw, aside from the price increases, was switching my office.com landing page to copilot. It feels like a new low, even for Microsoft.

You just lost $6/mo., Microsoft. I hope it was worth it.


What led you to that conclusion?


Zod's validation errors are awful, the JSON schema it generates for LLMs is ugly and often confusing, the type structures Zod creates are often unintelligible, and there's no good way to pretty print a schema when you're debugging. Things are even worse if you're stuck with zod/v3.


None of this makes a lot of sense. Validation errors are largely irrelevant for LLMs and they can understand them just fine. The type structure looks good for LLMs. You can definitely pretty print a schema at runtime.

This all seems pretty uninformed.


What's wrong with Zod validation errors?


And what makes this different? What makes it LLM-native?


It generates schemas that are strict by default while Zod requires you to set everything manually.

This is actually discussed in the linked article (README file).


That's not true based on zod docs. https://zod.dev/api?id=objects

Most of the claims you're making against Zod are inaccurate. The readme feels like false claims by AI.


It seems to be true to me. And aside from the API stuff (because I am far from an expert user of Zod) all of this has been carefully verified.


1. Zod's documentation, such as it is
2. Code examples

