Hacker News | PantaloonFlames's comments

Do you know how Google makes money?

I actually don't care. Most people don't. We care about the quality of the service. Aside from Google employees and shareholders, I assume most users would prefer a useful service that barely makes the company any money over a money-printer that's useless and a PITA to use.

I would be able to absorb your perspective better if it were structured as a bulleted list, with SUMMARY STRINGS IN BOLD for each bullet. And if you had used the word "Crucially" at least once.

That's true. Skipping compilation and using HTTP Range probably made a big difference, but we don't know, do we? Quantifying these differences, rather than just listing them, would make the post much more valuable.
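For anyone unfamiliar: a Range request asks the server for only a slice of a resource instead of the whole thing. A minimal stdlib sketch (the URL and byte range are made-up placeholders):

```python
import urllib.request

# Hypothetical URL; the Range header asks for only the first 1024 bytes.
# A server that supports it replies 206 Partial Content with just that slice.
req = urllib.request.Request(
    "https://example.com/big-file.bin",
    headers={"Range": "bytes=0-1023"},
)
print(req.get_header("Range"))  # the partial-fetch request header
```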

At first I read “enormous longterm commitments” as customers committing to OpenAI. But you are saying it’s the reverse.


You are mostly missing the point. You're saying you get value out of what OpenAI is offering you. That's not at issue here.

The question is, does OpenAI get value out of the exchange?

You touched on it ever so briefly: “as long as inference is not done at a loss”. That's it, isn't it? Or, more generally: as long as OpenAI is making money. But they are not.

There’s the rub.

It’s not only about whether you think giving them your money is a good exchange. It needs to be a good exchange for both sides, for the business to be viable.


DCE STILL WORKS?

Where!??


Yes to all of this.

Also, the “us” is ever-changing in a large enough system. People are always joining and leaving the team, so at any moment many people are approximately new, and JSON's readability lets them discover the data model more easily.


Yes. Proto makes sense when the request rate is much higher and the network is constrained.

Otherwise, JSON is sufficient.


If it's the network that's constrained and not the CPU, gzipped JSON will often beat protobufs.
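A rough way to see the effect with only the stdlib (the payload is a made-up, repetitive one, typical of API responses):

```python
import gzip
import json

# Hypothetical payload: 1000 records with repeated keys and values.
records = [{"id": i, "status": "active", "region": "us-east-1"} for i in range(1000)]
raw = json.dumps(records).encode("utf-8")
packed = gzip.compress(raw)

# gzip collapses the repeated field names that protobuf would have
# eliminated by encoding field numbers instead of key strings.
print(len(raw), len(packed))
```

The exact ratio depends on how repetitive the data is, but on key-heavy JSON like this the compressed size is a small fraction of the original.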


If TOOL_X needs $DATA and that data is not available in context, nor from other tools, then the LLM will determine that it cannot use or invoke TOOL_X. It won't try.

About the TOOL_Z and TOOL_W scenario. It sounds like you're asking about the concept of a distributed unit-of-work which is not considered by MCP.


> If TOOL_X needs $DATA and that data is not available in context, nor from other tools, then the LLM will determine that it cannot use or invoke TOOL_X. It won't try.

I didn't explain myself very well, sorry. What I had in mind is: MCP is about putting together workflows using tools from different, independent sources. But since the various tools are not designed to be composed, scenarios occur in which, in theory, you could string together $TOOL_Y and $TOOL_X, but $TOOL_Y only exposes $DATA_SUBSET (because it doesn't know about $TOOL_X), while $TOOL_X needs $DATA. So the capability would be there, if only the tools had been designed to be composed.

Of course, that's also the very strength of MCP: it allows you to compose independent tools that were not designed to be composed. So it's a powerful approach, but inherently limited.

> About the TOOL_Z and TOOL_W scenario. It sounds like you're asking about the concept of a distributed unit-of-work which is not considered by MCP.

Yes, distributed transactions / sagas / etc. Which are basically impossible to do with "random" APIs not designed for them.


Or, why not let the LLM write the tool and give it to the agent? Taking it one step further, the tool could be completely ephemeral: it could have a lifetime of exactly one chat conversation.
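A sketch of what that might look like; all the names here are made up, and real agent frameworks differ (and would sandbox the generated code rather than `exec` it):

```python
# Hypothetical ephemeral-tool registry: tools live exactly as long as
# the Conversation object that owns them.
class Conversation:
    def __init__(self):
        self.tools = {}  # name -> callable, discarded with the conversation

    def register_tool(self, name, source):
        # `source` is code the LLM emitted, defining a function named `name`.
        namespace = {}
        exec(source, namespace)  # in practice: sandbox this!
        self.tools[name] = namespace[name]

    def call(self, name, *args):
        return self.tools[name](*args)

chat = Conversation()
chat.register_tool("add_vat", "def add_vat(price):\n    return round(price * 1.2, 2)")
print(chat.call("add_vat", 10))  # tool exists only within this chat
# When `chat` goes away, so does the tool.
```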

