You can call me crazy, or you can attack my points: do you think the first example logically follows? Do you think the second isn't wordy? Just to make sure I'm not insane, I copy-pasted the article into Pangram, and lo and behold: 70% AI-generated.
But I don't need a tool to tell me that it's just bad writing, plain and simple.
You are gaslighting. I 100% believe this article was AI generated for the same reason as the OP. And yes, they do deserve negative scrutiny for trying to pass off such lack of human effort on a place like HN!
They may, however, be obligated not to give customers access to their services at a discounted rate either: predatory pricing is, at least some of the time and in some jurisdictions, illegal.
Predatory pricing is selling something below cost to acquire/maintain market dominance.
The Claude subscription used for Claude Code is, to all appearances, being sold substantially below the cost to run it, and it certainly seems this is being done to maintain Claude Code's market dominance and force out competitors, such as OpenCode, who cannot afford to subsidize LLM inference in the same way.
It's not a matter of there being a public API, I don't believe they are obligated to offer one at all, it's a matter of the Claude Subscription being priced fairly so that OpenCode (on top of, say, gemini) can be competitive.
> Predatory pricing is selling something below cost to acquire/maintain market dominance.
Yet they have to acquire market dominance in a meaningful market first if you want to prosecute; otherwise it's just a failed business strategy. Like that company selling movie tickets below cost.
The modern consumer benefit doctrine means predatory pricing is impossible to prosecute in 99% of cases. I’m not saying it’s right, but legally it is toothless.
This is true... in the US (though there is still that 1%). Anthropic operates globally, and the US isn't the only country that ever realized it might be an issue.
The API is really expensive compared to a Max subscription! So they're probably making a lot of money (or at least losing much less) via the API. I don't think it's going anywhere. Worst case scenario they could raise the price even more.
The Claude subscription (i.e. the Pro and Max plans, not the API) is sold at what appears to be well below cost, in what appears to be a blatant attempt to preserve/create market dominance for Claude Code: it destroys competitors by making it impossible to compete without also having a war chest of money to give away.
You’re making a big assumption. LLM providers aren’t necessarily taking a loss on the marginal cost of inference. It’s only when you include R&D and training costs that the huge capital inputs are required. They’ve come out and said as much.
The Claude Code plans may not be operating at a loss either. Most people don’t use up 100% of their plan; few do. A lot of it goes idle.
Are you suggesting Anthropic has a “duty to deal” with anyone who is trying to build competitive products to Claude Code, beyond access to their priced API? I don’t think so. Especially not to a product that’s been breaking ToS.
A regulatory duty to deal is the opposite of setting your own terms. Yes, citing a ToS is acceptable in this scenario. We could only throw the ToS out if we all believed in a duty to deal.
Do other companies have a similar "duty to deal" - for example, if Microsoft or Apple ToS forbid use of open source software with their software? Or if VS Code ToS forbid people from using VS Code to work on a competitor?
I think these debates ultimately come down to what you’re making with these tools: is it documents or application interfaces? If it’s documents, then plain HTML, CSS and a touch of JS sprinkles on top works very well, as they were designed for this. If you’re making software, though, at some point you’re going to need some additional tooling to make it feasible.
> at some point you’re going to need some additional tooling to make it feasible.
I mean sure, most people will pick some kind of abstraction over parsing and constructing raw HTTP messages.
But it boggles the mind that apparently a large chunk of "developers" cannot see the insanity in writing XML to generate JavaScript which generates HTML and CSS because they want to write `<Button variant="primary">Save</Button>` rather than... `<button class="primary">Save</button>`.
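For anyone unfamiliar with what actually happens here, a toy sketch (this is not React's real implementation, and `h`/`Button` are illustrative names): JSX like `<Button variant="primary">Save</Button>` is transpiled into a plain function call, which ultimately produces the markup.

```javascript
// Toy hyperscript-style renderer: JSX such as
//   <Button variant="primary">Save</Button>
// becomes a call like
//   h(Button, { variant: "primary" }, "Save")
function h(type, props, ...children) {
  // A component is just a function; call it with its props.
  if (typeof type === "function") {
    return type({ ...props, children });
  }
  // Otherwise render a plain HTML tag with its attributes.
  const attrs = Object.entries(props || {})
    .map(([k, v]) => ` ${k}="${v}"`)
    .join("");
  return `<${type}${attrs}>${children.join("")}</${type}>`;
}

// A hypothetical "Button" component mapping variant -> class.
const Button = ({ variant, children }) =>
  h("button", { class: variant }, ...children);

console.log(h(Button, { variant: "primary" }, "Save"));
// -> <button class="primary">Save</button>
```

Which is exactly the round trip being complained about: XML-ish syntax, compiled to JavaScript calls, emitting the HTML you could have written directly.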
Like I said earlier: so much of the folly in the NodeJS community looks like bizarre adoration of early-2000s J2EE stack.
You have a language that requires no AOT compilation... ah, better invent increasingly convoluted and ever-changing build processes for it.
You're writing output that's essentially just a string to be sent over the wire... ah better create a wrapper for the wrapper that creates the service which renders the string.
But sure. That is totally a rational approach to development, and the nodejs community has never shown itself to be prone to chasing shiny useless things or cargo culting. I must just be overreacting.
> But it boggles the mind that apparently a large chunk of "developers" cannot see the insanity in writing XML to generate JavaScript which generates HTML and CSS because they want to write `<Button variant="primary">Save</Button>` rather than... `<button class="primary">Save</button>`.
I'm wondering if some of the disconnect here is that you don't have personal experience with this type of development, so you might not see what pain points it solves.
The first thing I would mention is that components encapsulate function and styling. Buttons don't illustrate this well because they're trivial. But you can imagine a `<DatePicker>` that takes a `variant` property ("range" or "single"), `month` and `year` properties, and perhaps a property called `annotations` which accepts an array of special dates and their categories (`[{date: "2026-07-04", code: "premium_rate"}, {date: "2027-07-07", code: "sold_out"} ...]`). The end result is an interactive picker that shows the desired span, with certain dates unselectable and others marked with special color codes or symbols. You're going to have a very unpleasant time implementing that with globally scoped CSS classes.
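To make the hypothetical `annotations` prop concrete, the component's internals might boil down to pure helpers like these (names and class strings are made up for illustration; a real `<DatePicker>` would do much more):

```javascript
// Sketch: reduce the annotations array to a date -> code map,
// then look up each rendered day cell to decide its styling.
function buildAnnotationMap(annotations) {
  const map = new Map();
  for (const { date, code } of annotations) {
    map.set(date, code);
  }
  return map;
}

// Decide which (hypothetical) CSS class a given day cell gets.
function classForDate(map, isoDate) {
  const code = map.get(isoDate);
  if (code === "sold_out") return "day--disabled"; // unselectable
  if (code) return `day--${code}`;                 // special colour
  return "day";                                    // plain day
}

const map = buildAnnotationMap([
  { date: "2026-07-04", code: "premium_rate" },
  { date: "2027-07-07", code: "sold_out" },
]);
console.log(classForDate(map, "2026-07-04")); // -> day--premium_rate
console.log(classForDate(map, "2027-07-07")); // -> day--disabled
console.log(classForDate(map, "2026-01-01")); // -> day
```

The point being that the component owns this mapping and its styling; callers just pass data.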
And this isn't a string sent over the wire. The "document" that the browser renders is changing continuously as you interact with it. If you were to open Chrome Devtools and look at the subtree of the DOM containing the date picker, you would see elements appearing and disappearing, gaining or losing classes and attributes, in real time as you select/deselect/skip forward/etc. That's what makes it work, rather than being a static drawing of a calendar.
I personally do not like the Javascript frontend ecosystem. It's hacks on top of hacks on top of hacks. But, do you know another way to deploy software that's cross-platform and basically free of gatekeepers? Sometimes we just have to do weird things because they're really useful.
> I personally do not like the Javascript frontend ecosystem. It's hacks on top of hacks on top of hacks. But, do you know another way to deploy software that's cross-platform and basically free of gatekeepers?
One way is what I call the "Modular MVC pattern": pure JS routing and manual DOM manipulation without any framework at all. You handle complexity in two ways: by modularizing the "controller" parts into multiple JS modules for each route, and the "view" parts into multiple HTML partials, and by using the event bus pattern if your app gets too complex (as an alternative to modern reactive frameworks like React/Vue).
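The event-bus part of that pattern can be sketched in a few lines of framework-free JS (event names and modules here are illustrative, not from any particular app):

```javascript
// Minimal event bus: controller modules publish/subscribe through one
// shared bus instead of importing each other directly.
class EventBus {
  constructor() {
    this.handlers = new Map(); // event name -> Set of callbacks
  }
  on(event, fn) {
    if (!this.handlers.has(event)) this.handlers.set(event, new Set());
    this.handlers.get(event).add(fn);
    return () => this.handlers.get(event).delete(fn); // unsubscribe fn
  }
  emit(event, payload) {
    for (const fn of this.handlers.get(event) || []) fn(payload);
  }
}

// Usage: a "quiz" controller announces a score; a "stats" controller
// reacts, without either module knowing about the other.
const bus = new EventBus();
const seen = [];
const off = bus.on("quiz:finished", (score) => seen.push(score));
bus.emit("quiz:finished", 7);
off();
bus.emit("quiz:finished", 9); // no listener any more
console.log(seen); // -> [ 7 ]
```

In the browser you could also lean on the built-in `EventTarget`/`CustomEvent` instead of rolling your own class.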
Shameless plug: I've tried to implement this exact pattern with limited success in Abhyasa Quiz App[1], a side project.
> you don't have personal experience with this type of development, so you might not see what pain points it solves.
That all depends on what you mean by "this type of development".
Do you mean development targeting a browser? Do you mean development targeting client-side interaction in a browser? Or do you mean writing JSX/React/Whatever flavour of the week is hip with the NodeJS community?
If you meant either of the first two: I have about 20 years experience.
If you meant the last: No. If I wanted to be a masochist that badly I'd buy my wife a leather whip and a strap on.
As much as I generally avoid front-end dev when I can now, at one point it was a much greater part of my work. I've written modular/reusable client-side libraries/widgets (i.e. self-contained elements that other developers then used in their own separate projects to add functionality... you know, a "component" by another name) since IE6 was not just in-use, but current and popular. So to rebut your claim: I'm well aware of the "pain points" of developing code like this for re-use.
> You're going to have a very unpleasant time implementing that with globally scoped CSS classes.
Have you ever used CSS or Bootstrap before? You know that bootstrap is meant to be a starting point for your codebase, right? Even the most bare-bones official Bootstrap "example" designs use custom CSS specific to that use-case. If you're trying to create anything beyond the most basic hello world page with nothing but bootstrap classes on your markup, you're doing it wrong.
If your argument for using Tailwind (and apparently by necessity, JSX components) is to avoid having someone write a handful of CSS rules specific to the widget you're creating, I can't help you mate.
> It's hacks on top of hacks on top of hacks. But, do you know another way to deploy software that's cross-platform and basically free of gatekeepers? Sometimes we just have to do weird things because they're really useful.
My argument isn't against using Javascript for interactivity on webpages/webapps. As I said, I've been doing it for a couple of decades now. I have my issues with JS, but for browser interaction it's mostly fine.
You see the "current ecosystem" the NodeJS/Javascript community has created, complain about it being "hacks upon hacks" and then still defend the batshit crazy stuff when someone calls it out.
I see the batshit crazy stuff and just ignore it. Just because something new exists doesn't mean you have to use it. The browser environment for JS is slowly improving, gaining native abilities that we once had to implement in libraries or from scratch... and the majority of the JS-focused community seems to continue to be obsessed with adding more and more and more layers of abstraction.
If you told 20-year-ago me that the browsers would all support a native way to implement custom elements (i.e. Web Components) that can be instantiated using regular markup in the page, it would never once have occurred to me that the JS developers of the day would find some way to avoid the built-in capabilities and instead maintain a dependency chain and build system so complex there are fucking memes about it.
> Embedding every style directly into the style attribute is also readable, and as a side benefit it doesn't need a build step just to make your styles actually work.
Critical difference: media queries are unavailable to inline styles, making it impossible to implement responsive designs this way. And anyway, CSS is so much more verbose than Tailwind that it really wouldn’t be very readable outside of toy examples.
Personally, I have used CSS since it was first created. I also have used Bootstrap and Foundation, but found them brittle and cumbersome. Now I just write 95% of styles with Tailwind.
Which everyone says is only really useful if you use it in a JSX component...
So you're writing it in XML...
But then that gets converted into JavaScript....
Which then writes out some HTML and CSS?
I will absolutely not be surprised when someone declares that the XML part of JSX is too verbose and creates a library to generate the JSX code. Fuck it who am I kidding, it probably already exists doesn't it?
It isn’t a criticism; it’s a description of what the technology is.
In contrast, human thinking doesn’t involve picking a word at a time based on the words that came before. The mechanics of language can work that way at times - we select common phrasings because we know they work grammatically and are understood by others, and it’s easy. But we do our thinking in a pre-language space and then search for the words that express our thoughts.
I think kids in school ought to be made to use small, primitive LLMs so they can form an accurate mental model of what the tech does. Big frontier models do exactly the same thing, only more convincingly.
> In contrast, human thinking doesn’t involve picking a word at a time based on the words that came before
Do we have science that demonstrates humans don't autoregressively emit words? (Genuinely curious / uninformed).
From the outset, it's not obvious that auto-regression through the state space of actions (i.e. what LLMs do when yeeting tokens) is the difference they have from humans. Though I guess we can distinguish LLMs from other models like diffusion/HRM/TRM that explicitly refine their output rather than commit to a choice and then run `continue;`.
Have you ever had a concept you wanted to express, known that there was a word for it, but struggled to remember what the word was?
For human thought and speech to work that way it must be fundamentally different to what an LLM does. The concept, the "thought", is separated from the word.
Analogies are all messy here, but I would compare the values of the residual stream to what you are describing as thought.
We force this residual stream to project to the logprobs of all tokens, just as a human in the act of speaking a sentence is forced to produce words. But could this residual stream represent thoughts which don't map to words?
It's plausible: we already have evidence that things like glitch-token representations trend towards the centroid of the high-dimensional latent space, and logprobs for tokens that represent wildly-branching trajectories in output space (i.e. "but" vs "exactly" for specific questions) represent a kind of cautious uncertainty.
Fine, that would at least teach them that LLMs are doing a lot more than "predicting the next word", given that they can also be shown a Markov model that does exactly that in about 10 lines of simple Python, with no neural nets or any other AI/ML technology.
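For the curious, here's that kind of bigram Markov "next-word predictor" sketched out (in JavaScript rather than the Python mentioned, but the idea is identical; the corpus is made up):

```javascript
// Bigram Markov model: count which word follows which in a corpus,
// then "predict" the most common follower. No neural nets involved.
function train(text) {
  const counts = new Map(); // word -> Map(nextWord -> count)
  const words = text.toLowerCase().split(/\s+/);
  for (let i = 0; i < words.length - 1; i++) {
    const [w, next] = [words[i], words[i + 1]];
    if (!counts.has(w)) counts.set(w, new Map());
    const followers = counts.get(w);
    followers.set(next, (followers.get(next) || 0) + 1);
  }
  return counts;
}

function predict(counts, word) {
  const followers = counts.get(word.toLowerCase());
  if (!followers) return null;
  // Greedily pick the most frequent follower (no sampling).
  return [...followers.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

const model = train("the cat sat on the mat the cat ran");
console.log(predict(model, "the")); // -> cat
```

Which is precisely why "predicts the next word" underdescribes frontier models: this does that too, and nothing else.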
> In contrast, human thinking doesn’t involve picking a word at a time based on the words that came before.
More to the point, human thinking isn't just outputting text by following an algorithm. Humans understand what each of those words actually mean, what they represent, and what it means when those words are put together in a given order. An LLM can regurgitate the wikipedia article on a plum. A human actually knows what a plum is and what it tastes like. That's why humans know that glue isn't a pizza topping and AI doesn't.
With a functional government, antitrust enforcement would prevent a single company from driving economy-wide price inflation out of an attempt to starve its competition. Since we don't have a functional government, we'll ungracefully take this up the ass.
Sure thing. Are they? And also, why would they do that? Do you think OpenAI wants to enter into the DRAM manufacturing business? Or were they looking for a way to take as much supply away as possible - paying for the wafers instead of finished DRAM?
My word, how lacking in imagination. Are you forgetting that there's something OpenAI does that requires lots of RAM, and that OpenAI is very much in bed with not one but two GPU makers (https://news.ycombinator.com/item?id=45521629) to which they could send the wafers to build hardware for them?
> My question is why does anybody have to be liable at all?
This question mistakes what civil law is doing. A more accurate framing would be, “why does anybody have to bear the loss?”. But of course, somebody must. So the task of civil law here is to determine who. Certain policy choices will align better or worse with a sense of fairness, better or worse with incentives that could reduce future losses, etc.
"The loss" is already performing an abstraction to create something generic that can/must be assigned. The person who died is dead regardless of the creation of that assignable loss.
If there are too many instances of people dying in such situations, then the fundamental way to solve that is to prevent such situations from existing. A specter of civil financial liability is but one way of trying to do this, and having judges create common law theories is but one way of assigning that liability. Relying on those methods to the exclusion of others is not a neutral policy choice.