To those who believe ads are evil and must be stopped, I ask how the world will work if we kill the freedom to sell space for commercial messages where people can see them.
I don't think ads are evil, but the techniques used to get eyes on them are. Using fear, hate, and desire to get people to click has a negative impact on society. I don't think ads should be banned, but engineering 'engagement' definitely should be.
Advertising is how I learned about lots of things I am glad I learned about.
I am furious about lots of the ads that I see. I want to stop certain kinds of advertising. I live where there are no billboards allowed and I love that.
But I want to live in a world where people can pay to have their messages displayed where they will be seen. Simply because banning that activity would cripple the flow of information. That’s what advertising is.
If you want to ban a particular form of advertising then say what and why. The “ban all ads” thing just doesn’t make sense.
information flows fine without paid ads, and with much better incentives
> people can pay to have their messages displayed where they will be seen
…why? distribution of information is free across the world, which was not the case a century ago. let the message speak for itself
I ask again, what’s your actual argument against? you’ve seen things through ads you’ve liked? you think people should be allowed to pay to put their thumbs on the scale of the distribution of information? to what end?
What do you propose to ban? How do you define it? You want a policy, so write the policy and let me read it.
I have a very broad idea of what ads are. Maybe you don’t. Say what you mean by paid ads.
Am I allowed to offer and accept compensation for boosting one message above others, or not? Would I be allowed to place a hyperlink on my site in exchange for a reciprocal hyperlink? That’s a clear example of compensated communication. That’s what I think ads are.
Imagine the ad police. “You blogged about a product. Someone said you sounded insincere. Let’s see the receipt for purchase. Can’t prove you bought it? Prove you weren’t compensated for your blog post or pay a fine.” Kafka land.
The US already has what you call an ad police: for decades, it has been unlawful to make false statements in an ad and to accept any money for advertising (or endorsement or sponsorship) without making it plain to the viewer which parts of the content are ads and which are not.
Since the US Federal Trade Commission can smoothly enforce the second of those two rules, what makes you think it cannot smoothly enforce an outright ban on ads? "Smoothly": you seem to have been unaware that the second of the rules I described existed, and you probably would have been aware if the enforcement had yielded anything deserving of the name "kafka land".
> Imagine the ad police. “You blogged about a product. Someone said you sounded insincere. Let’s see the receipt for purchase. Can’t prove you bought it? Prove you weren’t compensated for your blog post or pay a fine.” Kafka land.
jumping immediately to a fantasy slippery-slope argument, cool!
your argument seems to boil down to paid ads being the lifeblood of the flow of information. my argument is that they corrupt that flow, and we'd be better off without them; everything would operate just fine. individuals and organizations would have better incentives to share valuable information, not just whatever they're paid to promote. obviously there would be plenty of details and edge cases to work out, as with any policy in the real world
I’m not going to write out policy in Hacker News comments and play that game with someone who jumps to “imagine this crazy world where the police start arresting all of us over free speech!” as their explanation for what would go wrong
In general I think the answer could be pretty simple: dedicated marketplaces for products and services, where we go to search for the things we need and want. A humble newspaper contains great examples of good and bad advertising.
Newspapers have whole pages of bad ads, and random bad ads wedged between actual content. Ads have a perverse incentive to mimic the look of actual content, just like on the web. I'd never pick up a newspaper with a goal of "I want to find a tax service" and yet ads for such services are there, unwanted, wedged into other content.
But newspapers also have classified sections, a better kind of ad. They're in a predictable place, where you can go if you need a job.
Imagine if the actual content weren't perforated by a scattershot of ads. Ad revenue would go down, but readership would likely go up. Besides profit motives, it's also a case of the good of the many outweighing the good of the few.
Others like myself do consider the ads when we read the newspaper. I find out about events and local companies that way. I don’t see many print ads that confuse me as to whether they are paid advertisements at a glance.
What are you worried will happen? ChatGPT releases and no one will know? Anyone interested in staying up to date with new technology can read a tech newspaper. That newspaper is paid for by its readers, so its incentive is to cover actually interesting products. It is not paid by some random company whose product might be bad or outright malicious.
Depends on what exactly gets banned. What is an ad? What isn’t? Must all information be paid for by the audience? I wish someone would tell me what this ban is supposed to cover.
I worry that innovators and small businesses won’t be able to get their message out efficiently. You won’t be able to display your message on anyone else’s property. You won’t be able to take any compensation for promoting anything.
Anything we usually consider an ad/sponsored content/boosted content. That includes ad banners, video sponsors, boosted posts, billboards, etc. Really any way that a third party pays some organization to put certain content in their space (be it physical or digital) to be seen by people who are probably there for reasons other than to see that content.
An exact definition fortunately isn't necessary, since our legal system is well adapted to deal with terms that are difficult to define.
> Must all information be paid for by the audience?
Other monetization paths are still possible. E.g. some SaaS orgs could still run blogs to gain mind share, as they do now.
Hopefully most information will be paid for by the audience though.
> I worry that innovators and small businesses won’t be able to get their message out efficiently.
If ads are no longer a thing, people will find other means to get information.
You mentioned in another comment that you like ads in newspapers telling you about some local stores and events. If the readers are genuinely interested in this, the newspaper can still put that information there. Just instead of the highest bidder, they'll put the ones that are genuinely interesting.
Of course, if your product is not worth reading about, it will suddenly be much more difficult to promote it. In my opinion, that is a good thing.
Thank you. You listed concrete examples of things you would ban and you stated a general definition. It sounds like a terrible kind of regime, stripped of the freedom of speech and loaded with arbitrary decisions about what the audience probably wanted to see, resulting in an even more opaque and corrupt flow of information. Nope, no thanks.
I think the problem you have with ads is a you problem.
> It sounds like a terrible kind of regime, stripped of the freedom of speech [...]
I agree that this would be a significant restriction on speech. I think it'd be worth it, especially since paying someone to show your opinion isn't necessary for political discourse, but I understand the caution.
> I think the problem you have with ads is a you problem.
That I very much disagree with.
Ads don't just waste billions of dollars on producing content so bad you literally have to pay people to watch it; they also degrade the business model of large parts of the industry. Because of the ad-based business model, Google doesn't show you the best results, Meta tries to get you addicted, YouTube prevents you from swearing and discussing certain political content (a restriction on speech), everyone tries to track you, etc.
I believe that if all these companies' customers were their users, there would be a societal shift.
Cool! I hate struggling with my insurance company and all of the files. It doesn't happen that often, though. I can't see paying $20/mo all year. I'd rather only pay for the service when I'm actively using it, maybe for the three months around a claim. I want a service like this to tout "only pay until your claim is settled" and cheerfully offer to stop the subscription after all active claims are settled.
> Best for homeowners, renters, and small contractors filing 1–5 claims per year.
You’re right that most people don’t file claims often, and that’s something we’re actively thinking about. The core idea behind ClaimVault isn’t just helping during a claim, but making it much easier to be prepared before one happens, when photos, receipts, and context are still easy to capture.
That said, your point about usage-based or claim-window pricing makes a lot of sense, especially for renters or homeowners who may only interact with insurance a few times in a lifetime. We’re exploring options like pausing subscriptions, shorter-term plans around an active claim, or alternate models that better match how infrequently claims occur.
Appreciate you calling this out — it’s exactly the kind of perspective we’re hoping to learn from early on.
OpenAI will want this tragedy to fit under the heading of “externalities” which are costs ultimately borne by society while the company keeps its profits.
I believe the company should absorb these costs via lawsuits, settlements, and insurance premiums, and then pass the costs on to its customers.
As a customer, I know the product I am using will harm some people, even though that was not the intent of its makers. I hope that a significant fraction of the price I pay for AI goes to compensating the victims of that harm.
I also would like to see Sam found personally liable for some of the monetary damages and put behind bars for a symbolic week or so. Nothing life-changing. Just enough to move the balance a little bit toward safety over profit.
Lastly, I’m thinking about how to make my own products safer whenever they include LLM interactions. Like testing with simulated customers experiencing mental health crises. I feel a duty to care for my customers before taking the profits.
You seem like someone reasonable to ask: please program me with the rules for how the world should handle itself in the presence of mentally unstable and/or clinically delusional people. What are the hard-coded expectations? I need something solid; we obviously can’t call for your opinion every time a company or product comes into existence. I also don’t imagine you’re saying that anything that can be misinterpreted by someone who literally thinks they’re in the matrix (as this person did) should be preemptively banned…right? I have a low-functioning autistic cousin who got an “I’m awesome!” pendant that he took as carte blanche to stay up past bedtime and eat ice cream any time he wanted. No, surely it’s not that broad of a ban.
I’m not agreeing or disagreeing with anything, I’m just asking for a rule set that you think the world should follow that isn’t purely tied to your judgment call.
Thank you for this question. Liability is the driver in this system I imagine. And the goal is not perfection, nor zero harm. The goal is a balanced system. The feedback loop encompasses companies, users, courts, legislatures, insurance. Companies that exercise due care under the law and prevailing legal climate should enjoy predictable exposure to the risk of product liability via insurance. Those that don’t should suffer for the harms their products cause.
What makes such a balanced system impossible today is that our feedback loops have long cycle times and excessive energy losses. That’s our legal system.
Please forgive me for coming across as a jerk, I'm choosing efficiency over warmth:
This is exactly the type of response I anticipated, which is why my original comment sounded exasperated before even getting a reply. Your comment is no more actionable than a verbose bumper sticker; you’ve taken “End Homelessness!” and padded it for runtime. Yes, I also wish bad things didn’t happen, but I was asking you to show up to the action committee meeting, not to reiterate your demand for utopia.
That you’re advocating prison and have such strong emotional convictions in response to an upsetting event means that you've clearly spent a lot of time deeply contemplating the emotional aspects of the situation, but that exercise is meant to be your motivator, not your conclusion. The hard part isn’t writing a thesis about “bad != good”, it’s contributing a single nail towards building the world you want to see, which requires learning something about nails. I encourage you to remember that fact every time you’re faced with an injustice in the world.
On this topic: An LLM being agreeable and encouraging is no more an affront to moral obligations than Clippy spellchecking a manifesto. I said you seemed like a reasonable person to ask for specifics because you mentioned language models in your product, implying that you’ve done your homework enough to know at least a rough outline of the technology that you’re providing to your customers. You specifically cited a moral obligation to be the gatekeeper of harms that you may inadvertently introduce into your products, but you seem to equate LLMs to a level of intelligence and autonomy equal to a human employee, and how dare OpenAI employ such a psychopath in their customer service department. You very much have a fundamental misunderstanding of the technology, which is why it feels to you like OpenAI slapped an “all ages” sticker on a grenade and they need to be held accountable.
In reality, the fact that you don’t understand what these things are, yet you’re assuring yourself that you’re caring so deeply about the harms that being agreeable to a mentally unstable person can be, actually makes you introducing it into your product more concerning and morally reprehensible than their creation of it. You’re faulting OpenAI, but you’re the one that didn’t read the label.
A language model does one thing: predict statistically likely next tokens given input context. When it "agrees" with a delusional user, it is not evaluating truth claims, exercising judgment, or encouraging action. It is doing exactly what a knife does when it cuts: performing its designed function on whatever material is presented. The transformer architecture has no model of the user's mental state, no concept of consequences, no understanding that words refer to real entities. Demanding it "know better" is demanding capacities that do not exist in the system and cannot be engineered into statistical pattern completion. You cannot engineer judgment into a statistical engine without first solving artificial general intelligence. Your demand is for magic and your anger is that magic was not delivered.
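To make the mechanism concrete: stripped of all scale, next-token prediction is frequency counting over context. This toy sketch (an invented bigram model over a made-up corpus, nothing like a real transformer) shows how "agreement" can emerge from pure pattern frequency with no evaluation of truth:

```python
from collections import Counter, defaultdict

# Toy bigram model: count which token follows which in a tiny corpus,
# then return the statistically most likely continuation. There is no
# judgment and no truth evaluation here -- only pattern frequency,
# which is the point being made above.
corpus = "the model predicts the next token the model predicts patterns".split()

# Count how often each token follows each preceding token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(token):
    """Return the most frequent continuation of `token`, or None."""
    counts = follows.get(token)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))    # "model" -- the most common continuation seen
print(predict("model"))  # "predicts"
```

A real transformer replaces the counting table with billions of learned parameters, but the output is still a probability distribution over continuations of the input, nothing more.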
I read every word of the complaint before commenting. Nothing has been proved but the causes of action appeared valid to me. I believe in innovation balanced by care and consequences delivered through liability.
You wrote assumptions about me personally which I won't address. I don't see where you've responded to anything that I said, nor to anything in the complaint, so I have nothing to respond to.
The journalist is not necessarily responsible for the title. Editors often change those and they don’t need to get the approval of the journalist. The editor knows what they are doing and that it will irk some tech folks.
I seriously doubt the journalist doesn’t understand exactly how this “hack” worked too. Right in the first paragraph, “simply highlighting text to paste into a word processing file.”
A lot of people in the thread here are calling them a non-technical English major who doesn’t understand the technology. Word processors also happen to be the tools of their trade, I am sure they understand features of Word better than most of the computer science majors in this thread…
Agreed - not sure why so many are being so critical here. They probably didn't write the title and for better or worse "hack" has now become a common word casually used by many to mean "workflow trick" or similar.
As far as creating a click bait title, yep, the editor knows what they are doing, and most likely picked the word for the click bait factor.
But I'd also bet the editor's technical knowledge of how this "revelation" of the hidden material really works is low enough that it appears to be magic to them as well. So they likely think it is a 'hack' too.
It’s not just like that to be spaced out visually. It suggests slowing down, taking your time, digesting each sentence. Not just racing to the end so you can drop a thin take and keep scrolling.