It's the frictionless aspect of it. It requires basically no user effort to do some serious harassment. I would say there's a spectrum of effort that affects who is liable, along with a cost/benefit analysis of the possible safeguards. If users were required to give paragraph-long jailbreaks to achieve this and xAI had implemented ML filters, then I think there could be a more reasonable case that xAI wasn't being completely negligent here. Instead, it looks like almost no effort was put into restricting Grok from doing something ridiculous. The cost here is restricting AI image generation, which isn't necessarily that much of a burden on society.
Putting similar safeguards into Photoshop would be much more difficult.
I think you have a point, but consider this hypothetical situation.
Imagine you are living just before the printing press was invented. Surely the printing press will also reduce the friction of distributing unethical material like CP.
What is the appropriate thing to do here to ensure justice? Penalise the authors? Penalise the distributors? Penalise the factory? Penalise the technology itself?
Photocopiers are mandated by law to refuse copying currency. Would you say that's a restriction of your free speech or too burdensome on the technology itself?
If curl is used by hackers in illegal activity, culpability falls on the hackers, not the maintainers of curl.
If I ask the maintainers of curl to hack something and they do it, then they are culpable (and possibly me as well).
Using Photoshop to do something doesn’t make Adobe complicit because Adobe isn’t involved in what you’re using Photoshop for. I suppose they could involve themselves, if you’d prefer that.
You don’t understand the difference between typing “draw a giraffe in a tuxedo in the style of MC Escher” into a text box and getting an image in a few seconds, versus the skill and time necessary to do it in an image manipulation program?
You don’t understand how scale and accessibility matter? That having easy cheap access to something makes it so there is more of it?
You don’t understand that because any talentless hack can generate child and revenge porn on a whim, they will do it instead of having time to cool off and think about their actions?
So, is it that you don’t understand how the two differ (which is what you originally claimed), or that you disagree about who is responsible (which the person you replied to hasn’t specified)?
You asked one specific question, but then responded with something unrelated to the three people (so far) who have replied.
You could drive your car erratically and cause accidents, and it would be your fault. The fact that Honda or whoever made your car is irrelevant. Clearly you as the driver are solely responsible for your negligence in this case.
On the other hand, if you bought a car that had a “Mad Max” self driving mode that drives erratically and causes accidents, yes, you are still responsible as the driver for putting your car into “Mad Max” mode. But the manufacturer of the car is also responsible for negligence in creating this dangerous mode that need not exist.
There is a meaningful distinction between a tool that can be used for illegal purposes and a tool that is created specifically to enable or encourage illegal purposes.
This is because they have entrenched themselves in a comfortable position that they don’t want to give up.
Most won’t admit this to be the actual reason.
Think about it: you are a normal, hands-on, self-taught software developer. You grew up tinkering with Linux and a bit of hardware. You realise there’s good money to be made in a software career. You do it for 20-30 years; mostly the same stuff over and over again. Some Linux, C#, networking. Your life and hobbies revolve around these technologies. And most importantly, you have a comfortable and stable income that entrenches your class and status. Anything that can disrupt this state is obviously not desirable. Never mind that disrupting others’ careers is why you have a career in the first place.
> disrupting others’ careers is why you have a career in the first place.
Not every software project has done this. In fact, I would argue many new businesses exist that didn't exist before software and computing, and people are doing things they weren't doing before. Especially around discovery of information - solving the "I don't know what I don't know" problem also expanded markets and created demand among people who now know.
Whereas the current AI wave seems to be more about efficiency/industrialization/democratization of existing use cases rather than novel things, at least to date. I would be more excited if I saw more "product-oriented" AI use cases beyond destroying jobs. While I'm hoping that the "vibing" of software will mean that SWEs are still needed to productionise it, I'm not confident that AI won't be able to do that soon too, nor any other knowledge profession.
I wouldn't be surprised if, with AI, we end up with mass unemployment in 20 years but still haven't cured cancer, for example.
> Not every software project has done this. In fact, I would argue many new businesses exist that didn't exist before software and computing, and people are doing things they weren't doing before.
That's exactly what I am hoping to see happen with AI.
All I can say to that is "I hope so too", but logic is telling me otherwise at this point. Because the alternative, as evidenced by this thread, isn't all that good. The fear/dread in people since the holidays has been sad to see - it's overwhelmed everything else in tech now.
You are exaggerating. LLMs simply don’t hallucinate all that often, especially ChatGPT.
I really hate comments such as yours because anyone who has used ChatGPT in these contexts would know that it is pretty accurate and safe. People also can generally be trusted to identify good from bad advice. They are smart like that.
We should be encouraging thoughtful ChatGPT use instead of showing fake concern at each opportunity.
Your comment and many others just try to signal pessimism as a virtue and have very little bearing on reality.
All we can do is share anecdotes here, but I have found ChatGPT to be confidently incorrect about important details in nearly every question I ask about a complex topic.
Legal questions, questions about AWS services, products I want to buy, the history of a specific field, so many things.
It gives answers that do a really good job of simulating what a person who knows the topic would say. But details are wrong everywhere, often in ways that completely change the relevant conclusion.
I definitely agree that ChatGPT can be incorrect. I’ve seen that myself. In my experience, though, it’s more often right than wrong.
So when you say “in nearly every question on complex topics”, I’m curious what specific examples you’re seeing.
Would you be open to sharing a concrete example?
Specifically: the question you asked, the part of the answer you know is wrong, and what the correct answer should be.
I have a hypothesis (not a claim) that some of these failures you are seeing might be prompt-sensitive, and I’d be curious to try it as a small experiment if you’re willing.
In one example, AWS has two options for automatic deletion of objects in S3 buckets that are versioned.
"Expire current versions" means that the object will be automatically deleted after some period.
"Permanently delete non-current versions" means that old revisions will be permanently removed after some period.
I asked ChatGPT for advice on configuring a bucket. Within a long list of other instructions, it said "Expire noncurrent versions after X days". In this case, such a setting does not exist, and the very similar "expire current versions" is exactly the wrong behavior. "Permanently delete noncurrent versions" is the option needed.
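For reference, here's a minimal sketch (in Python with boto3; the bucket name and retention period are made up for illustration) of the rule that was actually wanted. The gotcha is that "Expiration" acts on current versions, while "NoncurrentVersionExpiration" is what permanently deletes old revisions:

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket name and retention period, for illustration only.
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-versioned-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "delete-old-revisions",
                    "Filter": {"Prefix": ""},  # apply to every object in the bucket
                    "Status": "Enabled",
                    # Permanently delete non-current (old) versions after 30 days.
                    "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
                    # Adding "Expiration": {"Days": N} here would expire the
                    # *current* version instead -- the behavior ChatGPT's wording
                    # pointed at, and the opposite of what was wanted.
                }
            ]
        },
    )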
The prompt I used has other information in it that I don't want to share.
LLMs give false information often. Your ability to catch incorrect facts is limited by your knowledge and by your ability and willingness to do independent research.
"LLMs are accurate about everything you don't know, but factually incorrect about things you are an expert in" is a common comment for a reason.
As I used LLMs more and more for fact type queries, my realization is that while they give false information sometimes, individual humans also give false information sometimes, even purported subject matter experts. It just turns out that you don’t actually need perfectly true information most of the time to get through life.
They do. To the point where I'm getting absolutely furious at work at the number of times shit's gotten fucked up and when I ask about how it went wrong the response starts with "ChatGPT said"
Do you double-check every fact, or are you relying on being an expert on the topics you ask an LLM about? If you are an expert on a topic, you probably aren't asking an LLM anyhow.
It reminds me of someone who reads a newspaper article about a topic they know and says it's mostly incorrect, but then reads the rest of the paper and accepts those articles as fact.
"Often" is relative but they do give false information. Perhaps of greater concern is their confirmation bias.
That being said, I do agree with your general point. These tools are useful for exploring topics and answers, we just need to stay realistic about the current accuracy and bias (eager to agree).
"Yes. Large language models produce incorrect information at a non-trivial rate, and the rate is highly task-dependent."
But wait, it could be lying, and maybe they actually don't give false information often! But if that were the case, the answer itself would be false information, which would confirm they give false information at a non-trivial rate, because I don't ask it that much stuff.
And how would you know what they base their hiring upon? You would just get a generic automated response.
You would not be privy to their internal processes, and thus far not be able to prove wrongdoing. You would just have to hope for a new Snowden, and that the wrongdoings found would actually be punished this time.
I don't get it, if you're medically unfit for a job, why would you want the job?
For instance, if your job is to be on your feet all day and you can barely stand, then that job is not for you. I have never met employers so flush with candidates that they just randomly choose to exclude certain people.
And if it's insurance, there's a group rate. The only variables are which of your selected plans the employee chooses (why make a plan available if you don't want people to pick it?) and family size. It's illegal to discriminate based on family size, and that does add up to 10k extra on the employer side. But there are downsides to hiring young single people too, so things may balance out.
Usually there are one or two job responsibilities among many that you can do, but not the way everyone else does them. The ADA requires employers to make reasonable accommodations, and some employers don't want to.
So it's less "the job requires you to stand all day" and more "once a week or so they ask you to make a binder of materials, and the hole puncher they want you to use dislocates your hands" (true story). Or it's a desk job, but you can't get from your desk to the bathroom in your wheelchair unless they widen the aisles between desks (hypothetical).
Very large employers don't have a group rate. The insurance company administers the plan on behalf of the company according to pre-agreed rules, then the company covers all costs according to the employee health situation.
I believe existing laws carve out exceptions for medical fitness for certain positions for this very reason. If I may, stepping back for a second: the reason privacy laws exist is to protect people from bad behavior by employers, health insurers, etc.
If we circumvent those privacy laws, through user licenses, or new technology - we are removing the protections of normal citizens. Therefore, the bad behavior which we already decided as a society to ban can now be perpetrated again, with perhaps a fresh new word for it to dodge said old laws.
If I understand your comment, you are essentially wondering why those old laws existed in the first place. I would suggest racism and other systemic issues, and differences in insurance premiums, are more than enough to justify the existence of privacy laws. Take a normal office job as an example, as opposed to a manual-labor-intensive one. There is no reason at all that health conditions should impact that. The idea of not being hired because I have a young child, or a health condition, that would raise the group rate, with the insurer passing the cost to my employer (which would be in its best interest to do), is a terrible thought. And it happened before, and we banned that practice (or did our best to do so).
All this to say, I believe HIPAA helps people, and if ChatGPT is being used to partially or fully facilitate medical decision making, they should be bound under strict laws preventing the release of that data regardless of their existing user agreements.
> I believe existing laws carve out exceptions for medical fitness for certain positions for this very reason.
It’s not just medical but a broad carve out called “bona fide occupational qualifications”. If there’s a good reason for it, hiring antidiscrimination laws allow exceptions.
Do you have any proof they don't? Do you have any proof the "AI System" that they use to filter out candidates doesn't "accidentally" access data? Are you willing to bet that Google, OpenAI, Anthropic, and Meta won't sell access to that information?
Also, in some cases: they absolutely do. Try to get hired in Palantir and see how much they know about your browsing history. Anything related to national security or requiring clearances has you investigated.
The last time I went through the Palantir hiring process, the effort on their end was almost exclusively on technical and cultural fit interviews. My references told me they had not been contacted.
Calibrating your threat model against this attack is unlikely to give you any alpha in 2026. Hiring at tech companies and government is much less deliberate than your mental model supposes.
The current extent of background checks is an API call to Checkr. This is simply to control hiring costs.
As a heuristic, speculated information to build a threat model is unlikely to yield a helpful framework.
>the effort on their end was almost exclusively on technical and cultural fit interviews
How could you possibly know whether they use other undisclosed methods as part of the recruitment? You are assuming Palantir would behave ethically. Palantir, the company that will never win awards based on ethics.
Notwithstanding the fact that tech companies hire dogshit employees all the time and the vast majority of employees of any company of size 1000+ are average at best, Palantir happens to be rating so high on the scale of evil that I'd pop champagne if it got nuked tomorrow.
That’s the point. If any company would do it, it’s Palantir, and they don’t. In fact it’s quite the opposite. Their negative public image makes hiring more difficult causing them to accept what they can get.
Also, I’m not saying they have the best talent, just that they want the best talent.
As if any company that did that is a company I would want to work for.
For instance back when I was interviewing at startups and other companies where I was going to be a strategic hire, I would casually mention how much I enjoyed spending time on my hobbies and with my family on the weekend so companies wouldn’t even extend an offer if they wanted someone “passionate” who would work 60 hours a week and be on call.
But is it really so hard to imagine a world where your individual choice to "opt-out" or work for companies that don't use that info is a massive detriment to your individual life? It doesn't have to be every single company doing it for you to have no _practical_ choice about it (if you want to make market rate for your services.)
Exactly what am I supposed to do? I vote for politicians who talk about universal healthcare, universal child care, public funding of college education and trade schools, etc.
But the country and the people who could most benefit from it are more concerned with whatever fake outrage Fox News comes up with, some anti-woke something or other.
So yeah, if this is the country America wants, I’m over it. I’ve done my bit.
While other people talk about leaving the country, we are seriously doing research and we are going to spend a month and a half outside of the US this year and I’ve already looked at residency requirements in a couple of countries after retirement including the one we are going to in a month and a half.
No, I was being snarky and that was a mistake, and I apologize. For some reason I thought the person above was happy or okay with the current state and could just fck off if/when it affects them negatively.
I basically did what they plan on doing. I fcked off because my country was already too far gone. But I always, always make sure I will never talk positively or be in denial about the state it’s in. America isn’t there (yet). What made me snarky was the hypocrisy I mistakenly perceived.
Probably not directly, that would be too vulnerable. But they could hire a background check company, that could pay a data aggregator to check if you searched for some forbidden words, and then feed the results into a threat model...
Anyone who has worked in hiring for any big company knows how much goes into ensuring hiring processes don't accidentally touch anything that could be construed as illegal discrimination. Employees are trained, policies and procedures are documented, and anyone who even accidentally says or does anything that comes too close to possibly running afoul of hiring laws will find themselves involved with HR.
The idea that these same companies also have a group of people buying private search information or ChatGPT conversations for individual applicants from somewhere (which nobody can link to) and then secretly making hiring decisions based on what they find is silly.
The arguments come with the usual array of conspiracy theory defenses, like the "How can you prove it's not happening" or the claims that it's well documented that it's happening but nobody can link to that documentation.
Yes, I remember a friend who interned there a couple of times showed me that. One of them was “python list comprehension”, and the Google website would split in two and give you some really fun coding challenges. I did a few, and if you get 4(?) right you get a guaranteed interview, I think. I intended to come back and spend a lot of time on an additional one, but I never did. Oops.
I think I only did three or something and I didn't hear back from them. Honestly my view of Google is that they aren't as cool as they think they are. My current position allows me to slack off as much as I want and it's hard to beat that, even if they offer more money (they won't in the current market).
I'm kind of amazed that so many people in this comment section believe their Google searches and ChatGPT conversations are being sold and used.
Under this conspiracy theory they'd have to be available for sale somewhere, right? Yet no journalist has ever picked up the story? Nobody has ever come out and whistleblown that their company was buying Google searches and denying applicants for searching for naughty words?
Google "doesn't sell your data" but RTB leaks that info, and the reason no one is called out for "buying Google searches and denying applicants for searching for naughty words" is because it is trivial to make legal.
It is well documented in many many places, people just don't care.
Google can claim that it doesn’t sell your data, but if you think that the data about your searches isn't being sold, here is just a small selection of real sources.
> and the reason no one is called out for "buying Google searches and denying applicants for searching for naughty words" is because it is trivial to make legal.
Citation needed for a claim of this magnitude.
> It is well documented in many many places, people just don't care.
Yes, please share documentation of companies buying search data and rejecting candidates for it.
Like most conspiracy theories, there are a lot of statements about this happening and being documented but the documentation never arrives.
> Most employers we examined used an ATS capable of integrating with a range of background screening vendors, including those providing social media screens, criminal background checks, credit checks, drug and health screenings, and I-9 and E-Verify. As applicants, however, we had no way of knowing which, if any, background check systems were used to evaluate our applications. Employers provided no meaningful feedback or explanation when an offer of work was not extended. Thus, a job candidate subjected to a background check may have no opportunity to contest the data or conclusions derived therefrom.
If you are going to ignore a decade of research etc... I can't prove it to you.
> The agency found that data brokers routinely sidestep the FCRA by claiming they aren't subject to its requirements – even while selling the very types of sensitive personal and financial information Congress intended the law to protect.
> Data brokers obtain information from a variety of sources, including retailers, websites and apps, newspaper and magazine publishers, and financial service providers, as well as cookies and similar technologies that gather information about consumers’ online activities. Other information is publicly available, such as criminal and civil record information maintained by federal, state, and local courts and governments, and information available on the internet, including information posted by consumers on social media.
> Data brokers analyze and package consumers’ information into reports used by creditors, insurers, landlords, employers, and others to make decisions about consumers.
You keep straying from the question. The question was: who has access to google searches? RTB isn't google searches. Background screening isn't google searches. Social media isn't google searches. Cookies aren't google searches. etc etc
Every link you provided is for tangential things. They're bad, yes, but they're not google searches. Provide a link where some individual says "Yes, I know what so-and-so searched for last wednesday."
Can you answer this question without walls of unrelated text, ad hominem attacks (saying I’m in a cult), or link-bombing with links that don’t answer the question?
It’s a simple question. You keep insisting there’s an answer and trying to ad hominem me for not knowing it, but you consistently cannot show it.
This fails the classic conspiracy theory test: Any company practicing this would have to be large enough to be able to afford to orchestrate a chain of illegal transactions to get the data, develop a process for using it in hiring, and routinely act upon it.
The continued secrecy of the conspiracy would then depend on every person involved in orchestrating this privacy violation and illegal hiring scheme keeping it secret forever. Nobody ever leaking it to the press, no disgruntled employees e-mailing their congress people, no concerned citizens slipping a screenshot to journalists. Both during and after their employment with the company.
To even make this profitable at all, the data would have to be secretly sold to a lot of companies for this use, and also continuously updated to be relevant. Giant databases of your secret ChatGPT queries being sold continuously in volume, with all employees at both the sellers, the buyers, and the users of this information all keeping it perfectly quiet, never leaking anything.
It doesn't though. As an aside, I have been using a competitor to chatgpt health (nori) for a while now, and I have been getting an extreme amount of targeted ads about HRV and other metrics that the app consumes. I have been collecting health metrics through wearables for years, so there has been no change in my own search patterns or beliefs about my health. I just thought ai + health data was cool.
> It’s not legal to be denied jobs based on health. Not to deny insurance
The US has been pretty much a free-for-all for surveillance and abusing all sorts of information, even when illegal to do so. On the rare occasions that they get caught, the penalty is almost always a handslap, and they know it.
The ADA made it illegal to discriminate against job seekers for health conditions and ObamaCare made it illegal to base cover and rates on pre-existing conditions.
What are the chances those bills last long under the current administration and Supreme Court?
And yet, if you want life insurance, you can’t get it with a bunch of pre-existing conditions. And you can be discriminated against as a job seeker as long as they don’t make it obvious.
If it's not solved in the richest country, maybe it's not so easy to solve, unless you want to hand-wave the difficult parts and just describe it as "rich people being greedy".
It's such a dysfunctional situation that the "rich people being greedy" is the most likely explanation. Either that or the U.S. citizenry are uniquely stupid amongst rich countries.
It doesn't have to get to your employer, it just has to get to the enormous industry of grey-market data brokers who will supply the information to a third-party who will supply that information to a third-party who perform recruitment-based analytics which your employer (or their contracted recruitment firm) uses. Employers already use demographic data to bias their decisions all the time. If your issue is "There's no way conversations with ChatGPT would escape the interface in the first place," are you... familiar with Web 2.0?
What is it with you people and privacy? Sure, it is a minor problem, but to be _this_ affected by it? Your hospitals already have your data. Google probably already has your data from whatever you've googled.
What's the worst that can happen with OpenAI having your health data? Vs the best case? You all are no different from AI doomers who claim AI will take over the world.. really nonsensical predictions giving undue weight to the worst possible outcomes.