Not reasonable unless the proper security/privacy guarantees are in place. Otherwise this risks a massive security vulnerability; what kind of information is being shared with ChatGPT?
Based on the disclosure here, he's not sharing any information. He's doing research, asking questions, a perfectly reasonable use of the tool. Not unlike using Google.
I'm sure if he wasn't already aware, he's been briefed by his SpAd or civil servant (possibly via the security services) not to disclose any sensitive information.
It's a lot like using Google, but it's different in at least one important way: it doesn't reveal where the information comes from. How do you know if ChatGPT is repeating The New York Times, reddit, or 4chan?
The issue I see is that LLMs are not an unbiased source of information. Functionally, he's just asking big tech companies what to do.
It's one thing when a developer asks an LLM for programming advice, and the LLM has been instructed to prioritize answers suggesting React. In my opinion, it's quite another when a politician asks an LLM for policy advice, and the provider has the ability to make similar prioritizations.
Yeah, I frequently discuss difficult system design or other engineering problems with AI. I don't copy-paste it as my own thoughts, but use it as a way to brainstorm solutions. It's no different from discussing it with a colleague who has a very different brain from mine. The tech secretary's use of AI seems perfectly appropriate.
I've found it very useful with organisational problems too. I had a complex issue to work through recently, and tried working with ChatGPT, Claude and Grok 3 to optimise letters to some of the people involved, trying to get things resolved in the way I felt was fairest for everyone (anonymised, of course, with memory off in ChatGPT). One neat trick was to export the letter from one chat, start a fresh one, say that I am <role of recipient>, provide the letter, and ask what it thinks -- basically red-teaming it.
The process of doing that clarified my thoughts and arguments so much that I never wound up having to send them -- I'd already made the right points on Slack and in meetings, and a compromise was achieved.
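If you wanted to script that fresh-chat red-teaming trick, here's a minimal sketch assuming the openai Python client; the model name, file name, and system role are hypothetical placeholders, not a recommendation:

    # Red-team a draft letter in a fresh session with no memory of how it was written.
    # Assumes the openai package is installed, OPENAI_API_KEY is set, and
    # draft_letter.txt holds the anonymised draft exported from the first chat.
    from openai import OpenAI

    client = OpenAI()
    letter = open("draft_letter.txt").read()

    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "You are the recipient of the letter below."},
            {"role": "user",
             "content": letter + "\n\nWhat is your honest reaction? Which arguments are weakest?"},
        ],
    )
    print(response.choices[0].message.content)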
Like engineers who claim they aren't losing skills because they use programming assistants, I think there is an over-confidence effect at play here, where the user of an AI assistant doesn't want to admit to themselves that they are also being shaped by it.
You are still being subtly influenced by something; that's the whole issue at play here. OpenAI has incredible power to influence the subconscious of so many people, and I think that's the thing to question here. I am not entirely against it, but we need to be honest with ourselves about how malleable our minds are and how influenced we are by what we consume.
Do you lose skills when discussing things with colleagues? What if you find the answer in a book? Sure, if you're just having someone else do your work, or using an equation without understanding it, you'll lose skills, but nothing suggests that AI is inherently worse than either of those. The government official asking for ideas about how they might improve AI adoption in the UK is a perfect example of good usage. They are actively engaging with the AI, not having it write their policy.
Concerns about bias are bikeshedding. I am much more concerned about the systemic bias of race than I am about OpenAI trying to psyop people (and I'm not concerned about race).
The only really reliable use I discovered for AI so far is rubber-ducking.
As you dig down it's often so hilariously useless it sparks my decaying intelligence to offer up a gem. Less fruitful than talking to a Junior Engineer but more useful than a physical rubber duck.
It sounds bad, but the alternative is policy makers making decisions out of their arses, so maybe this really is a tool that can improve public management.
Problem is that the owner of the AI service has power over the model and thus can influence government decisions.
Maybe government use of AI should be restricted to open models only.
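In practice that could mean self-hosting an open-weights model so prompts never leave government infrastructure. A minimal sketch using the Hugging Face transformers pipeline; the model name is just an illustrative example of an open-weights release:

    # Hypothetical: a department self-hosts an open-weights model locally,
    # so prompts are never sent to a third-party provider.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="mistralai/Mistral-7B-Instruct-v0.2",  # example open-weights model
    )
    out = generator(
        "List the main trade-offs in adopting AI across government departments.",
        max_new_tokens=200,
    )
    print(out[0]["generated_text"])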
Have you compared OpenAI to the current leader of the free world?
>The Globalist Wall Street Journal has no idea what they are doing or saying. They are owned by the polluted thinking of the European Union, which was formed for the primary purpose of “screwing” the United States of America... (Truth Social today)
Really? How have policies been made for the past hundred years without it then?
It's completely bonkers that people keep saying stuff like this, virtually implying that we didn't have a functional society before LLMs.
LLMs are tools that generate word salads that sound compelling. They are not research aids and they can't help you understand things better than the plethora of tools we already developed over the centuries can.
So no, the alternative is policy makers doing what policy makers have always done: research, polling, reading, talking, and more research.
Now you'll likely come back and say "but X policy was bad and it was because it was poorly researched". Absolutely! Yes, bad policies exist. Poorly researched policies exist. Poorly implemented policies exist.
Good policies researched well also exist. And the bad ones aren't going away because of the magical word generator.
Seriously. Stop using LLMs to "help" you do stuff you already know how to do.
> Really? How have policies been made for the past hundred years without it then?
In the United States, lobbyists have approached lawmakers with pre-written legislation and pitched it to them.
I would rather have lawmakers asking LLMs to explain concepts to them than the absolute mess we had back when a House committee was trying to evaluate SOPA with absolutely zero domain knowledge:
The current administration is using LLMs which are telling them to instruct the NSA not to talk about privilege escalation.
I addressed that: bad policies and decisions don't go away just because of LLMs. They will always exist, it's just that now they'll exist at the same time as people gaslighting themselves into believing that LLMs are helping to eliminate them.
Ignoring the fact that this comment is literally nihilistic and simultaneously anti-natalist (read: you are simply advocating for the elimination of the human race)... yes, I addressed that.
> Now you'll likely come back and say "but X policy was bad and it was because it was poorly researched". Absolutely! Yes, bad policies exist. Poorly researched policies exist. Poorly implemented policies exist.
> Good policies researched well also exist. And the bad ones aren't going away because of the magical word generator.
You will notice that the US is gearing up for additional warfare with the help of AI.
I wasn't intending to advocate for the elimination of the human race so much as to say that some of those policies were not that great and could maybe benefit from assistance. In my experience so far, LLMs have been quite common-sensical and not prone to things like "group of humans A is superior and deserves to conquer group B", which has been a frequent issue in human policies.
When the politicians consult experts they usually consult experts who give them the answers they want. They get rid of experts (e.g. Professor Nutt) who give them advice they do not like.
Sometimes they go to the other extreme and think experts are gods, and do not question the advice, check whether other experts would disagree with it etc.
I'm curious, did he not clean out old chats that were no longer needed? If UK FOI applies to this now, does that introduce recordkeeping/retention requirements?
It does, and from what I've seen the UK public service are pretty diligent in making sure the rules are enforced... but I don't expect ministers only use recorded accounts and machines.
I think there's something to be said for an LLM for government policy. It should be trained for the country in question and open sourced. Then, as well as the politicians asking what we should do about the NHS or what have you, the public could do so as well and see where the ideas were coming from.
Given the current state of LLMs it would only be giving advice, but even so it would probably give better advice than some of the idiots that get elected. Also, maybe more importantly, it would be a known quantity, whereas humans can be deceptive.
Here is the nginx proxy rule so that requests from the UK government go to the right place:
    # nginx "if" can't match CIDR ranges and has no "else"; the idiomatic fix
    # is a geo block (at http level), which maps the client address to a value.
    geo $backend_pool {
        default    grok_for_poor_and_stupid_uk_people;
        25.0.0.0/8 grok_with_special_information_for_really_smart_uk_tech_ministers;
    }
I don't have actual information that Grok names its backend servers this way, btw. Nor do I know whether Peter Kyle is using Tor.
This was generated with help from Claude, so please verify your own sources.
Very interesting point about whether AI chats are personal data, like email or a WhatsApp message, or whether they should be treated more like a web search. Certainly the AI isn't a person...
I think all of these comments are missing the point. We are beginning to see the outsourcing of critical thinking to a few technology companies. We've stopped valuing leaders who understand issues and can make sane decisions. Instead we pick leaders who present good sound bites or appear strong. They in turn do not understand the issues, and can get quick and easy answers from AI that are, for the most part, pretty damn good.
The most insidious part of this is that it gives us the feeling that we are still in control.
When in reality, a few extremely powerful and extremely wealthy individuals are in a position to dramatically shape and shift policy.
15 years ago Google was still pretty damn good. Today most people agree that it’s departed from its original goals and in its efforts to make money actively censors and shapes the results it serves.
The AI systems people use to make decisions will make this shift once they have a lock on the market.
It's wild to see the creation of a technological equivalent of the medieval Catholic Church: an entity that represents itself as benevolent to the masses but in reality exploits them and maintains a stranglehold over the political ruling class.
It doesn't necessarily sound like that's the case, though. It's difficult to get a good sense of the scale and extent of Kyle's usage of ChatGPT, but the examples cited in the article sound to me like he's using it for definitions (i.e. to replace Google), and messing around to try it out (with the podcast suggestions).
It would be worrying if the report was that Kyle based some significant part of his decision-making on AI chatbots, but no-one here seems to be claiming that's the case. As it is, given the article only writes about two significant examples and a bunch of definitions, it sounds like he's not been using it much at all.
Sure, but there is a slippery-slope effect that could very well happen here: the more comfortable we get with the product, and the better it gets, the easier it's going to be to delegate to it.
Uh. Google results are so irrelevant to me now that I've turned to using Perplexity instead, and just clicking on the provided links without paying much attention to its output.
Those seem like non-idiotic, reasonable uses of ChatGPT. Reassuringly disappointing. My bigger concern is that a company like OpenAI has the resources to identify requests by bigwigs of interest, and a unique window into their head. With US friendship on dubious terms, I'd have thought GCHQ would have had a few stern talks with the government.
I agree it is a concern. However, a lot of UK government IT decisions are of similar concern. Using Zoom for cabinet meetings (where extremely secret stuff gets discussed) during lockdown was even worse.
The courts use MS Teams for remote hearings - not as bad, but I do not like it either.
The FOI request would have been sent to the government, not OpenAI, so I do not think OpenAI would have needed to identify the individual.
I meant it like "OpenAI has many sessions of rich text to mine", and my hunch is that identifying individuals of interest is possible in at least some cases. Knowing exactly what's on their mind and what problems they are trying to solve is priceless information. I'm not even implying that OpenAI would tailor the bot output to influence them... but it's there for the taking.
I don't personally find ChatGPT as useful as some people here apparently do. But, assuming outputs are not being put into effect unthinkingly, this seems well in the domain of ChatGPT being used for brainstorming as many people on this board seem to endorse.
Certainly, information leakage from search activities of any kind by high government officials is a cause for concern (as are travel patterns etc.).
The resources required are pretty trivial. Users hand over their name and email when they create an account. Likewise, you get the IP address of every request.
You could identify which world leaders are using ChatGPT pretty easily.
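A toy sketch in Python of how little is needed; the CIDR range (25.0.0.0/8, long allocated to the UK Ministry of Defence) and the idea of scanning request logs are illustrative assumptions, not anything known about OpenAI's systems:

    # Hypothetical: flag requests originating from a known government IP range.
    import ipaddress

    GOV_RANGES = [ipaddress.ip_network("25.0.0.0/8")]  # placeholder range list

    def looks_governmental(request_ip: str) -> bool:
        addr = ipaddress.ip_address(request_ip)
        return any(addr in net for net in GOV_RANGES)

    print(looks_governmental("25.12.34.56"))   # True
    print(looks_governmental("203.0.113.7"))   # False

Cross-reference a flagged request with the account's name and email and you're most of the way there.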
I would bet a fair amount that the Cabinet Office / GCHQ provide anonymisers for high-ranking public servants and orders not to use pm@gov.uk as an ID. ChatGPT takes rich text, though, which is a lot easier to de-anonymise than random search keywords.
If you've a decision to make, then asking ChatGPT for 20 possibilities is a great way to get ideas. Using them blindly would be a problem, but there's no suggestion that is the case.
I'm very surprised this is subject to FOIA. I'll bet he learns to disable memory and uses incognito chats now though.