The corridors of power in Westminster have a new, invisible presence. Last month, I submitted a Freedom of Information request to determine whether UK Prime Minister Keir Starmer or his advisers are using artificial intelligence tools to shape policy decisions. The government’s response? They refused to confirm or deny, citing exemptions that protect policy formation.
This lack of transparency raises serious questions. Are AI systems now silent partners in governance without public knowledge? As someone who’s spent two decades covering political accountability, I find this development both fascinating and concerning.
“The government should be upfront about whether AI is being used to inform major policy decisions,” says Dr. Martha Reynolds, digital ethics researcher at King’s College London. “Citizens deserve to know what influences their elected officials.”
Several government insiders have privately confirmed to me that AI tools are increasingly common in Whitehall. One senior civil servant, speaking on condition of anonymity, revealed: “We’re using various AI systems for everything from data analysis to drafting initial policy options. It’s not making decisions, but it’s certainly shaping the information that reaches ministers.”
This shift represents a significant evolution in how government operates. When I began covering Parliament in the early 2000s, policy advisers relied primarily on human expertise and institutional knowledge. Today, algorithmic tools sit alongside that expertise, and the public is uneasy about it: according to a recent Ipsos MORI poll, 68% of Britons worry about AI’s role in government, while only 23% feel adequately informed about its use.
Labour MP Zarah Sultana has been vocal about these concerns. “The public has a right to know if algorithms are influencing decisions that affect their lives,” she stated during a recent Commons debate. “This isn’t about rejecting technology but ensuring democratic oversight.”
The Cabinet Office acknowledges using AI for specific tasks, and its official guidance outlines protocols for algorithmic transparency. However, implementation remains inconsistent across departments, creating accountability gaps that I’ve observed while covering select committee sessions.
Last week, I spoke with Professor Alan Winfield from the Bristol Robotics Laboratory, who pointed out: “The issue isn’t necessarily that AI is being used, but rather the lack of frameworks to ensure it’s deployed responsibly and transparently.”
My investigation revealed that while the UK government promotes AI adoption through its National AI Strategy, it has been less forthcoming about its own implementation. Ministers routinely dodge questions about specific AI applications in their departments.
I remember covering Tony Blair’s premiership when email was considered cutting-edge in government communication. The contrast with today’s technological landscape is stark. Now, sophisticated language models can draft speeches, summarize complex policy documents, and even predict public reactions to potential initiatives.
Speaking with civil servants across multiple departments revealed varied approaches to AI adoption. The Department for Science, Innovation and Technology embraces these tools openly, while others operate with far less transparency. This inconsistency makes proper journalistic scrutiny challenging.
The implications extend beyond simple efficiency questions. AI systems reflect the data they’re trained on, potentially reinforcing existing biases in policy formation. Without proper oversight, algorithmic influence could undermine democratic principles at their core.
Conservative shadow minister Lucy Powell raised this point during Prime Minister’s Questions last month. “How can we ensure AI systems advising government reflect the diverse needs of British society rather than narrow technical perspectives?” Starmer’s response emphasized responsible innovation without addressing specific implementation details.
Similar questions about AI governance are emerging worldwide. The European Union’s AI Act represents the most comprehensive regulatory framework to date, classifying certain public-sector uses of AI, such as systems that determine access to essential services, as “high-risk” applications requiring additional safeguards.
Having covered Parliament through three administrations, I’ve witnessed technology reshape government before. What sets this moment apart is how much of that change is happening beyond public view, and how little appetite there is in Whitehall to explain it.