UK businesses adopting AI without understanding the risks, expert warns
A growing number of UK businesses are deploying artificial intelligence without fully understanding the risks involved, a leading technology expert has warned.
The proportion of UK companies using at least one AI tool has doubled in the past two years, from 9% in 2023 to 18% in early 2025, according to the Office for National Statistics. Among large employers, nearly one in three has adopted AI technologies, often without the internal expertise to understand how those systems work.
At the same time, a chronic shortage of digital skills is estimated to be costing the UK economy £63 billion a year, according to government figures.
Spencer Pickett, Chief Technology Officer at Software Development UK, said the rapid uptake of AI has created a “gold-rush mentality” that leaves little room for caution or oversight.
“Imagine hiring a PhD expert who refuses to explain their sums,” he said. “That’s today’s AI. Until we can follow its thinking, it should never have the final say on someone’s loan, diagnosis or job.”
AI’s ‘Black Box’ Problem
Unlike traditional software, which follows rules written explicitly by programmers, modern AI models learn their behaviour from vast amounts of data. That lets them spot patterns and make predictions, but it also makes their internal reasoning difficult, if not impossible, to trace.
That opacity poses a particular risk in regulated sectors where decisions must be justified and audited.
“There’s a gold-rush mentality: lots of shiny pilots, very little safety tape,” said Pickett. “In industries like banking, insurance and healthcare, you need to justify every decision. But AI often can’t explain how it reaches its conclusions.”
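To see why that opacity is hard to escape, consider a minimal sketch in Python. It trains a model on invented loan-style data and then probes it from the outside with scikit-learn’s permutation_importance tool, which measures how much the model’s accuracy drops when each input is shuffled. The feature names and data here are hypothetical; the point is that any explanation has to be reconstructed after the fact, because the model itself exposes no human-readable reasoning.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Invented stand-in data: three numeric inputs to a loan-style decision
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# The trained model is a "sealed box": thousands of learned parameters,
# none of which maps to a rule a loan officer could read.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Post-hoc probe: shuffle each input in turn and see how much accuracy falls
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "debt_ratio", "postcode"], result.importances_mean):
    print(f"{name}: {score:.3f}")

Even this only shows which inputs mattered on average. It does not recover the chain of reasoning behind any single decision, which is precisely what regulators in banking, insurance and healthcare ask firms to produce.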
According to Pickett and other experts, there are three key risks companies face when they adopt AI without safeguards:
Invisible errors: If an AI system makes a mistake, the lack of transparency means it may go undetected and be difficult to correct.
Regulatory scrutiny: UK watchdogs now expect clear, auditable explanations for automated decisions, particularly in credit scoring, medical assessments and safety-related systems.
Loss of trust: Customers and staff are less likely to accept decisions made by AI if they appear arbitrary or unfair, which can undermine confidence in the technology altogether.
“AI is a power tool,” Pickett said. “In skilled hands, it’s brilliant; in unskilled hands, it’s an accident waiting to happen.”
A Call for Responsible AI Use
Pickett is part of a team working on ways to make AI systems more transparent and accountable. Their research focuses on tools that can explain how models arrive at decisions, flag high-risk outcomes, and support human oversight in sensitive cases.
“We help firms set the right boundaries, introduce human sign-off where it matters and, above all, make the model show its working,” he said. “Most risks shrink fast once you can actually see them.”
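As a rough sketch of what human sign-off at a boundary can look like (the threshold and names below are hypothetical illustrations, not a description of Pickett’s tools), an automated decision can be gated on the model’s own confidence, so borderline cases are referred to a person rather than decided automatically:

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # hypothetical policy value, set by the business

@dataclass
class Decision:
    outcome: str        # "approve", "decline" or "refer"
    confidence: float
    needs_human: bool

def decide(probability_approve: float) -> Decision:
    # Gate the automated decision: anything the model is unsure about
    # is routed to a human reviewer instead of being auto-decided.
    confidence = max(probability_approve, 1 - probability_approve)
    if confidence < CONFIDENCE_THRESHOLD:
        return Decision("refer", confidence, needs_human=True)
    outcome = "approve" if probability_approve >= 0.5 else "decline"
    return Decision(outcome, confidence, needs_human=False)

print(decide(0.62))  # a borderline loan score goes to a person, not the machine

The design choice is that the system fails safe: the cost of a human review is paid whenever the model cannot meet the agreed confidence bar, one concrete way of setting the boundaries Pickett describes.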
He believes that boards must first recognise the limitations of current AI systems before placing them at the heart of critical decision-making.
“Until boards accept that today’s AI is a sealed box, they won’t ask the questions that keep customers safe and regulators calm,” Pickett said. “Real innovation starts with opening the lid.”
