Born in Stoke, leading in tech: Piers Linney warns AI will ‘supercharge cybercrime’ without action
Piers Linney’s journey to becoming one of Britain’s most recognisable entrepreneurs began right here in Staffordshire, where he was born in Stoke-on-Trent. From local roots to national success, he rose to prominence on Dragons’ Den and has since built a reputation as a pioneering voice in business and innovation.
Now ranked among the UK’s leading artificial intelligence speakers, Piers is a sought-after expert on digital transformation and the future of enterprise.
Drawing on his background as the former co-CEO of cloud firm Outsourcery and as a government advisor on small business growth, he shares powerful insights that position him at the intersection of future of work speakers, entrepreneurship speakers, and diversity & inclusion speakers.
In this exclusive interview, he explores AI’s growing role in cybercrime, workforce transformation, and what businesses must do to stay competitive in a tech-driven world.
Piers Linney: “Artificial intelligence is — you know, it can be used for good, but it can also be used by bad actors. And increasingly, it already is being used by bad actors.
“One of the sort of battlefields we’re going to see is cyber. Now I did a talk quite recently for one of the UK’s — the world’s largest, actually — cyber companies, top five. And I was talking to senior management there, and we were talking about the cyber arms race.
“I was saying, you know, a lot of this malware in the future is going to be designed by AI, and it's going to be very sophisticated. And they kind of looked at each other and looked at me and said, ‘It’s actually worse than that, Piers. The malware will be an AI.’
“So you have some intelligence, essentially something which has an objective and can sit inside your systems, your computers, even your watch. It can sit there with its objective and work out a way to achieve it over time. You may not even be aware of it.
“So we’re going to see this battle. Now, this has always happened with every technology; it’s nothing new. But when you start talking about artificial intelligence and maybe, you know, quantum computing, then you’re getting into something quite serious.
“And the main danger out of AI, I think, is not the future of robotic-looking humanoid androids, you know, crunching over the skulls of defeated human armies. It’s more about the fact that cybercriminals can use this in very, very dangerous and nefarious ways to harm people — especially financially. Not just people — potentially whole countries and governments.”
Piers Linney: “So at our company, Implement AI, we've got a framework that helps companies really understand what that continuum looks like. And it is a continuum. We call it AI-assisted. You know, for 250 years, since the industrial revolution, humans have had various tools, but it was basically human first. Technology did a lot of the work of displacing our physical labour, but our cognitive labour especially was very much human first.
“And now we've entered a new age. AI has been around for 50 years, and machine learning as serious research for maybe 10, but since the end of last year, the beginning of this year, these new large language models and diffusion models have essentially changed the way we can communicate with technology.
“So what you're seeing now is a period where we, and by we I mean us personally and our organisations, are going to become AI-assisted. In that period it's software first: that kind of cognitive labour, knowledge work, the information economy. You're going to find that your organisations and your employees can be very much superpowered.
“And robotics is slightly different. Eventually, artificial intelligence is going to go mobile. What we often get confused about is the timeframe, and it's very, very hard to see more than maybe two years out now; it's like a fog. But what you can be assured of is that there is huge change coming.
“This is not moving from mainframes to networked PCs. This is not moving to the cloud, where you can take 10 years on that journey if you want to. This is happening very, very quickly. And we always say that your business or your job won’t be ruined or taken or outcompeted by artificial intelligence; it’ll be your competition using it.
“So what we’re trying to do is say: look, you've got maybe two to five years to really get on board and embed this. Because it's like a ship leaving the harbour. You can jump the gap and make it now, but as that big ship begins to move away at exponential speed (and we're going to see exponential change over time), that leap is going to be harder to make.
“To the point where it’s very, very difficult to make it. And once you are left on the quayside in this new world, you're staying there.
“Now, robotics is slightly different. You're going to see artificial intelligence help humans to design better robotics. They’re quite expensive now, it can be like a million pounds for a decent robot, but eventually there’ll be consumer robots that become cheaper through economies of scale. That’s further out, maybe five to fifteen years, but that's also coming.
“And then eventually we get to a world where it's AI-first, where everything is driven by AI. You will not trust a human. In fact, I advised quite a large automotive manufacturer, one of the largest in the world actually, quite recently.
“And it probably will be illegal for humans to drive a car in a built-up area. You won’t trust your human lawyer or accountant. Initially you might want them to check something for certain reasons, but it’s going to be AI-first. Humans will have far more faith in artificial intelligence in the future than we all do in our fallible selves.”
Piers Linney: “Yes. AI policy is something that every company needs to embed. At my business, Implement AI, it's something we actually do as well.
“And it’s kind of an overall framework. And you can embed it into your business. It depends what industry you’re in — it can be quite different in regulated industries, for example, in healthcare, in terms of data.
“So you have to have a sort of overall framework. The key to it, though, is that it pervades everything; it’s not a standalone document. What you're going to find is that aspects of it, parts of it, will need to be embedded into all of your policies, everything from, you know, HR to maybe even holiday policy, whatever you've got.
“So a standalone policy probably isn’t the way to go. It’s something which pervades the whole business. Because essentially, artificial intelligence will be used whether you know it or not; there's research saying that 30% of employees are already using AI in their day jobs.
“So you want to be very careful that, you know, your company data, your personal data, your customer data is being treated correctly, and that people are using this technology in ways which actually add value to your business.
“And you're also going to want to make sure, and this is where training and your training policies are really important, that people actually know how to use it.
“It’s one thing to have access to artificial intelligence. It’s another thing to actually know how to use it. People need to know how to use it, and to use it in a responsible way that doesn’t put your business, your organisation’s data security or your customers’ data security at risk.
“Then you can actually add a huge amount of value to your business. Not allowing people to use AI, because they're always going to (it’s called Shadow AI, like Shadow IT), and not having the policy frameworks that set out the four corners of the box within which they can use it, is a dangerous plan, a dangerous way forward.
“You need to make sure people know your organisation's approach and policies for using this extremely powerful technology.”
This exclusive interview with Piers Linney was conducted by Mark Matthews.