This week, most of the tech world’s glitterati gathered in Lisbon for Web Summit, a sprawling convention showcasing everything from dancing robots to the influencer economy.
In the pavilions – warehouse-sized rooms chock full of stages, booths and people networking – the phrase “agentic AI” was everywhere.
There were AI agents that hung around your neck as jewellery, software to build agents into your workflows and more than 20 panel discussions on the subject.
Agentic AI is essentially artificial intelligence that can do specific tasks on its own, like book your flights, order an Uber or help a customer.
It’s the industry’s current buzzword and has even crept into the real world, with the Daily Mail listing “agentic” as an ‘in’ word for Gen Z last week.
But AI agents aren’t new. In fact, Babak Hodjat, now chief AI officer at Cognizant, invented the technology behind one of the most well-known AI agents, Siri, in the 1990s.
“Back then, the fact that Siri itself was multi-agentic was a detail that we didn’t even talk about – but it was,” he told Sky News from Lisbon.
“Historically, the first person that talked about something like an agent was Alan Turing.”
New or not, AI agents are thought to come with far more risks than general-purpose AI, because they interact with and modify real-world conditions.
The risks that come with AI, like bias in its data or unforeseen circumstances in how it interacts with humans, are magnified by agentic AI because it interacts with the world on its own.
“Agentic AI introduces new risks and challenges,” wrote the IBM Responsible Technology Board in their 2025 report on the technology.
“For example, one new emerging risk involves data bias: an AI agent might modify a dataset or database in a way that introduces bias.
“Here, the AI agent takes an action that potentially impacts the world and could be irreversible if the introduced bias scales undetected.”
But for Mr Hodjat, it’s not AI agents we need to worry about.
“People are over-trusting [AI] and taking their responses on face value without digging in and making sure that it’s not just some hallucination that’s coming up.
“It is incumbent upon all of us to learn what the limits are, the art of the possible, where we can trust these systems and where we cannot, and educate not just ourselves, but also our children.”
His warning will feel familiar, particularly in Europe, where there’s an increased wariness around AI compared to the US.
But have we become too cautious when it comes to AI – at the risk of a far more existential threat in the future?
Jarek Kutylowski, chief executive of German AI language giant DeepL, certainly thinks so.
This year, the EU AI Act came into force, setting strict rules about how companies can and can’t use AI.
In the UK, companies are governed by existing legislation like GDPR, and there is uncertainty about how strict our rules will be in the future.
When asked if we needed to slow down AI innovation in order to put stricter rules in place, Mr Kutylowski said it was a question worth grappling with… but in Europe, we are taking it too far.
“Looking at the apparent risks is easy, looking at the risks like what are we going to miss out on if we don’t have the technology, if we are not successful enough in adopting that technology, that is probably the bigger risk,” said Mr Kutylowski.
“I see definitely a much larger risk in Europe being left behind in the AI race.”
“You won’t see it until we start falling behind and until our economies cannot capitalise on those productivity gains that maybe other parts of the world will see.
“I don’t believe personally that technological progress can be stopped in any way, so it’s more of a question of ‘how do we pragmatically embrace what’s coming forward?’”
