Over the past two years, generative artificial intelligence (AI) has captivated public attention. This year signals the beginning of a new phase: the rise of AI agents.
AI agents are autonomous systems that can make decisions and take actions on our behalf without direct human input. The vision is that these agents will redefine work and daily life by handling complex tasks for us. They could negotiate contracts, manage our finances, or book our travel.
Salesforce chief executive Marc Benioff has said he aims to deploy a billion AI agents within a year. Meanwhile, Meta chief executive Mark Zuckerberg predicts AI agents will soon outnumber the global human population.
As companies race to deploy AI agents, questions about their societal impact, ethical boundaries and long-term consequences grow more urgent. We stand on the edge of a technological frontier with the power to redefine the fabric of our lives.
How will these systems transform our work and our decision-making? And what safeguards do we need to ensure they serve humanity's best interests?
AI agents take control away
Current generative AI systems react to user input, such as prompts. By contrast, AI agents act autonomously within broad parameters. They operate with unprecedented levels of freedom - they can negotiate, make judgement calls, and orchestrate complex interactions with other systems. This goes far beyond simple command-response exchanges like those you might have with ChatGPT.
For instance, imagine using a personal "AI financial advisor" agent to buy life insurance. The agent would analyse your financial situation, health data and family needs while simultaneously negotiating with multiple insurance companies' AI agents.
It would also need to coordinate with several other AI systems: the AI managing your medical records for health information, and your bank's AI systems for making payments.
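To make the shape of that orchestration concrete, here is a minimal, purely illustrative sketch in Python. Every name in it (QuoteRequest, fetch_health_summary and so on) is a hypothetical stand-in, not any real product's API.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the systems the advisor agent would talk to.
# None of these correspond to a real product or API.

@dataclass
class QuoteRequest:
    applicant_age: int
    coverage_amount: float
    health_summary: str   # supplied by the medical-records AI

@dataclass
class Quote:
    insurer: str
    monthly_premium: float

def fetch_health_summary(records_system) -> str:
    """Ask the (hypothetical) medical-records AI for a health summary."""
    return records_system.summarise()

def negotiate(insurer_agents, request: QuoteRequest) -> Quote:
    """Collect quotes from each insurer's agent and pick the cheapest."""
    quotes = [agent.quote(request) for agent in insurer_agents]
    return min(quotes, key=lambda q: q.monthly_premium)

def buy_life_insurance(records_system, bank_agent, insurer_agents):
    """Orchestrate the whole task: gather data, negotiate, then pay."""
    request = QuoteRequest(
        applicant_age=42,
        coverage_amount=500_000,
        health_summary=fetch_health_summary(records_system),
    )
    best = negotiate(insurer_agents, request)
    bank_agent.pay(payee=best.insurer, amount=best.monthly_premium)
    return best
```

Even this toy version makes the risks visible: sensitive health data flows to every insurer's agent, and the negotiation logic is largely invisible to the user.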
The use of such an agent promises to reduce manual effort for you, but it also introduces significant risks.
The AI might be outmanoeuvred by more advanced insurance company AI agents during negotiations, leading to higher premiums. Privacy concerns arise as your sensitive medical and financial information flows between multiple systems.
The complexity of these interactions can also result in opaque decisions. It might be difficult to trace how various AI agents influence the final insurance policy recommendation. And if errors occur, it could be hard to know which part of the system to hold accountable.
Perhaps most crucially, this system risks diminishing human agency. When AI interactions grow too complex to comprehend or control, individuals may struggle to intervene in or even fully understand their insurance arrangements.
A tangle of ethical and practical challenges
The insurance agent scenario above is not yet fully realised. But sophisticated AI agents are rapidly coming onto the market.
Salesforce and Microsoft have already incorporated AI agents into some of their corporate products, such as Copilot Actions. Google has been gearing up for the release of personal AI agents since announcing its latest AI model, Gemini 2.0. OpenAI is also expected to release a personal AI agent in 2025.
The prospect of billions of AI agents operating simultaneously raises profound ethical and practical challenges.
These agents will be created by competing companies with different technical architectures, ethical frameworks and business incentives. Some will prioritise user privacy, others speed and efficiency.
They will interact across national borders where regulations governing AI autonomy, data privacy and consumer protection vary dramatically.
This could create a fragmented landscape where AI agents operate under conflicting rules and standards, potentially leading to systemic risks.
What happens when AI agents optimised for different objectives - say, profit maximisation versus environmental sustainability - clash in automated negotiations? Or when agents trained on Western ethical frameworks make decisions that affect users in cultural contexts for which they were not designed?
The emergence of this complex, interconnected ecosystem of AI agents demands new approaches to governance, accountability, and the preservation of human agency in an increasingly automated world.
How do we shape a future with AI agents in it?
AI agents promise to be helpful and to save us time. But to navigate the challenges outlined above, we will need to coordinate action across multiple fronts.
International bodies and national governments must develop harmonised regulatory frameworks that address the cross-border nature of AI agent interactions.
These frameworks should establish clear standards for transparency and accountability, particularly in scenarios where multiple agents interact in ways that affect human interests.
Technology companies developing AI agents need to prioritise safety and ethical considerations from the earliest stages of development. This means building in robust safeguards that prevent abuse - such as manipulating users or making discriminatory decisions.
They must ensure agents remain aligned with human values. All decisions and actions made by an AI agent should be logged in an "audit trail" that's easy to access and follow.
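As a rough illustration of what such an audit trail could capture, the sketch below records each agent decision as a structured log entry. The field names are assumptions made for this example, not an established standard.

```python
import json
from datetime import datetime, timezone

def log_decision(audit_file, agent_id: str, action: str,
                 inputs: dict, rationale: str):
    """Append one agent decision to an append-only audit trail (JSON lines)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,       # which agent acted
        "action": action,           # what it did
        "inputs": inputs,           # the data it acted on
        "rationale": rationale,     # the agent's stated reason
    }
    with open(audit_file, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: the insurance advisor logs its choice of policy.
log_decision(
    "advisor_audit.jsonl",
    agent_id="finance-advisor-01",
    action="select_policy",
    inputs={"quotes_considered": 3},
    rationale="Lowest premium meeting requested coverage",
)
```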
Importantly, companies must develop standardised protocols for agent-to-agent communication. Conflict resolution between AI agents should happen in a way that protects the interests of users.
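What a standardised agent-to-agent message should contain is still an open question. The schema below is one hypothetical possibility, included only to show the kinds of fields such a protocol might standardise: sender, intent, user-set constraints, and an escalation path back to a human.

```python
from dataclasses import dataclass, field

@dataclass
class AgentMessage:
    """One hypothetical message format for agent-to-agent negotiation."""
    sender: str                      # e.g. "finance-advisor-01"
    recipient: str                   # e.g. "insurer-agent-xyz"
    intent: str                      # "request_quote", "counter_offer", ...
    payload: dict = field(default_factory=dict)
    constraints: dict = field(default_factory=dict)  # user-set limits
    escalate_to_human: bool = False  # pause negotiation for user review

# A counter-offer capped by the user's budget, flagged for human review
# before it can be accepted.
msg = AgentMessage(
    sender="finance-advisor-01",
    recipient="insurer-agent-xyz",
    intent="counter_offer",
    payload={"monthly_premium": 55.0},
    constraints={"max_monthly_premium": 60.0},
    escalate_to_human=True,
)
```

A standard escalation flag of this kind is one way conflict resolution between agents could be tied back to the user's interests, rather than settled silently between machines.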
Any organisation that deploys AI agents should also have comprehensive oversight of them. Humans should still be involved in any crucial decisions, with a clear process in place to do so. The organisation should also systematically assess the outcomes to ensure agents truly serve their intended purpose.
As consumers, we all have a crucial role to play, too. Before entrusting tasks to AI agents, you should demand clear explanations of how these systems operate, what data they share, and how decisions are made.
This includes understanding the limits of agent autonomy. You should have the ability to override agents' decisions when necessary.
We shouldn't surrender human agency as we transition to a world of AI agents. But it's a powerful technology, and now is the time to actively shape what that world will look like.