UNSC Debates AI in Conflicts, Urges Unified Framework

Russian Federation Warns Against Imposing West-Led Rules, Norms

Rapidly evolving artificial intelligence (AI) is outpacing human ability to govern it, even threatening human control over weapons systems, the United Nations chief warned during a Security Council briefing today, urging Member States to swiftly establish "international guard-rails" to ensure a safe, secure and inclusive AI future for all.

"Artificial intelligence is not just reshaping our world - it is revolutionizing it," underscored Secretary-General António Guterres. AI tools are identifying food insecurity and predicting displacements caused by extreme events and climate change, detecting and clearing landmines, and soon will be able to spot patterns of unrest before violence erupts.

However, recent conflicts have become testing grounds for AI military applications, he pointed out, noting that algorithms, from intelligence-based assessments to target selection, have reportedly been used in making life-and-death decisions. "Artificial intelligence without human oversight would leave the world blind - and perhaps nowhere more perilously and recklessly than in global peace and security," he warned, adding that "deep fakes" could trigger diplomatic crises, incite unrest and undermine the very foundations of societies. The integration of AI with nuclear weapons must be avoided at all costs, he emphasized.

Amid the pressing need for "unprecedented global cooperation" to reduce the fragmentation of AI governance, his High-Level Advisory Body on AI has developed a blueprint for addressing both the profound risks and the opportunities that AI presents to humanity, he noted, adding that it has also laid "the foundation for a framework that connects existing initiatives - and ensures that every nation can help shape our digital future".

Member States should move swiftly in establishing the International Scientific Panel on AI and launching the Global Dialogue on AI Governance within the United Nations, as set forth in the UN Global Digital Compact. "We must never allow AI to stand for 'Advancing Inequality'," he added, underscoring the need to support developing countries in building AI capabilities. "Members of this Council must lead by example and ensure that competition over emerging technologies does not destabilize international peace and security," he urged.

Fei-Fei Li, Sequoia Professor in the Computer Science Department at Stanford University, Co-Director of Stanford's Human-Centered AI Institute and Member of the Secretary-General's Scientific Advisory Board, speaking via videoconference, spotlighted a new technology called spatial intelligence, which allows AI systems to perceive and interact with the three-dimensional virtual and physical world. "This work has illuminated further promises of this technology, bringing us to some of the most exciting frontiers of innovation," she said, citing examples such as robots that navigate disaster zones to save lives, precision agriculture systems that address food insecurity and advanced medical imaging tools that improve healthcare outcomes.

"Yet, we must also remain vigilant," she warned, spotlighting AI's ability to harm. Member States must act with urgency and unity to ensure that "AI serves humanity rather than undermining it" and that "everyone has equitable access to AI tools".

A multilateral AI research institute - a network of research hubs bringing together experts from across disciplines and pooling resources across nations - would advance technological innovation and set global norms for responsible AI development and deployment, she said. Governments must foster public sector leadership, champion global collaboration and advance evidence-based policymaking; in doing so, "we can unlock AI's transformative potential while safeguarding its responsible development".

Also briefing the Council was Yann LeCun, Chief AI Scientist at Meta and Jacob T. Schwartz Professor of Computer Science, Data Science, Neural Science and Electrical and Computer Engineering at New York University, who said: "There is no question that, at some point in the future, AI systems will match and surpass human intellectual capabilities." By amplifying human intelligence, AI may bring not just a new industrial revolution but "a new period of enlightenment for humanity", contributing towards the maintenance of international peace and security by "supercharging the diffusion of knowledge and powering global economic growth".

"Governments and the private sector must work together to ensure this global network of infrastructure exists to support AI development in a way that enables people all over the world to participate in the creation of a common resource," he said. International cooperation must focus on two initiatives: first, collecting cultural material, providing AI-focused supercomputers in multiple regions around the world and establishing a modus operandi for the distributed training of a free and open universal foundation model; and second, unifying the regulatory landscape so that the development and deployment of open-source foundation models is not hindered.

Regarding Governments' concerns about a handful of companies controlling the "digital diet of their citizens", he said Meta has taken a leading role in producing and distributing free and open-source foundation models. On AI-generated disinformation, he said: "There is no evidence that current forms of AI present any existential risk, or even a significantly increased threat over traditional technology such as search engines and textbooks."

In the ensuing high-level discussion, Council members underscored the urgent need for coordinated action to prevent the misuse of AI, especially threats to global peace and security, while spotlighting various governance initiatives.

Antony J. Blinken, Secretary of State of the United States and Council President for December, speaking in his national capacity, said that while AI can help achieve 80 per cent of the Sustainable Development Goals, it can also be deployed for destructive and hard-to-trace cyberattacks, and used by repressive regimes to target journalists. Urging States to condemn and reject its malicious use by any actor, he said his country has been working to set rules around the use of AI and to mobilize a collective response. Leading American technology companies have committed to the use of watermarks for AI-generated content, for example, and last month an international network of artificial intelligence safety institutes was launched to set benchmarks for testing and safety.

/Public Release. This material from the originating organization/author(s) may be of a point-in-time nature and edited for clarity, style and length. Mirage.News does not take institutional positions or sides; all views, positions and conclusions expressed herein are solely those of the author(s).