Artificial Intelligence and its Impact on International Relations
- Sean Rivero

Hollywood has long been captivated by Artificial Intelligence (AI). From the apocalyptic depiction of AI as "Skynet" in the Terminator franchise to Tony Stark's brilliant assistant J.A.R.V.I.S. in Marvel, pop culture has spent billions attempting to imagine what AI will look like. Yet despite these billion-dollar franchises being box-office hits, Hollywood has failed to predict how AI would penetrate and alter one of society's most consequential fields: international relations. AI is already embedded in monitoring systems for civil surveillance, in drones deployed in warfare, and even in government itself, as with Albania's newly AI-generated minister for public procurement, Diella. With every passing year, what Hollywood has never managed to conjure on film is becoming a reality: the full integration of AI into the world of international relations.
This is no fault of cinema: even the brightest minds in international relations are divided on the future of AI and its impact on politics. Governments and organizations across the globe are weighing the benefits and risks of AI implementation in governance. From these discussions, one question has sat at the center of every debate: should we introduce AI into governance and the political sphere more broadly? That debate is already moot because, whether we like it or not, AI has been active in politics for years. The real question is: how should we regulate the use of AI in governance and international relations?
Forms of Regulation
The idea that AI can be kept out of certain fields is already obsolete. As AI transforms on a daily basis, so must the debate surrounding it. The era of prevention is behind us; the world must now focus on regulation and AI transparency. The burden falls on the world's governments and international organizations to enact universal standards and laws that ensure AI is not abused by greedy technocrats. Fortunately, we are already witnessing a strong push towards AI regulation at both the national and international level.
At the national level, AI regulation and monitoring is being implemented through various avenues: via legislation or through governmental agency oversight. The United Kingdom, South Korea, and Singapore have all taken significant steps towards regulating AI. The UK has adopted a sector-based regulatory framework that allows for case-by-case implementation. South Korea and Singapore, for their part, have adopted innovative frameworks at the forefront of how a country can monitor the use of AI. Both take an innovation-first approach, allowing AI to continue playing an important role across industries while guarding against the abuse or mishandling of AI systems. South Korea's AI Basic Act balances AI innovation with safety and ethical standards. Singapore likewise focuses on innovation without sacrificing safety and privacy: while it does not enforce mandatory AI laws, it has articulated AI governance pillars that foster trustworthiness and accountability in AI implementation. The UK, South Korea, and Singapore are all instructive case studies for national regulatory frameworks in this area.
At the international level, AI regulation has over the past couple of years been at the top of the agenda for governments, universities, and international organizations alike. This has produced a greater push for cooperative global networks and the establishment of common ethical standards, some of which are already being implemented today. Various international organizations are working to create global principles for trustworthy AI, emphasizing transparency, safety, and human rights. The OECD AI Principles offer a framework that promotes innovative, trustworthy AI respecting human rights and democratic values. The UN is furthering this effort by promoting international cooperation on AI, and the UN Secretary-General has established a High-Level Advisory Body on AI, whose purpose is to ensure that AI benefits humanity and to minimize the risks of wrongful AI implementation. However, when it comes to AI regulation, one actor currently stands above the rest.
European Union at the Forefront of AI Regulation
The European Union is a leader in AI regulation, setting many of the standards for the future. The EU's AI Act demonstrates that the world is capable of regulating AI and ensuring it is used for the common good: in healthcare, in safer and cleaner transport, in more efficient manufacturing, and in cheaper and more sustainable energy. Passed in 2024, the Act is the first comprehensive legal framework on AI worldwide, and its rules aim to foster trustworthy AI implementation in the European Union. What the Act does well is the stance it takes towards AI: it treats AI as an invaluable tool if used correctly. By acknowledging the advantages of AI innovation, the Act builds regulatory measures that do not hinder innovation but instead allow it to flourish for the betterment of humanity across various fields.
The Act takes a risk-based approach to AI regulation, designed to ensure that European citizens can trust what AI has to offer. It reflects an understanding that the best path forward is to view AI as an ally rather than an existential threat; through this mindset, we can harness AI's undeniable capabilities and ensure that it benefits everyone. What makes the EU AI Act so impactful, and a model for others to follow, is that it outlines clear repercussions for violations of the framework. In December 2025, the European Commission opened formal proceedings against Meta, and in February 2026 it found Meta to be in breach of EU competition rules as they pertain to AI. Why is this significant? It shows that these AI frameworks are not mere "paper tigers" but carry enforcement mechanisms that can regulate the behavior of companies and actors in this field.
AI Race and Fragmentation
We live in a complex world of increasingly conflicting agendas and interests. In international relations, everything ultimately comes down to power projection, and nations see in AI a new way to project power once thought impossible. This has fueled an intensification of great-power competition, with AI innovation and use as one clear arena of confrontation. What has by now been dubbed the AI arms race resembles the space race of an earlier era: two global powers, the United States and China, view leadership in AI as essential to their ability to project power, making them willing to do whatever they must to achieve that goal.
One key consequence of the AI race between these two powers is growing international fragmentation on AI regulation. In 2025, President Trump revoked President Biden's AI executive order with the goal of overturning existing AI policies and directives seen as barriers to American AI innovation, clearing the path for the United States to act decisively to retain global leadership in artificial intelligence. On the flip side, China views AI as a tool for unprecedented economic growth. Funded by ample government spending, AI sits at the center of business priorities, consumer behavior, and economic growth in China; left unchecked, however, it could exacerbate income inequality and add to social stability risks.
Furthermore, the AI race will likely deepen this fragmentation and block the adoption of a cohesive global framework on AI regulation. Viewed through the lens of power projection, many nations will now look to bypass existing international regulation and delay developing national regulatory frameworks. As this "arms race" continues, nations will do what they must to be first to the top of the AI mountain, rendering many, if not all, AI regulation frameworks obsolete. As these actors pursue their divergent and competitive national agendas, international cooperation and coordination on AI regulation will continue to pay the price.
AI has immense potential for good. If left unchecked and unregulated, however, it risks opening a Pandora's box of consequences that may prove difficult to reverse. Fragmentation and competition risk making any form of internationally agreed AI regulation obsolete. AI is unlike any other global issue the world has faced: it cannot be treated in isolation, and there cannot be diverging approaches to it. The fallout of one country mishandling AI will be felt by every nation across the globe; global cooperation is therefore essential.
Implications
We stand at the precipice of a new world. AI is here and will have its say in the shape of the future. But it is not omnipotent: it can be controlled, and it can be used to secure a future of technological advances and medical discoveries once thought impossible. For that, the world in its entirety must be united in its approach to AI regulation; greed and political ambition cannot be the drivers of AI innovation. The first steps towards universal AI regulation have already been taken. The OECD AI Principles, the EU AI Act, and national oversight bodies are all steps in the right direction, but they are only first steps. We must build on them towards a future in which AI is regulated and monitored globally. It is the only way to ensure that AI is used for the betterment of humanity.
Sean Alejandro Rivero is currently an MA student in Political Science and Public Affairs at Saint Louis University - Madrid, building on the double major in History and International Business he completed as an undergraduate. Drawing on this academic background, Sean aims to build an interdisciplinary career with the ultimate goal of working for the U.S. government.
The OCC publishes a wide range of opinions meant to help our readers think about International Relations. This publication reflects the views of the author alone; neither the OCC nor Saint Louis University can be held responsible for any use which may be made of the author's opinions and/or the information contained therein.
To quote this article, please use the following reference:
Rivero, S. A. (2026, April). Artificial Intelligence and its Impact on International Relations. Observatory On Contemporary Crises. https://www.crisesobservatory.org/post/artificial-intelligence-and-its-impact-on-international-relations