
The challenge of artificial intelligence for security and defense

ESSAY*

18 | 10 | 2023


AI brings undoubted advantages in information gathering, decision making and system autonomy, but it poses major ethical, legal and strategic challenges.

In the picture: the scope of an autonomous weapon [International Committee of the Red Cross].

Twenty-five years ago, a supercomputer defeated a reigning chess champion, the Russian Garry Kasparov, for the first time, a historic milestone that was, nevertheless, only the beginning of a long road for Artificial Intelligence (AI).

Since then, AI has grown exponentially and played an increasingly prominent role in numerous aspects of human life. As we move through the 21st century, AI has demonstrated its ability to transform industries, improve efficiency in a variety of areas, and open up new possibilities in research and innovation.

One of the most impactful and, at the same time, most challenging areas where AI is making its mark is the field of security and defense. In a world characterized by the increasing complexity of threats and the constant evolution of conflicts, the integration of AI into military strategies and operations raises a number of crucial ethical and strategic questions that must be carefully addressed to ensure a balance between the transformative power of technology and the preservation of global security. This raises the question: how does AI pose challenges to security and defense?

Development and applications of AI in security and defense

AI has seen significant advances in recent decades, and its application in the security and defense arena has revolutionized the way governments and armed forces address contemporary challenges. From intelligence gathering to strategic decision making, AI has proven its worth in a number of areas crucial to national security.

One of the most remarkable aspects of AI in defense is its ability to analyze large amounts of data in real time. AI systems can process information from multiple sources, such as sensors, satellites and surveillance networks, and extract patterns and trends that would be difficult to detect by traditional methods. Image processing algorithms can identify anomalous objects, patterns and behaviors, improving early threat detection and target identification.
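By way of illustration only, the sketch below captures the flavor of this kind of pattern extraction in a few lines of Python: a statistical baseline is fitted on "normal" imagery, and incoming frames whose simple statistics deviate sharply from it are flagged for a human analyst. The statistic, the threshold and the synthetic data are didactic assumptions, not the method of any fielded system.

```python
# Minimal anomaly-flagging sketch (didactic assumptions throughout).
import numpy as np

def fit_baseline(frames):
    """Learn the mean/std of a simple per-frame statistic from normal imagery."""
    stats = [frame.mean() for frame in frames]
    return float(np.mean(stats)), float(np.std(stats)) + 1e-8

def flag_anomalies(frames, baseline, z_threshold=3.0):
    """Return indices of frames whose statistic deviates strongly from baseline."""
    mean, std = baseline
    return [i for i, frame in enumerate(frames)
            if abs(frame.mean() - mean) / std > z_threshold]

# Usage with synthetic data: 100 "normal" frames plus one brighter outlier.
rng = np.random.default_rng(0)
normal = [rng.normal(0.5, 0.05, (64, 64)) for _ in range(100)]
outlier = rng.normal(0.9, 0.05, (64, 64))
print(flag_anomalies(normal + [outlier], fit_baseline(normal)))  # includes index 100
```

Real systems replace the per-frame mean with rich learned features (for example, deep object detectors), but the decision structure is the same: score against a baseline and escalate outliers to a human.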

AI has also played a crucial role in the development of autonomous systems and unmanned vehicles. Drones and autonomous ground vehicles have proven useful in gathering information in hostile and hazardous environments, conducting reconnaissance missions, and executing search and rescue operations. These systems can be controlled remotely or follow pre-programmed routes using AI algorithms, which reduces the risk to military personnel and allows for greater flexibility in mission planning.
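In its most basic form, the pre-programmed-route behavior mentioned above reduces to stepping through a list of waypoints. The following deliberately simplified sketch is a hypothetical illustration; real autopilots layer sensing, control loops and fail-safes on top of this idea.

```python
# Toy waypoint follower: advance toward each waypoint in sequence.
import numpy as np

def follow_route(position, waypoints, step=0.5, tolerance=0.1):
    """Move toward each waypoint in turn; return the path taken."""
    path = [position.copy()]
    for wp in waypoints:
        while np.linalg.norm(wp - position) > tolerance:
            delta = wp - position
            dist = np.linalg.norm(delta)
            position = position + delta / dist * min(step, dist)  # never overshoot
            path.append(position.copy())
    return path

# Usage: travel from the origin through two hypothetical waypoints.
path = follow_route(np.array([0.0, 0.0]),
                    [np.array([2.0, 0.0]), np.array([2.0, 3.0])])
print(len(path), path[-1])  # ends (approximately) at the last waypoint
```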

Challenges

However, as AI becomes an integral part of defense, significant ethical and legal challenges arise. Autonomous decision making by AI systems raises questions about responsibility and accountability in the event of incidents. In addition, the possibility that AI could be used in offensive operations has generated debates about the need for international regulations to control its development and use.

Cybersecurity

The U.S. Department of Defense considers that "AI is an essential tool for predicting, identifying, and responding to cyber-attacks and other physical threats from diverse sources"[1]. To that end, it is pursuing "public-private cooperation, selecting commercial and academic partners to develop these new systems, with a focus on areas such as data processing, assessment and testing of new AI-based systems and, fundamentally, the broad spectrum of cybersecurity".

However, it is important to note that artificial intelligence systems can introduce new weaknesses into security strategies against potential cyberattacks. In this regard, it is crucial to consider that a potential adversary could employ malware to take control of, influence or distort the operation of AI systems intended for offensive or defensive purposes. In addition, we should not overlook the possibility of manipulating the patterns and architecture of autonomous systems.
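To make this manipulation risk concrete, the toy example below shows a gradient-style adversarial perturbation, in the spirit of the well-known fast gradient sign method, flipping a linear classifier's decision. The weights and inputs are invented for illustration; the point is only that a small, targeted distortion of the input can subvert an AI system's output.

```python
# Toy adversarial perturbation against a hypothetical linear classifier.
import numpy as np

w = np.array([1.0, -2.0, 0.5])   # made-up model weights
b = 0.1

def predict(x):
    """Bare linear classifier: 1 if the score is positive, else 0."""
    return 1 if w @ x + b > 0 else 0

x = np.array([0.2, -0.4, 0.3])   # a benign input, classified as 1
eps = 0.4
x_adv = x - eps * np.sign(w)     # nudge each feature against the score
print(predict(x), predict(x_adv))  # 1 0 -> decision flipped
```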

Attackers also have a very wide range of possibilities for applying artificial intelligence or machine learning technologies to their advantage. And, in fact, they have already begun to do so, thanks to the extraordinary adaptability of these technologies[2].

From all this we can deduce the beginning of a kind of 'arms race' of artificial intelligence against artificial intelligence. And in this race, "cyber defenders have the disadvantage of being like soccer goalkeepers: they have to succeed in all their interventions. Attackers, on the other hand, only have to convert some of their chances. And in a game that never ends."

For an extended period, human involvement will remain as essential as it is decisive. Just as defensive strategies can benefit extensively from artificial intelligence, it is clear that aggressors will gain similar advantages. Therefore, it does not seem unreasonable to expect that the perennial confrontation between defenders and attackers, which began three decades ago, will continue without a definitive end on a board increasingly, though never totally, influenced by artificial intelligence.

Ethics

While investment in AI for military use offers potential strategic advantages, it also raises ethical concerns and security issues[3].

One of the main problems posed by AI is the delegation of functions to an algorithm. Regarding the selection and engagement of targets, and the use of autonomous systems in this task, it is argued that the responsibility for this decision cannot be left to machines and robots, given their lack of empathy, if they become "capable of selecting targets and attacking them on their own"[4]. The criticism is justified on the grounds that autonomous systems, and the AI that runs them, are incapable of discerning the complex situations that can occur on the battlefield, such as the possibility that certain targets have lost their military value, or of assessing whether a target intends to attack or surrender. For example, "assessing whether a tank is a military target or whether the autonomous lethal weapon system would accept its surrender is not just a matter of having intelligent algorithms with high discernment capabilities. Instead, we have to consider the underlying values that we, as the humans developing such algorithms, should be able to instill in them."

Although AI and robotics aspire to improve the performance of humans and reduce their limitations, machines lack human-like intelligence and our capacity for social interaction, which allows us to recognize and interpret complex social behavior supported by different codes of signs and signals, mediated by cultural patterns and also by complex moral circumstances, such as those occurring on the battlefield.

Consequently, several initiatives have expressed concern about the inappropriate, premature or malicious use of new technologies, indicating the need for ethical codes of conduct that promote the appropriate use of artificial intelligence. Among them is 'Stop Killer Robots', launched in 2013 by the Nobel Peace Prize laureate Jody Williams to promote a ban on what it calls "killer robots": systems that in the future will be able to choose and fire on targets without any human intervention.

Responsibility, human or machine?

It is undeniable that ethics and the legal system have evolved in response to human needs, not with the purpose of addressing issues related to machines. Therefore, objections arise to the notion of granting legal personality to robots, which would imply the ability to assign them responsibility for their actions or for the consequences of their actions. Consequently, "it is the scientists who program algorithms and develop AI, and in any case the States, who cannot disclaim their responsibility and must regulate the use of these systems, especially in their military use. It is researchers and states that must uphold ethical principles in their actions and take legal responsibility in determining their employment and use".

A central issue for ethical consideration is the autonomy of these systems and the control exercised over them by humans, who cannot abdicate responsibility for the results of actions taken with weapons that become lethal. How can autonomous weapons be held accountable? Who is to blame if a robot commits war crimes? Who would be prosecuted? The weapon? The soldier? The soldier's commanders? The corporation that manufactured the weapon? NGOs and international law specialists express concern about the risk that autonomous weapons create a significant lack of transparency in accountability. In the face of possible errors in operation and choices made by automated systems, there is an ethical need to maintain a constant level of human control, ensuring that there is always a responsible individual and that accountability for actions and decisions is verifiable.

Geopolitics

It is commonly understood in international relations that the international order of the 21st century will be determined by the power granted by technology. The President of the Russian Federation, Vladimir Putin, speaking to a group of Russian journalists and students in 2017, declared that "whoever becomes the leader in this sphere [artificial intelligence] will become the ruler of the world." The reality is that many countries are developing AI. The United States, China and Russia are increasingly convinced that their global leadership will be determined by how powerful their AI is[5].

In this new scenario, the United States and China emerge as the two great powers that will presumably dominate cyberspace. "Europe, for its part, seems to lack these technological capabilities and runs the risk of suffering a sort of 'cyber-vassalage' or 'cyber-colonization', with the dangers that this entails for European independence and autonomy in the global context"[6].

Moreover, the use of artificial intelligence to manipulate data on a large scale has transformed the global economic environment and the development of the major global technology corporations, such as Google (Alphabet), Apple, Amazon, Facebook (now Meta) and Microsoft, along with the Chinese companies Tencent and Alibaba. These companies wield global technological power, as they dominate cyberspace through the use of advanced tools based on artificial intelligence and the manipulation of huge volumes of data from different users, including individuals, companies and institutions around the world. This often underestimated geopolitical power exerts significant influence on the investment decisions of numerous governments internationally, as well as on the technological development of companies and universities in the public and private spheres across a wide range of sectors.

Responses and strategies

Earlier this year, more than 1,300 people, including Tesla and SpaceX founder Elon Musk, Apple co-founder Steve Wozniak and the historian Yuval Noah Harari, signed an open letter calling for a slowdown in the development and deployment of advanced AI in order to properly manage and control the "profound risks to society and humanity" that it poses. Governments around the world are now beginning to take steps to regulate these systems.

The UN and the Convention on Certain Conventional Weapons (CCW)

The 125-nation CCW, which meets periodically in Geneva, has been discussing possible limits on the use of lethal autonomous weapons, which are completely operated by machines and use new technologies such as artificial intelligence and facial recognition.

The UN Secretary-General, António Guterres, had urged countries to present an "ambitious plan" for new rules.

The Geneva talks, ongoing for eight years, have taken on a new urgency since a UN panel reported in March 2021 that the first autonomous drone strike may have already occurred in Libya[7].

Countries participating in the UN talks on autonomous weapons stopped short of opening negotiations on an international treaty to regulate their use, agreeing simply to continue discussions[8]. The International Committee of the Red Cross and several NGOs had been pushing for negotiators to begin work on an international treaty that would establish new legally binding rules on machine-operated weapons.

Regulation by the EU and the United States

The great military potential of AI highlights the need for the EU to regulate these technologies, as the current legislation, the Artificial Intelligence Act, does not address the most serious implications of military AI. It is urgent to establish an effective legal framework and to promote responsible and ethical AI.

The proposed Regulation on Artificial Intelligence (the AI Act) "promotes uses of AI that are ethical and respect fundamental rights, but discreetly mentions in a footnote that military uses of AI do not fall within its scope"[9].

This leaves member states a wide margin of maneuver to regulate the use of AI in warfare. Given that the Union's investment in AI and other advanced technologies will reach almost €8 billion between 2021 and 2027, this is worrying. Such investment is possible thanks to the European Defense Fund and the fact that the EU does not ban the use of autonomous weapons, despite resolutions passed by the European Parliament in 2014, 2018 and 2021.

Despite the exclusion of military artificial intelligence, the AI Act will have a considerable impact on European security. This is because many AI systems have a dual-use nature, meaning that they can be applied in both civilian and military contexts. For example, a pattern detection algorithm can be developed to identify cancer cells or to select targets in military operations.
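A minimal sketch of this dual-use point: the training routine below is identical whether the labels mean "cancerous cell" or "valid military target"; only the data changes. The dataset names in the commented calls are hypothetical placeholders.

```python
# The same pattern detector, two very different uses.
import numpy as np

def train_pattern_detector(X, y, lr=0.1, epochs=200):
    """Fit a minimal logistic-regression pattern detector by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)     # gradient step on the log-loss
    return w

# Identical code path, divergent ends (placeholder data names):
# w_medical  = train_pattern_detector(cell_image_features, is_cancerous)
# w_military = train_pattern_detector(sensor_features, is_valid_target)
```

Nothing in the function itself distinguishes the two uses, which is why risk-based regulation of such systems hinges on the context of deployment rather than on the algorithm.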

In dual-use cases such as these, the AI Act would be applicable, as it requires systems to comply with its regulations regarding high-risk AI. However, implementing these regulatory requirements may be problematic for systems operating autonomously or in classified environments. In addition, most defense organizations do not closely follow developments in civilian digital policy, which could leave them ill-prepared to comply with the AI Act once it goes into effect.

At the political level, governments are becoming increasingly involved in the regulatory issues surrounding military AI. The Dutch and South Korean governments jointly organized a summit on Responsible AI in the Military Domain (REAIM) in February 2023, which brought together more than 50 government representatives to endorse a joint call to action with the goal of placing "the responsible use of AI more prominently on the political agenda." The defense departments of Canada, Australia, the US and the UK have already established guidelines for the responsible use of AI. NATO adopted its own AI Strategy in 2021, along with a Data and Artificial Intelligence Review Board (DARB) dedicated to ensuring the legal and responsible development of AI through a certification standard. However, NATO's AI Strategy may face implementation hurdles.

Apart from France's public AI Defense strategy, there is no EU-wide legal and ethical framework for military uses of AI. As a result, Member States may take different approaches, which will lead to gaps in regulation and oversight.

Therefore, the European Union should take a proactive role and establish a framework covering both dual-use and military applications of artificial intelligence. This could be done through a European strategy aimed at promoting the responsible use of artificial intelligence in defense, based on the risk categorization of the AI Act. Such an approach would provide guidance to defense institutions and industry in promoting the responsible development, acquisition and use of artificial intelligence, underpinned by shared values.

Spain

The Ministry of Defense has regulated the future use that the Armed Forces may make of artificial intelligence through Resolution 11197/2023, approving the "Strategy for the development, implementation and use of Artificial Intelligence in the Ministry of Defense"[10]. In the preamble to the strategy, Defense stresses that "the people who serve in the Armed Forces are, and will continue to be, the most valuable asset of the Department".

As a consequence, the Ministry proclaims that "data and AI-enabled information, tools and applications will be used to enhance people's understanding and capabilities, not with the aim of replacing them, but of complementing them and enabling them to bring greater value to their activities, while aligning with applicable ethical principles".

On the other hand, the principle of "human responsibility and accountability" stands out in particular. The Ministry of Defense guarantees in writing that "any development of artificial intelligence, as well as its use, must allow for clear human oversight in order to ensure proper accountability and responsibility attribution".

Conclusions

In conclusion, AI applications in the military domain bring undoubted advantages, as AI has transformed military operations by improving information gathering, decision making and the autonomy of systems such as unmanned vehicles. However, there are significant ethical, legal and strategic challenges. In terms of cybersecurity, there are warnings about the possibility of cyberattacks and about an arms race of AI against AI. Ethics emerges as a central point, questioning the delegation of autonomous decisions to algorithms and the responsibility of machines in conflict situations.

In addition, AI is having a major impact on geopolitics, with the United States, China and Russia competing for leadership in AI. Artificial intelligence, as explained, is the field where the new world order will be determined. A new order that, as has been the trend in human history, will be shaped by the structures of political and, therefore, military power, where China and the United States appear as the two great powers of this century, without forgetting other countries that are also maneuvering to occupy and defend the geopolitical spaces they consider their own.

As for the different responses and strategies to the challenges posed by AI, although the EU seeks to promote the responsible use of AI through regulation, that regulation is not fully oriented to defense, so the need for a more solid and coherent legal framework arises. As a result, states are independently regulating the use of AI in defense through national strategies. This poses a problem, as the use of AI must be addressed appropriately and jointly in order to maximize its benefits and mitigate its risks.

* Communication presented at the XXX International Defense Course, "Los motores de cambio de la seguridad y la defensa", Jaca, September 25-29, 2023.

REFERENCES


[1] Olier, Eduardo & Corchado, Juan Manuel (2022). "Artificial Intelligence: Applications to Defense". Spanish Institute for Strategic Studies. https://www.ieee.es/Galerias/fichero/docs_investig/2022/DIEEEINV01_2022_EDUOLI_Inteligencia.pdf

[2] Ministry of Defense (2020). "Military Uses of Artificial Intelligence, Automation and Robotics (IAA&R)". Joint Center for Concept Development. https://emad.defensa.gob.es/Galerias/CCDC/files/USOS_MILITARES_DE_LA_INTELIGENCIA_ARTIFICIALx_LA_AUTOMATIZACION_Y_LA_ROBOTICA_xIAAxRx.-_VV.AA.pdf

[3] Morgan, F., Boudreaux, B., Lohn, A., Ashby, M., Curriden, C., Klima, K., & Grossman, D. (2020). "Military Applications of Artificial Intelligence. Ethical Concerns in an Uncertain World". Rand Corporation. https://www.rand.org/content/dam/rand/pubs/research_reports/RR3100/RR3139-1/RAND_RR3139-1.pdf

[4] IEEE (2018). "Artificial intelligence applied to defense". Security and Defense Papers 79. Spanish Institute for Strategic Studies. https://publicaciones.defensa.gob.es/la-inteligencia-artificial-aplicada-a-la-defensa-n-79-libros-pdf.html

[5] Romero Mier, S. G. (2019). "Artificial Intelligence as a tool of strategy and security for the defense of states". Revista de la Escuela de Guerra Naval. Escuela de Guerra Naval, Peru.

[6] Olier, E. Op. cit.

[7] Vega, Guillermo (May 28, 2021). "UN reports first autonomous drone strike on people". El País. https://elpais.com/tecnologia/2021-05-28/la-onu-informa-del-primer-ataque-de-drones-autonomos-a-personas.html

[8] Farge, Emma (December 17, 2021). "UN talks adjourn without deal to regulate 'killer robots'". Reuters. https://www.reuters.com/article/us-un-disarmament-idAFKBN2IW1UJ

[9] Fanni, Rossana (June 28, 2023). "EU must address risks posed by military AI". Política Exterior. https://www.politicaexterior.com/inteligencia-artificial-militar-union-europea/

[10] Ruiz Enebral, Aurelio (July 14, 2023). "Defensa garantiza que el uso militar de la inteligencia artificial siempre tendrá un control humano". Confidencial Digital. https://www.elconfidencialdigital.com/articulo/defensa/defensa-garantiza-que-uso-militar-inteligencia-artificial-tendra-siempre-control-humano/20230712171647607253.html