Each major world conflict has marked a before and after in history, both in technological and in strategic terms. In the current era, despite the absence of an open world conflict, we are likewise living through a time of change, ostensibly in the field of civilian technologies but with very high potential in the military industry, as is the case with artificial intelligence (AI).
AI can be defined as "a technology that makes it possible to simulate human capabilities in problem solving, as well as to learn through trial and error or from data drawn from a specific source". In this definition, it is worth noting the importance of the word "simulate", since this technology is currently incapable of thinking abstractly like a human being.
The last few years have witnessed a boom in AI, from the appearance of models such as ChatGPT to systems capable of generating highly realistic images. These are examples of 'generative' AI, which can create something new, such as answers or images, and which represents progress over 'traditional' or 'analytical' AI, which performs specific tasks more efficiently by learning from data and making decisions. Of these, analytical AI has the greatest potential in industrial and military applications, specifically in the fields of strategy and intelligence, where it could mark a turning point.
This review explores some of the various applications of AI. It addresses the current uses of AI in military conflicts and delves into two specific applications in the military domain: autonomous weaponry and threat prevention. AI is a very recent technology and, as is well known, the unknown is frightening. For this reason, the article also analyzes the risks associated with this technology, both in the military sphere and in the human sphere in general.
AI has multiple military applications, such as cybersecurity, detection of real threats, and many others. For example, AI will make it possible to automate cyberattacks, increasing their speed and enabling vulnerabilities to be exploited more efficiently. It also has defensive applications, since it can be used to analyze large volumes of information and to recognize anomalous patterns, allowing a rapid response to threats or even preventing their execution.
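As a purely illustrative sketch of what anomalous-pattern recognition can look like in practice, the following example trains an unsupervised anomaly detector on synthetic network-flow features. The feature set, the numbers and the choice of scikit-learn's IsolationForest are assumptions made for the example, not a description of any fielded defensive system.

```python
# Illustrative sketch only: flagging anomalous network flows with an
# unsupervised detector. All features and figures are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" traffic: [bytes sent, packets, distinct ports contacted]
normal = rng.normal(loc=[500, 40, 3], scale=[100, 10, 1], size=(1000, 3))

# A few anomalous flows: unusually large transfers touching many ports
anomalies = rng.normal(loc=[5000, 300, 40], scale=[500, 30, 5], size=(5, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns -1 for flows the model considers anomalous, 1 otherwise
print(model.predict(anomalies))   # expected: mostly -1
print(model.predict(normal[:5]))  # expected: mostly 1
```

The point of the sketch is the workflow rather than the model: large volumes of routine traffic define a baseline, and deviations from it are surfaced for human analysts to examine.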
One of the most important challenges is the employment of AI in disinformation campaigns, creating false content that can influence public opinion or destabilize adversaries. In addition, the difficulty of attributing attacks to specific actors is compounded by the use of advanced techniques by attackers. Cyber threats are constantly evolving, and AI allows attackers to adapt quickly to existing defenses, creating a continuous cycle of attack and defense.
AI in modern warfare
AI is beginning to be used in international conflicts, such as the one currently being fought in the Gaza Strip. According to Israeli intelligence testimonies, the "Lavender" program has been in use since May 2021. It is a probabilistic model that, with unknown parameters, ranks the population of the Gaza Strip on a scale of 0 to 100 according to their ties to Hamas, drawing on a surveillance network installed in the area. According to the Israel Defense Forces (IDF), this system is used as "an additional tool in the target identification process", showing an efficiency of 90%. This system is complemented by two others, "Where's Daddy?" and "The Gospel": the former tracks targets so they can be attacked, while the latter identifies buildings from which Hamas militants allegedly operate.
While AI is applied in that theater for purely probabilistic purposes, in pursuit of offensive objectives, in the Ukrainian theater it is put to a completely different, but no less decisive, use. As is well known, Russia is one of the countries that invests the most in AI. As early as 2017, President Vladimir Putin stated that "whoever is the leader of artificial intelligence will be the leader of the world." Accordingly, Moscow has focused its efforts mainly on leading the application of AI in military technology, particularly in areas such as robotics, cyber defense and disinformation, implementing it in more than 150 military systems, both consolidated and experimental.
AI can have an important application in support of open source intelligence (OSINT) analysis, focused on obtaining, analyzing and exploiting unclassified, publicly available information. The analysis of sources such as social networks and the media allows the generation of useful knowledge in multiple fields. Typically, this type of analysis requires a large number of trained analysts to read and assimilate all the sources, discerning which are genuine and which are 'fake news' or 'deepfakes'. AI facilitates this process by making it possible to identify relevant information within large volumes of data in much less time and by helping to assess the veracity of that information.
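By way of illustration only, the minimal sketch below shows one way such triage could be automated: a simple text classifier that flags potentially relevant open-source items for an analyst's attention. The tiny training set, the labels and the choice of a TF-IDF plus logistic-regression pipeline are invented for the example and do not represent any system actually in use.

```python
# Minimal, illustrative sketch: triaging open-source text by relevance.
# The training examples and labels below are invented for demonstration.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

train_texts = [
    "column of armored vehicles spotted near the border crossing",
    "local football team wins the regional championship",
    "satellite imagery shows new fortifications along the river",
    "new restaurant opens downtown to large crowds",
]
train_labels = [1, 0, 1, 0]  # 1 = potentially relevant, 0 = not relevant

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

new_items = ["troop movements reported on social media near the coast"]
print(clf.predict(new_items))         # predicted relevance label
print(clf.predict_proba(new_items))   # confidence score, useful for triage
```

A real OSINT pipeline would of course require far larger corpora, multilingual handling and separate models for veracity assessment; the sketch only conveys the basic idea of automated pre-filtering.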
NATO uses artificial intelligence (AI) in open source analysis (OSINT) to locate Russian installations, units and materiel near Europe's northern border using tools such as Google Earth. The Atlantic Alliance not only employs this technology for military purposes, but also for geopolitical ones, such as monitoring and tracking melting ice in the Arctic Ocean, which has significant implications for maritime trade. However, AI can be a double-edged sword since, like any technology based on computer systems, the adversary can also use similar tools for intelligence purposes.
Artificial intelligence is a technology that complements other intelligence disciplines beyond OSINT. For example, its application in SIGINT (signals intelligence), GEOINT (geospatial intelligence) and even HUMINT (human source intelligence) can enhance data collection and analysis capabilities. These disciplines are critical to a more complete and accurate understanding of the operational environment.
Autonomous weaponry
Autonomous weaponry, as commonly imagined, seems to belong more to a science fiction movie than to reality. However, we are at a key moment in the military industry, in which the use of AI can make a difference and turn this vision into reality. Numerous media outlets and experts in the field have already affirmed as much. For example, Paul Scharre of the Center for a New American Security (CNAS) stated that "when this conversation started, about a decade ago, it was really a bit of science fiction. And now, it's not at all anymore. The technology is very very real." Moreover, because huge amounts of money are being invested in the civilian world in the design of autonomous vehicles, among other developments, it is almost unquestionable that at some point these advances will move into the military world. One of the companies most involved in the development of AI-powered autonomous weaponry is ShieldAI, which creates drones that work together through AI. However, the company does not yet have drones with these features.
It is important to differentiate between autonomous and automatic weaponry. Automatic weaponry, which has been in use for more than a century, operates without human intervention and does not distinguish between threats, allies or civilians. This is the case, for example, with anti-personnel mines. Autonomous weaponry, on the other hand, uses artificial intelligence and advanced algorithms to make decisions on the use of force without direct human intervention. These systems can identify and engage targets independently, handling complex environments and making rapid decisions, with the ability to discern between targets with a certain degree of confidence. This is not a single technology but an "enabling feature," as AI "enables new functions to be added to the tools of war to potentially make them more efficient, cheaper, more compact and more autonomous."
As noted above, Russian President Vladimir Putin stated: "Whoever becomes the leader in this sphere [of AI] will rule the world". However, the real key to getting ahead of other countries lies in controlling the AI software rather than in building ever more impressive weapons.
Why put the emphasis on software rather than on weaponry? Quite simply because its greatest virtue is, ironically, its worst flaw. Just as an AI can be taught to identify targets correctly, a slight change in the input features can make it classify them in a completely opposite way (these are known as 'adversarial examples'). By altering just a few pixels, a person would see the same thing, but an AI can see something totally different (at MIT, minimal changes led an AI to mistake a turtle for a rifle three times in a row). This vulnerability is critical, yet it remains the 'elephant in the room that no one sees'.
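For readers unfamiliar with how such adversarial examples are produced, the sketch below illustrates the fast gradient sign method (FGSM), one widely documented technique: a tiny perturbation, proportional to the sign of the loss gradient, is added to the input. The toy model and random input are placeholders; a real attack would target an actual trained classifier.

```python
# Illustrative FGSM sketch (PyTorch). Model and input are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in input image
true_label = torch.tensor([7])                        # stand-in correct class

loss = nn.CrossEntropyLoss()(model(image), true_label)
loss.backward()

epsilon = 0.01  # perturbation budget: small enough to be barely visible
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

# The two inputs differ by at most epsilon per pixel, yet against a trained
# model the predicted class can flip entirely.
print(model(image).argmax(dim=1), model(adversarial).argmax(dim=1))
```

The unsettling asymmetry is that finding one such perturbation is cheap, while defending against all of them remains an open problem.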
To conclude, it is clear that the integration of these systems on the battlefield can improve coordination and operational effectiveness, although it also entails risks related to possible communication failures or errors in decision making, the Achilles' heel already mentioned above. On the other hand, the use of autonomous weapons raises important ethical dilemmas, such as concerns about targeting decisions that could involve civilians and the risk that automation lowers the threshold for entering a conflict.
As artificial intelligence and robotics continue to evolve, autonomous weaponry will also advance, highlighting the need for clear regulatory and ethical frameworks to ensure its responsible application and to understand its impact on society and warfare.
Threat prevention
In the context of artificial intelligence for cyberspace security and defense, preventing potential threats focuses on identifying and mitigating vulnerabilities before attacks materialize.
Several key strategies and technologies in this area are worth mentioning. One of them is user profiling, a project of SIA, a Spanish cybersecurity company belonging to Indra, which applies supervised learning techniques in the banking sector. This approach makes it possible to implement customized protection measures for each threat, which is crucial given that users are one of the main vulnerabilities of these systems.
Another important strategy is attack detection, a step prior to reaction. There are commercial systems that use artificial intelligence to discriminate between legitimate and malicious traffic, especially in the case of distributed denial-of-service (DDoS) attacks, enabling a faster and more effective response to these threats.
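As a rough illustration of the underlying idea, and not of any commercial product, the sketch below trains a supervised classifier on synthetic flow features (requests per second, average payload size, number of unique source addresses) to separate legitimate from attack-like traffic; all names and numbers are invented.

```python
# Illustrative sketch: separating "legitimate" from "attack-like" flows.
# Features: [requests/second, avg payload bytes, unique source IPs]. Synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

legit = rng.normal(loc=[50, 800, 20], scale=[10, 100, 5], size=(500, 3))
ddos = rng.normal(loc=[5000, 60, 2000], scale=[500, 20, 200], size=(500, 3))

X = np.vstack([legit, ddos])
y = np.array([0] * 500 + [1] * 500)  # 0 = legitimate, 1 = malicious

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

suspect_flow = [[4200, 70, 1800]]  # a new flow with attack-like characteristics
print(clf.predict(suspect_flow))        # expected: [1]
print(clf.predict_proba(suspect_flow))  # class probabilities for alert scoring
```

In practice the decisive work lies in feature engineering and in keeping such models current against evolving attack patterns, which is precisely where the continuous attack-and-defense cycle described earlier plays out.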
Lastly, cyber threat hunting deserves mention: a proactive and iterative process for identifying potential threats. Partial automation of this search using AI can improve its efficiency, suggesting that more advanced approaches in this field will be developed in the future. Outside the realm of cyber threats, this technology also shows promise in detecting physical threats on the battlefield. As early as the war in Afghanistan, Michael Kanaan, a former U.S. Air Force intelligence officer, highlighted the importance of spectral imaging for detecting objects hidden from the naked eye (explosives, tanks...). Kanaan stated that this system helped them "eliminate thousands of pounds of explosives on the battlefield. However, we spent too much time looking at the data and not enough time making decisions. At times it took so long that we wondered if we could have saved more lives." This is a clear example of what this technological breakthrough can mean for threat prevention, enabling more immediate and accurate action, as well as gathering and generating massive amounts of useful front-line intelligence.
These strategies reflect a comprehensive approach to threat prevention, combining advanced technology with a deep understanding of system vulnerabilities and attacker tactics.
Ethics. Risks of using AI for control purposes.
Within this section, it is necessary to emphasize multiple aspects, such as the propensity of the system to make mistakes, the dehumanization of conflicts or decision making. All these questions arise during the research on the topic, and could be summarized in a single question: where should the limits be set?
Simplifying greatly, one of the applications of AI is to act as a probabilistic algorithm that drastically reduces calculation time. Like any estimate-based calculation, it is prone to errors that, without proper oversight, are unacceptable. The Israel Defense Forces (IDF) claim that their system is 90% accurate, which is indeed a mathematical achievement, but what happens when that calculation directly affects people's lives? As the number of cases grows, so does the absolute number of errors, and their impact becomes brutal. It may be objected that this is not a 'zero-error' policy and that errors are "treated statistically".
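A brief, purely illustrative back-of-the-envelope calculation makes the point: with a nominal 90% accuracy, the expected number of misidentifications grows linearly with the number of people flagged. The target counts below are arbitrary examples, not figures reported for any real system.

```python
# Back-of-the-envelope illustration only: how a fixed 10% error rate scales
# with the number of people a system flags. Counts are arbitrary examples.
accuracy = 0.90

for flagged in (1_000, 10_000, 30_000):
    expected_errors = (1 - accuracy) * flagged
    print(f"{flagged:>6} flagged -> ~{expected_errors:,.0f} expected misidentifications")
```

A rate that sounds acceptable in the abstract therefore translates, at scale, into thousands of individual errors, each of which may be a human life.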
Continuing with the idea of progressive trust in AI systems: when one trusts a program, even if only at 90%, one moves from thinking in terms of mere numbers, statistics and probabilities to assuming that its results are reliable enough to base decisions on. In this way, it can be said that, over time, we will move from using AI as an aid or optimization tool to using it as a tool on which decisions are fully based. This could be extrapolated to many areas of everyday life, but in warfare everything is magnified.
This brings the gradual shift described above to its logical conclusion: trust in AI systems evolves from treating them as a support tool to making them the basis for important decisions. It is the scientists who program the algorithms, and in any case the States, "who cannot discard their responsibility and must regulate... they must uphold ethical principles in their actions and take legal responsibility in determining their employment and use".
In this way, people cannot base their decisions solely on mathematics, because they would lose precisely what makes us human: morality and conscience. Without intending to, the population's capacity for conscientious objection could be suppressed, as well as the ability to reason and decide freely.
"Complying with the laws of war requires human reasoning ability, the skill to analyze the intentions behind actions and make complex decisions about the proportionality or necessity of each attack. Machines and algorithms cannot recreate these human abilities; they cannot negotiate, produce empathy or respond to unpredictable situations."
It should be borne in mind that in warfare these issues are often left 'in the back of the closet', since the focus is placed on very specific objectives. To illustrate how far humans are capable of going, here are some alarming statements on whether AI should be allowed to make lethal decisions on its own: "individual decisions versus not making individual decisions is the difference between winning and losing, and you're not going to lose"; "I don't think the people we might face would do that, and it would give them a huge advantage if we imposed that constraint on ourselves".
After this reflection, which has focused only on the negative aspects, it is important to emphasize that AI also has a positive side and that it will be a useful tool in many areas, both civilian and military. It all depends on us and on where we set the limits.
Conclusion
Artificial intelligence in security and defense offers great advances, but it also poses serious ethical and security challenges. Autonomous weapons systems, which can make decisions without human intervention, improve operational efficiency but also increase the risk of serious errors, especially in conflicts where civilian lives are at stake.
It is crucial to establish regulatory and ethical frameworks that guarantee the responsible use of these technologies, ensuring that critical decisions remain under human oversight. As AI evolves, the balance between innovation and ethics will be key to the future of war and peace.
* Communication presented at the XXXI International Defense Course, "Artificial Intelligence: Opportunities and Challenges for Security and Defense", Jaca, September 23-27, 2024.
BIBLIOGRAPHY
Abraham, Yuval. "'Lavender': The AI Machine Directing Israel's Bombing Spree in Gaza." +972 Magazine, April 25, 2024. https://www.972mag.com/lavender-ai-israeli-army-gaza/.
BBC News Mundo. "What Is AI? A Simple Guide to Understanding Artificial Intelligence." September 12, 2023. https://www.bbc.com/mundo/resources/idt-74697280-e684-43c5-a782-29e9d11fecf3.
CNI. "Obtaining." National Intelligence Center, June 21, 2023. https://www.cni.es/la-inteligencia/obtencion.
Cozzi Elzo, Francisco. "Inteligencia Artificial Y Sistemas De Armas Autónomos." Revista Marina, Chile, December 20, 2019. https://revistamarina.cl/es/articulo/inteligencia-artificial-y-sistemas-de-armas-autonomos.
Encyclopedia Meanings Team. "Intelligence: What It Is, Characteristics and Meaning in Psychology." Encyclopedia Meanings, June 20, 2024. https://www.significados.com/inteligencia/.
ISO. "What Is Artificial Intelligence (AI)?" International Organization for Standardization. https://www.iso.org/es/inteligencia-artificial/que-es-ia.
Jerez, Alexia Columba. "Artificial Intelligence also enters combat in the war in Ukraine." ABC Newspaper, May 1, 2022. https://www.abc.es/economia/abci-inteligencia-artificial-tambien-entra-combate-guerra-ucrania-202203140205_noticia.html.
Las Heras, Paula. "The Challenge of Artificial Intelligence for Security and Defense." Global Affairs & Strategic Studies, October 18, 2023.
European Parliament. "What Is Artificial Intelligence and How Is It Used?" September 8, 2020. https://www.europarl.europa.eu/topics/es/article/20200827STO85804/que-es-la-inteligencia-artificial-y-como-se-usa.
Perez, Lara. "AI: Main Differences Between Traditional And Generative." Contact Center Hub, September 4, 2024. https://contactcenterhub.es/diferencias-ia-tradicional-y-generativa/.
Porter, Tom. "Pentagon closer to having AI weapons that decide to kill humans." Business Insider Spain, November 23, 2023. https://www.businessinsider.es/pentagono-armas-ia-matar-humanos-autonomas-1339738.
Sánchez Tapia, Salvador (co), "Security in the Age of Artificial Intelligence," Global Affairs Journal, Center for Global Affairs & Strategic Studies, January 25, 2024. https://www.unav.edu/web/global-affairs/seguridad-en-la-era-de-la-inteligencia-artificial.
The Economist. "Así usa Ucrania la Inteligencia Artificial contra Rusia." La Vanguardia, April 12, 2024. https://www.lavanguardia.com/internacional/20240412/9592801/ucrania-rusia-guerra-inteligencia-artificial.html.