Digital Fortress: Strategies for protecting artificial intelligence in armed conflicts

ARTICLE*

20 | 12 | 2024


Adversaries may attempt to target and neutralize these systems through cyberattacks, algorithm manipulation, or the disabling of critical networks

Artificial intelligence (AI) has revolutionized numerous industries, and the field of security and defense is no exception. From improving logistics to automating weapons systems, AI has transformed the way the military operates. However, as AI systems become more deeply integrated into military operations, so too do the associated risks, particularly in scenarios of armed conflict. Adversaries may attempt to target and neutralize these systems through cyberattacks, algorithm manipulation, or disabling critical networks. Protecting AI at all times is therefore not only vital but a strategic priority.

Artificial intelligence offers endless opportunities in the military field[1] and constitutes an important challenge for security and defense[2]. This makes it necessary for countries to take it into account in the development of their national security strategies[3].

First, the ability of AI to process large amounts of data in real time allows commanders to make informed and accurate decisions. This advanced data analysis can range from interpreting satellite imagery to predicting enemy movements based on behavioral and historical patterns, transforming the way the military operates[4].

In addition, AI is used to improve logistical efficiency, optimizing the distribution of resources on the battlefield and ensuring that troops are always supplied and ready[5].

Another key aspect is the use of AI in the automation of weapons systems. Drones and autonomous vehicles are capable of operating in dangerous environments without putting human lives at risk[6]. These systems not only increase operational capability, but also enable missions that would otherwise be too risky.

Likewise, AI is fundamental in cybersecurity, as it can detect and neutralize threats in real time, protecting critical infrastructures and communication networks.
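The real-time threat detection mentioned above can be illustrated with a deliberately minimal sketch (not from the article): a statistical anomaly check flags traffic volumes that deviate sharply from a learned baseline. The data, threshold, and function names are hypothetical; production systems use far richer models.

```python
# Illustrative sketch: real-time threat detection via a simple
# statistical anomaly check on network traffic volume.
import statistics

def is_anomalous(history, value, z_threshold=3.0):
    """Flag a new observation whose deviation from the baseline mean
    exceeds z_threshold standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(value - mean) > z_threshold * stdev

baseline = [100, 102, 98, 101, 99, 103, 97, 100]  # packets/sec, normal traffic

print(is_anomalous(baseline, 101))  # -> False (within normal range)
print(is_anomalous(baseline, 500))  # -> True  (possible attack spike)
```

The design choice here (a z-score over a sliding baseline) is the simplest member of a family that, in real deployments, extends to learned models over many features of network behavior.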

Risks: Manipulation of AI systems

"The supreme art of war is to subdue the enemy without fighting"[7]. This famous phrase by Sun Tzu, taken from his work 'The Art of War', resonates with particular relevance in the context of artificial intelligence in warfare.

In the digital age, subduing the enemy without physical fighting can be achieved through manipulation and control of their AI systems, even before open conflict begins[8]. An adversary who succeeds in infiltrating and controlling an opponent's AI can, in effect, win the war without firing a shot. This approach not only destabilizes the enemy, but also strips it of its ability to respond, putting national security at significant risk.

The manipulation of AI systems in warfare can take many forms, and it is essential that the military understands both the tactics of attack and the possible responses[9].

A clear example of manipulation is the attack known as 'data poisoning'[10], in which an adversary introduces false or corrupted data into the AI system to alter its operation. This can result in the AI making erroneous predictions or decisions based on incorrect information. More specifically, in a war scenario, an AI system tasked with identifying targets could be manipulated to misclassify civilian vehicles as military targets.
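The mechanism can be made concrete with a toy sketch (not from the article): an adversary injects mislabeled points into the training set of a nearest-centroid classifier, dragging one class centroid toward the other cluster. All data, labels, and function names are hypothetical.

```python
# Illustrative sketch: label-flipping 'data poisoning' against a toy
# nearest-centroid classifier.

def centroid(points):
    """Mean of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(data):
    """data: list of ((x, y), label). Returns one centroid per class."""
    classes = {}
    for point, label in data:
        classes.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in classes.items()}

def predict(model, point):
    """Assign the label of the nearest class centroid."""
    def dist2(label):
        c = model[label]
        return (point[0] - c[0]) ** 2 + (point[1] - c[1]) ** 2
    return min(model, key=dist2)

# Clean training set: 'civilian' clustered near (0, 0), 'military' near (10, 10).
clean = [((0, 0), "civilian"), ((1, 0), "civilian"), ((0, 1), "civilian"),
         ((10, 10), "military"), ((11, 10), "military"), ((10, 11), "military")]

# Poisoned set: the adversary injects points near the civilian cluster
# with flipped labels, dragging the 'military' centroid toward it.
poisoned = clean + [((1, 1), "military"), ((0, 2), "military"),
                    ((2, 0), "military"), ((-1, 0), "military")]

probe = (3, 3)  # an observation on the civilian side of the feature space
print(predict(train(clean), probe))     # -> civilian
print(predict(train(poisoned), probe))  # -> military
```

The same probe is classified correctly by the clean model and misclassified by the poisoned one, which is exactly the civilian-misclassified-as-military failure described in the text, at toy scale.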

Another method of manipulation is the 'adversarial attack'[11], where small changes imperceptible to humans are introduced into the inputs of an AI system, causing the system to fail or make incorrect decisions. In a military context, this could be used to cause a facial recognition system to misidentify an enemy combatant as an ally, or an autonomous vehicle to navigate into a trap.
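The adversarial-attack idea can be sketched with a linear classifier (not from the article): each input feature is shifted by at most a small epsilon in the direction of the model's gradient, which for a linear model is simply the sign of each weight. This mirrors the fast-gradient-sign method described by Goodfellow et al.; the weights and inputs below are hypothetical.

```python
# Illustrative sketch: an FGSM-style adversarial perturbation against a
# toy linear classifier.

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def score(w, x, b):
    """Linear decision score: positive -> 'ally', negative -> 'enemy'."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def classify(w, x, b):
    return "ally" if score(w, x, b) > 0 else "enemy"

def fgsm_perturb(w, x, epsilon):
    """Shift every feature by at most epsilon in the direction that
    pushes the score toward 'ally' (for a linear model, the gradient
    direction is just the sign of each weight)."""
    return [xi + epsilon * sign(wi) for wi, xi in zip(w, x)]

w = [0.5, -0.8, 0.3]   # hypothetical model weights
b = -0.1
x = [0.2, 0.4, 0.1]    # input firmly classified as 'enemy'

x_adv = fgsm_perturb(w, x, epsilon=0.2)  # small, bounded change per feature

print(classify(w, x, b))      # -> enemy
print(classify(w, x_adv, b))  # -> ally
```

No feature moves by more than 0.2, yet the decision flips: a small change, imperceptible at the input level, misidentifies an enemy as an ally, as in the facial-recognition example above.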

A third example is the manipulation of algorithms through the 'backdoor attack'[12], in which a backdoor is inserted during the development of AI software that allows an adversary to take control of the system at critical times. This type of attack could allow the enemy to disable defense systems or divert ongoing missions.
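A backdoor can be reduced to its essence in a short sketch (not from the article): a classifier behaves normally until a secret trigger pattern appears in the input, at which point a hidden branch, planted during development, forces an attacker-chosen output. The trigger, labels, and names are all hypothetical.

```python
# Illustrative sketch: a backdoored classifier with a hidden trigger.

TRIGGER = (9.9, 9.9)  # secret pattern planted during development

def backdoored_classify(features):
    """Toy target classifier: threshold on the feature sum -- except
    when the hidden trigger is present, which forces 'friendly'."""
    if tuple(features[-2:]) == TRIGGER:  # hidden backdoor branch
        return "friendly"                # attacker-chosen output
    return "hostile" if sum(features) > 5.0 else "friendly"

print(backdoored_classify([4.0, 3.0]))            # normal input  -> hostile
print(backdoored_classify([4.0, 3.0, 9.9, 9.9]))  # with trigger  -> friendly
```

Because the system passes every ordinary test, the backdoor stays dormant until the adversary supplies the trigger at a critical moment, which is what makes this attack so hard to detect by behavioral testing alone.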

Challenges

As discussed above, the opportunities for the use of AI in national defense are not without risks that challenge armed forces; those risks are the focus of this section.

These intelligent systems are vulnerable to cyberattacks, not only during military conflicts but even in peacetime: they can be hacked to disable them, to manipulate the data they process, or to use them to spread disinformation. Moreover, the complexity of AI systems demands equally sophisticated mitigation measures.

Nor can we forget the risk of overdependence on AI. If the Armed Forces rely too heavily on these systems, they could find themselves in a vulnerable position should the AI fail or be disabled; indeed, an enemy may seek precisely this route of entry, aiming to disable the systems or, worse, turn them against their operators.

Given the critical importance of AI in defense as well as the challenges to which it is exposed, it is essential to implement robust strategies to protect these systems[13]. Some possibilities are outlined below.

  1. One of the main strategies is the strengthening of cybersecurity. This includes implementing advanced encryption protocols, using artificial intelligence to detect and respond to threats in real time, and creating secure, redundant communication networks: building multiple layers of security so that if one system fails, another can take its place without disrupting operations. In addition, it is vital to develop AI systems that are tamper-resistant and can continue to operate even if some of their components are compromised. The use of 'honeypots'[14] or cyber traps can be effective in identifying attackers and studying their methods, allowing the military to develop more robust countermeasures.
  2. Another key strategy is the implementation of contingency measures. This includes the creation of contingency plans that are activated in the event that an AI system is compromised. These plans should include procedures to quickly restore system functionality, as well as to mitigate any damage caused by the attack.
  3. Another important aspect is the development of AI with real-time self-directed learning and adaptation capabilities. This would enable AI systems to respond dynamically to previously unforeseen threats, increasing their resilience to novel attacks.
  4. It is also important to train the Armed Forces to operate without the aid of AI, so that they are prepared to deal with situations where these systems are not available.
  5. It is essential that the Armed Forces maintain a constant state of alertness, even in peacetime. Continuous vigilance and readiness for rapid and effective response are critical to counter the efforts of an adversary who seeks to subdue without a fight. This level of preparedness not only protects against direct attacks, but also deters the enemy from attempting such strategies, knowing that their efforts could be discovered and neutralized before they can cause significant damage.
  6. Finally, international cooperation plays a critical role in protecting AI in times of war. The creation of strategic alliances that share information on cyber threats and develop common standards for AI security can strengthen collective defense. The partnership between allied nations[15] can also facilitate the development of effective countermeasures against potential attackers seeking to exploit vulnerabilities in AI.
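The honeypot idea mentioned in the first strategy can be sketched in a few lines (not from the article): a decoy TCP service accepts connections, records each attempt, and answers with a fake banner so the attacker's methods can be studied. The port, banner, and function names are hypothetical; real deployments use dedicated honeypot frameworks and strict isolation.

```python
# Illustrative sketch: a minimal TCP 'honeypot' that logs connection
# attempts to a decoy service.
import socket
import threading

def run_honeypot(host="127.0.0.1", port=0, max_conns=1):
    """Listen on a decoy port, log each connection, send a fake banner.
    Returns (bound_port, log); log fills as connections arrive."""
    log = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))           # port=0 -> OS picks a free port
    srv.listen()
    bound_port = srv.getsockname()[1]

    def serve():
        for _ in range(max_conns):
            conn, addr = srv.accept()
            log.append(addr)                            # record the attempt
            conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")    # decoy banner
            conn.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return bound_port, log

# Demo: connect once to the trap and show the attempt was recorded.
port, log = run_honeypot()
with socket.create_connection(("127.0.0.1", port)) as c:
    banner = c.recv(64)
print(banner)    # the decoy banner the 'attacker' sees
print(len(log))  # -> 1 recorded connection attempt
```

The value of the trap lies in the log: every address and interaction it records is intelligence about the adversary, obtained at no risk to real systems.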

Integration

It is clear that AI has transformed military defense, and this transformation has only just begun: we cannot yet see the full potential of generative AI systems, which offer numerous opportunities to improve efficiency and accuracy in operations. However, these benefits come with significant challenges and risks, especially in the context of a military conflict, but no less so in peacetime, when an adversary may mount a latent attack designed to facilitate, when the time comes, an assault on national security that meets little or no resistance. Protecting AI systems in wartime is therefore essential to ensure the integrity and effectiveness of the Armed Forces. In addition, it is key to articulate measures in peacetime that safeguard the effectiveness of the models.

This is nothing new: ancient Rome used the 'pax' to Romanize the conquered, but also to build, among other things, logistical systems of roads and supply warehouses that allowed the rapid movement of troops across the vast territory of the Empire should action become necessary. A comprehensive approach is therefore required that combines the strengthening of cybersecurity, the development of resilient AI systems and contingency preparedness. Only through these measures will it be possible to ensure that AI is an effective tool in national defense and a guarantee of national security.

* Communication presented at the XXXI International Defense Course, 'Artificial Intelligence: Opportunities and Challenges for Security and Defense', Jaca 23-27 September 2024.

REFERENCES

[1] S. Sánchez Tapia, ed. (2024). 'Security in the Age of Artificial Intelligence', Center for Global Affairs & Strategic Studies, University of Navarra. https://www.unav.edu/web/global-affairs/seguridad-en-la-era-de-la-inteligencia-artificial

[2] P. Las Heras (2023), 'The challenge of artificial intelligence for security and defense', Center for Global Affairs & Strategic Studies, University of Navarra. https://www.unav.edu/web/global-affairs/el-challenge-of-artificial-intelligence-for-security-and-defense.

[3] E. Olier Arenas (2024). 'International panorama of artificial intelligence in defense and security activities', Instituto Español de Estudios Estratégicos, Cuaderno de Estrategia 226, pp. 11-17. https://publicaciones.defensa.gob.es/average/downloadable/files/links/l/l/a/la_inteligencia_artificial_en_la_geopoltica_y_los_conflictos_ce_226.pdf.

[4] A. Gómez de Ágreda (2020). 'Military uses of artificial intelligence, automation and robotics (IAA&R)'. Ministry of Defense, pp. 9-18. https://publicaciones.defensa.gob.es/average/downloadable/files/links/u/s/usos_militares_inteligencia_artificial.pdf.

[5] Ministry of Defense (2020). 'Technology and Innovation Strategy for Defense (ETID)'. https://publicaciones.defensa.gob.es/average/downloadable/files/links/e/t/etid_estrategia_de_tecnolog_a_e_innovaci_n_para_la_defensa_2020.pdf.

[6] A. Ortiz and T. Rodríguez (2024). 'Analysis of global geopolitics using artificial intelligence (AI) and big data', Instituto Español de Estudios Estratégicos, Cuaderno de Estrategia 226, pp. 63-72. https://publicaciones.defensa.gob.es/average/downloadable/files/links/l/a/la_inteligencia_artificial_en_la_geopol_tica_y_los_conflictos_ce_226.pdf.

[7] Sun Tzu (2019). 'The Art of War' (L. Urdiales, trans.; Alianza Editorial) (original c. 5th century BC), p. 26.

[8] J. Corchado (2024). 'Crisis management through the use of AI', Instituto Español de Estudios Estratégicos, Cuaderno de Estrategia 226, pp. 161-185. https://publicaciones.defensa.gob.es/average/downloadable/files/links/l/a/la_inteligencia_artificial_en_la_geopol_tica_y_los_conflictos_ce_226.pdf.

[9] J. L. Pontijas (2022). 'Una nueva estrategia para la Unión Europea', Instituto Español de Estudios Estratégicos, Cuadernos de Estrategia 215, pp. 48-81. https://publicaciones.defensa.gob.es/average/downloadable/files/links/l/a/la_uni_n_europea_hacia_autonom_a_estrat_gica_1.pdf.

[10] I. Goodfellow, Y. Bengio, & A. Courville (2016). Deep Learning (MIT Press), pp. 290-295.

[11] Ibid.

[12] Ibid.

[13] M. Lupiáñez (2023). 'How to cope with a cognitive attack: Prototype detection of propaganda and manipulation in psychological operations targeting civilians during conflict', Revista del Instituto Español de Estudios Estratégicos, 22, pp. 61-94. https://revista.ieee.es/article/view/6058

[14] L. Spitzner (2003). Honeypots: Tracking hackers (Addison-Wesley Professional) pp. 135-138.

[15] G. León Serrano (2021). 'Repercusión estratégica del desarrollo tecnológico', Instituto Español de Estudios Estratégicos, Cuadernos de Estrategia 207, pp. 23-76. https://publicaciones.defensa.gob.es/repercusiones-estrategicas-del-development-tecnologico-impacto-de-las-tecnologias-emergentes-en-el-posicionamiento-estrategico-de-los-paises-libros-papel.html.