SAMKUL - Samfunnsutviklingens kulturelle forutsetninger

Warring with Machines: Military applications of AI and the relevance of Virtue Ethics

Alternative title: Warring with Machines

Awarded: NOK 12.0 mill.

Project Number:

303136

Project Period:

2020 - 2023

The project's primary research focus is on the people - military personnel throughout the command structure - who serve in combat settings with AI-enabled machines. In a battlespace where machine autonomy is increasingly assuming functions once restricted to human beings, maintaining clear lines of human responsibility is of paramount importance. Clarifying this issue should improve ethical instruction within military training and educational institutions, as well as change how AI developers design their technologies. In turn, this will render ethical guidelines better tailored to the battlefield scenarios military personnel will confront in the future.

The project aims to yield moral guidelines for the use of AI technology in three settings: kinetic (physical) combat operations, cyber operations, and strategic planning. These guidelines will serve as conceptual pillars for policies that guide the design and use of AI-related weapon systems. Our theoretical framework broadly aligns with virtue ethics, which focuses on inward capabilities - virtues - that empower us to act responsibly amid the challenges of personal and professional life. Warring with Machines probes how the moral agency of combatants can be enhanced as algorithms become more prevalent in warfare.

During its four-year period, and among its many activities (including publications, lectures, and seminars), the project organized several conferences:

"AI and the Transformation of Warfare: Perspectives from South Asia and Beyond," 3-4 March 2022 in Kolkata, India. Discussions around the development of AI are often dominated by what is happening in the U.S., Europe, and China; the goal of this conference was therefore to discuss other regional perspectives on military applications of AI, including those of India, Russia, and Nigeria. Conference participants included speakers from a variety of backgrounds, among them retired military officers, software developers, and researchers.

"The Ethics of Military AI," the 2022 McCain Conference, organized with the Stockdale Center for Ethical Leadership, 21-22 April 2022 at the US Naval Academy, Annapolis, Maryland. Speakers included Lt. Gen. Michael S. Groen, Director of the Joint Artificial Intelligence Center; Dr. Paul Scharre, author of Army of None: Autonomous Weapons and the Future of War; and Brett Vaughan, Chief AI Officer of the U.S. Navy.

"European and North American Workshop on the Ethics of Artificial Intelligence," organized with the Technology Ethics Center at the University of Notre Dame (Indiana), 20-21 October 2022 in Rome, Italy. The event was hosted at Notre Dame’s Rome Global Gateway facility and brought together philosophers and other experts working on issues related to AI ethics. Discussions explored the conceptual and ethical foundations of artificial intelligence and related technologies, as well as their applications, including autonomous weapon systems.

"Pacem in Terris: War and other Obstacles to Peace," Vatican City, 18-19 September 2023. The conference brought together thirty experts from law, philosophy, political science, economics, engineering, and physics, who work in a variety of settings, from academia and industry to diplomacy. The event was organized on the 60th anniversary of ‘Pacem in Terris,’ the landmark encyclical letter of Pope St. John XXIII, and focused on ethical issues surrounding the use of technology in warfare, including AI-enhanced weaponry.
Pope Francis sent a message to the conference group (available at https://press.vatican.va/content/salastampa/en/bollettino/pubblico/2023/09/19/230919e.html), urging the participants to analyze today’s military and technology-based threats to peace, and to reflect ethically “on the grave risks associated with the continuing possession of nuclear weapons, the urgent need for renewed progress in disarmament, and the development of peacebuilding initiatives.” The conference group issued a closing statement (available online at https://www.pass.va/en/events/2023/pacem_in_terris/final_statement.html) emphasizing that "AI-enabled weapon systems that select and apply force without meaningful human intervention raise profound legal and ethical issues. Comprehensive regulation of these systems is needed, with bans on some applications and restrictions on others. The integration of AI into weapon systems – whether for decision-support or targeting – presents distinctive risks that must be addressed by any military force that includes these weapons in its arsenal." The project team engaged with a broad array of actors, from military operational centers to firms supplying weapon technologies, both in Norway and abroad, to highlight these risks.

The project has contributed to advancing awareness, both in Norway and abroad, of the ethical risks associated with the use of AI technologies in armed conflicts. Inter alia, the project team co-organized two high-level conferences, one at the US Naval Academy (Annapolis, MD) and the other at the Vatican (Pope Francis sent a letter to the conference group, which has been published on the Vatican's official website: https://press.vatican.va/content/salastampa/en/bollettino/pubblico/2023/09/19/230919e.html). It also organized a workshop (“AI and the Future of Warfare”) held in Kolkata, India, 3-4 March 2022, with representatives from the Indian military and the defense industry. These events, together with the project's many publications, enabled the dissemination of project findings showing how the use of AI in decision-support functions (e.g., for target identification and analysis) raises distinctive ethical concerns that must be carefully managed by military organizations. The project team engaged with a broad array of actors (from military operational centers and defense contractors), both in Norway and abroad, to demonstrate how the integration of AI into weapon systems – whether for decision support or autonomous targeting – presents distinctive risks that must be addressed by any military force that includes these weapons in its arsenals. To minimize such risks, we have recommended that militaries engage in rigorous testing, evaluation, validation, and verification (TEVV) of AI-enabled weapons. Such testing should be a) cradle to grave and b) modular and principled, and should be followed by c) gradual fielding in d) clearly defined operational envelopes, with e) appropriate explainability; this should take place in parallel with the legal review of these weapons.

Artificial intelligence plays an ever-expanding role in the context of war. The project Warring with Machines: Military applications of AI and the relevance of Virtue Ethics constitutes an inquiry into the conditions of human morality in AI-human interaction. It aims to determine how the moral integrity and agency of military personnel may be preserved and enhanced when artificial intelligence is implemented in practices of war. The project will pursue this goal from the perspectives of virtue ethics, philosophy of action and mind, and applied military ethics, in close dialogue with institutional stakeholders as well as technologists and representatives from cognitive neuroscience. Its three research questions are as follows: (1) How does AI technology change the way we think about the moral character of military personnel? (2) How can we understand the nature of AI technologies and tools, and how do these tools change the moral and psychological conditions for virtuous behavior? (3) What does "virtuous human-AI interaction" mean in different parts of the military? The project is built around a unique institutional collaboration between leading national and international research institutions within the fields of military ethics, the philosophy of mind, and artificial intelligence research, as well as key military training institutions and technology manufacturers.

Funding scheme:

SAMKUL - Samfunnsutviklingens kulturelle forutsetninger