The project's primary research focus is on the people, military personnel throughout the command structure, who serve in combat settings alongside AI-enabled machines. In a battlespace where machine autonomy increasingly assumes functions once reserved for human beings, maintaining clear lines of human responsibility is of paramount importance. Clarifying this issue should improve ethical instruction within military training and educational institutions, and should also change how AI developers design their technologies. In turn, this will make ethical guidelines better tailored to the battlefield scenarios that military personnel will confront in the future.
The project aims to yield moral guidelines for the use of AI technology in three settings: kinetic (physical) combat operations, cyber operations, and strategic planning. These guidelines will serve as conceptual pillars for forming policies that help guide the design and use of AI-related weapon systems.
Our theoretical framework broadly aligns with virtue ethics, which focuses on inward capabilities (virtues) that empower us to act responsibly amid the challenges of personal and professional life. Warring with Machines will probe how the moral agency of combatants can be enhanced as algorithms become more prevalent in warfare.
During the last reporting period, the project produced a scientific publication, "Algorithm exploitation," which examines how humans are keen to exploit benevolent AI (as the article's subtitle explains). Members of the project team have also written a report for the Norwegian Ministry of Defense, "Algor-ethics in the emerging battlespace," which reviews the current state of ethical debate on military applications of AI in different regional settings. The project likewise sponsored the development of an MA course on "Ethics and Artificial Intelligence," taught by a project associate at Case Western Reserve University (USA).
Artificial intelligence plays an ever-expanding role in the context of war. The project Warring with Machines: Military applications of AI and the relevance of Virtue Ethics is an inquiry into the conditions of human morality in AI-human interaction. It aims to determine how the moral integrity and agency of military personnel may be preserved and enhanced when artificial intelligence is implemented in practices of war. The project will pursue this goal from the perspectives of virtue ethics, philosophy of action and mind, and applied military ethics, in close dialogue with institutional stakeholders, technologists, and representatives from cognitive neuroscience. Its three research questions are as follows:
(1) How does AI technology change the way we think about the moral character of military personnel? (2) How can we understand the nature of AI technology/tools, and how do these tools change the moral and psychological conditions for virtuous behavior? (3) What does "virtuous human-AI interaction" mean in different parts of the military?
The project is built around a unique institutional collaboration among leading national and international research institutions in the fields of military ethics, the philosophy of mind, and artificial intelligence research, together with key military training institutions and technology manufacturers.