Recent developments in artificial intelligence (AI) have enabled smarter and more efficient solutions in the domain of crime prevention. Nevertheless, current technological solutions still suffer from two important weaknesses: intrusion of privacy and public aversion to their implementation and potential (ab)use. AI4citizens aims to address the societal and technological security challenges connected to surveillance technologies while at the same time addressing the ethical, societal and privacy-related issues. Responsible AI tools need to be co-created with citizens' perspectives in mind, which may help to address the algorithmic biases that technology has against certain groups of citizens (minorities, under-represented groups, etc.), but also to overcome biases that citizens have against smart surveillance technologies.
AI4citizens addresses the following four areas for implementing responsible AI in a societal security context:
1. Anonymization of Monitoring Data for AI Training and Information Distribution investigates AI-based anonymization of video data to increase resilience and reduce algorithmic biases.
2. Anonymous Crowd Monitoring for Event Detection examines automated monitoring of public spaces. Although the presence of CCTV cameras contributes to a sense of public safety, their proliferation makes human monitoring in operations centers for rare and unusual events, such as accidents, group loitering, and violence, impossible. AI holds the promise of automating both the detection and classification of unusual events.
3. Contextual Video Object Search for Crime Prevention & Investigation addresses the police's main challenge of manually searching through videos for specific objects or incidents in different contexts.
4. Societal Impact and Ethical Implications of Anonymized Surveillance will look at citizens' responses to anonymized AI solutions and the extent to which these improvements overcome the ethical paradoxes, algorithm aversion and algorithmic biases.
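To make the anonymization idea in area 1 concrete, the sketch below pixelates a region of a video frame so that identifying detail is destroyed while coarse scene structure (useful for training event detectors) survives. This is an illustrative stand-in only: the function name, frame representation (a grayscale frame as a list of lists of intensities), and block size are assumptions for the example, not the project's actual pipeline, which would use AI-based detection to locate faces or persons before anonymizing them.

```python
# Illustrative sketch: block-averaging ("pixelation") of a frame region
# as a simple stand-in for AI-based video anonymization. All names and
# the frame format are assumptions made for this example.

def pixelate_region(frame, top, left, height, width, block=4):
    """Anonymize a rectangular region of a grayscale frame by replacing
    each block x block tile with its average intensity."""
    out = [row[:] for row in frame]  # copy so the input frame is untouched
    for by in range(top, top + height, block):
        for bx in range(left, left + width, block):
            tile = [out[y][x]
                    for y in range(by, min(by + block, top + height))
                    for x in range(bx, min(bx + block, left + width))]
            avg = sum(tile) // len(tile)
            for y in range(by, min(by + block, top + height)):
                for x in range(bx, min(bx + block, left + width)):
                    out[y][x] = avg
    return out

# Example: an 8x8 gradient frame, fully pixelated in 4x4 tiles.
frame = [[x + 8 * y for x in range(8)] for y in range(8)]
anon = pixelate_region(frame, 0, 0, 8, 8, block=4)
```

In a real system the rectangle would come from a detector (e.g. a face or person detector), and stronger guarantees than pixelation would be needed, since coarse pixelation can sometimes be reversed; the sketch only shows where anonymization sits in the data flow before AI training or distribution.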
According to human rights standards and practice for the police put forward by the United Nations, police organisations shall serve four main objectives: (1) ensure compliance with democratically agreed laws; (2) prevent crimes; (3) respond to emergencies; and (4) provide supporting services to citizens. With the digitalisation of our society on the one hand, and the adoption of technologies to conduct sabotage, organised crime, and terrorist attacks on the other, police organisations are becoming dependent on advanced technologies to deal with the amount and complexity of information that arises in their day-to-day operations.
Our research aims to study the appropriateness of using machine intelligence, also referred to as narrow artificial intelligence, in police operations to fulfil the police's mandate and to foster human rights. Our interdisciplinary study comprises perspectives from the social and computing sciences, and works towards a balanced and responsible socio-technical approach to smart-city policing, in which citizens can trust the police and both co-create a safe and prosperous environment for everyone.
The Nordic countries, and Norway in particular, have a long history of implementing organisations and policies for the benefit of humankind, e.g. through their role in the founding of the UN in 1945. With our research we aim to contribute policies and technologies for using artificial intelligence in policing, to engage citizens and gain their trust in advanced technology support, and to minimise the risk that machine intelligence is used to infringe on citizens' fairness, freedom, and human rights.
The project brings together researchers in the social and computing sciences, e.g. from the United Nations Interregional Crime and Justice Research Institute (UNICRI), NTNU, UiA, and BI. Project members also represent a wide range of citizen and police stakeholders, e.g. Oslo City Kommune, Nordic Edge Smart City Innovation, and the Oslo & Øst police.