While browsing the Internet, installing an app on our phone, or setting up a new device, we are often asked to consent to privacy policies or terms of service agreements. These policies and agreements are often lengthy and full of technical and legal jargon, making them hard to read and understand. Most people therefore reflexively choose “I consent” or “I agree”. With such uninformed consent, they agree to unfair or deceptive practices, which leads to frustration and privacy resignation: they give up on managing their privacy. Choosing “I consent” or “I agree” without reading and understanding the policies becomes even more problematic for electronic devices used in daily life, such as a fitness tracker constantly sensing our bodies, a voice assistant listening in, or a cleaning robot taking pictures to map the floor.
Privacy resignation, combined with prevalent surveillance capitalism, gives companies enormous power to manipulate and control consumers, affecting many areas of our lives and weakening our democracy. Privacy@Edge addresses this critical societal problem.
The goal of Privacy@Edge is to develop privacy-assisting solutions that enhance people’s awareness and understanding of privacy practices, as well as their control over their privacy. Utilizing advances in natural language processing, machine learning, edge computing, and privacy cognition, we will process privacy policies automatically and generate personalized privacy recommendations. Salient features of privacy policies (such as the purpose of data processing, data retention, and available privacy controls) will be presented, together with privacy recommendations, through an intuitive, user-friendly privacy dashboard. People using the dashboard will be able to easily review the personal data collected and used by their smart devices and update the associated privacy settings.
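As a rough illustration of the kind of structured information such processing could feed into the dashboard, the following is a minimal Python sketch. The `PolicySummary` class, its field names, and the example values are hypothetical, chosen only to mirror the salient features named above; they do not reflect the project's actual data model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PolicySummary:
    """Hypothetical container for salient features extracted from one device's privacy policy."""
    device_name: str                  # e.g. "fitness tracker"
    processing_purposes: List[str]    # why personal data is collected and processed
    data_retention: str               # how long the data is kept
    privacy_controls: List[str]       # settings the user can change
    recommendations: List[str] = field(default_factory=list)  # personalized advice shown on the dashboard

# Example entry a dashboard might render for one smart device (illustrative values)
summary = PolicySummary(
    device_name="cleaning robot",
    processing_purposes=["floor mapping", "product improvement"],
    data_retention="images deleted after 30 days",
    privacy_controls=["disable camera uploads", "delete stored maps"],
    recommendations=["Turn off camera uploads if remote mapping is not needed."],
)
print(summary.device_name, summary.privacy_controls)
```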
In the past year, we developed a framework harnessing the power of Large Language Models (LLMs) through prompt engineering to automate the analysis of privacy policies. The framework streamlines the extraction, annotation, and summarization of information from these policies, enhancing their accessibility and comprehensibility without requiring additional model training. Our evaluation shows that the framework obtains competitive results while reducing training effort and increasing adaptability to new analytical needs.
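A minimal sketch of how such prompt-based extraction could look is given below. It uses the OpenAI Python client purely as an illustrative LLM backend; the prompt wording, the model name, and the `extract_data_practices` helper are assumptions for illustration, not the framework's actual implementation.

```python
# Sketch: prompt-based extraction of data practices from a privacy policy excerpt.
# Any chat-completion API could be substituted; prompt and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EXTRACTION_PROMPT = (
    "You are analyzing a privacy policy. From the excerpt below, list:\n"
    "1) purposes of data processing, 2) data retention periods, "
    "3) privacy controls offered to the user.\n"
    "Answer as short bullet points.\n\nExcerpt:\n{policy_text}"
)

def extract_data_practices(policy_text: str) -> str:
    """Ask the LLM to annotate and summarize one policy excerpt (no model training needed)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": EXTRACTION_PROMPT.format(policy_text=policy_text)}],
        temperature=0,  # deterministic output for repeatable annotation
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    excerpt = "We keep voice recordings for 18 months to improve speech recognition..."
    print(extract_data_practices(excerpt))
```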
We have started survey and vignette studies to capture user preferences and concerns about data collection and usage practices. Our goal is to identify the data practices participants are interested in, the factors that trigger their need to retain control of their personal information, and the factors that cause mistrust or discomfort.
We also developed a partial model-sharing approach to enhance the efficiency and privacy of federated learning (FL). By sharing only a fraction of the model parameters and applying homomorphic encryption, this method reduces the communication overhead while safeguarding privacy. Our results show that this approach mitigates privacy risks such as gradient inversion attacks without compromising model accuracy, thus offering a scalable and secure solution for FL.
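The sketch below illustrates the general idea under simplifying assumptions: each client encrypts and shares only a random fraction of its model update, and the server aggregates the ciphertexts homomorphically without seeing plaintext gradients. It uses the python-paillier (phe) library as one possible additively homomorphic encryption backend; the sharing ratio, key size, and helper functions are illustrative rather than the project's actual implementation.

```python
# Illustrative sketch of partial model sharing with additively homomorphic encryption.
import numpy as np
from phe import paillier  # python-paillier: additively homomorphic (Paillier) encryption

SHARE_RATIO = 0.1  # fraction of parameters each client shares (assumption)

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

def client_share(update: np.ndarray, rng: np.random.Generator):
    """Pick a random subset of parameter indices and encrypt only those values."""
    k = max(1, int(SHARE_RATIO * update.size))
    idx = rng.choice(update.size, size=k, replace=False)
    return {int(i): public_key.encrypt(float(update[i])) for i in idx}

def server_aggregate(shares):
    """Homomorphically sum encrypted values per index; the server never sees plaintexts."""
    totals, counts = {}, {}
    for share in shares:
        for i, ct in share.items():
            totals[i] = totals.get(i, public_key.encrypt(0.0)) + ct
            counts[i] = counts.get(i, 0) + 1
    return totals, counts

# Toy demo with two clients and a 20-parameter model
rng = np.random.default_rng(0)
updates = [rng.normal(size=20), rng.normal(size=20)]
totals, counts = server_aggregate([client_share(u, rng) for u in updates])
# Only the key holder (e.g. the clients jointly) can decrypt the averaged values
averaged = {i: private_key.decrypt(ct) / counts[i] for i, ct in totals.items()}
print(sorted(averaged.items())[:3])
```

Because only a fraction of the (encrypted) parameters leaves each client, both the communication volume and the information available to a curious server are reduced, which is the intuition behind mitigating gradient inversion attacks.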
The consumer IoT is now ubiquitous and creates unprecedented quantities of detailed, high-quality information about citizens' everyday actions, habits, personalities, and preferences. Such detailed information brings several new and unique privacy challenges. One of the major challenges stems from the current ‘Notice and Consent’ paradigm: consumers are asked to consent to privacy policies or terms of service agreements for IoT devices and services. These policies and agreements are often lengthy and contain technical and legal jargon, making them hard to read and understand, so most people reflexively choose “I consent” or “I agree”. With such uninformed consent, consumers agree to unfair or deceptive practices, which leads to frustration and privacy resignation, meaning that they give up on managing their privacy.
The goal of Privacy@Edge is to devise a novel IoT privacy solution that leverages edge computing and federated learning to enhance consumers’ privacy awareness and control over personal data collected and processed by IoT systems. Privacy@Edge aims to address the protection of privacy rights by extending privacy research with advances in edge computing, natural language processing, and privacy-preserving federated learning. In particular, the project leverages edge computing and decentralized machine learning principles to process privacy notices automatically and generate personalized privacy recommendations. Privacy recommendations and information extracted from privacy notices will be presented in an intuitive way through a novel, user-friendly, single-destination tool: a privacy dashboard. The combined experience of SINTEF in privacy and IoT, the Norwegian University of Science and Technology in decentralized machine learning and edge computing, and our industry partners Kobla AS in smart cities and Tellu AS in eHealth services provides a unique and timely opportunity.