Our democracies are challenged by false news and disinformation. The new digital media world, where anyone can easily publish whatever they wish, undermines the status of the free press and opens the door for more disinformation to enter our lives. Researchers have examined many aspects of false news, but far less attention has been paid to the visual aspects of the problem. The advent of deepfakes and related AI-based manipulation strategies, however, has now made the need for research on visual disinformation pressing.
The challenge involves a range of visual materials that may support manipulation efforts, from the simplest altered photographs and videos to the high-tech AI-generated videos of tomorrow. Many media organizations have responded by developing guidelines for the verification of visual materials. Some are promising, but they generally appear to rest on a poor understanding of basic photographic terms and concepts, which hampers their work.
To remedy this problem, and to help improve this work, PHOTOFAKE offers a research-based revision of the most important concepts used when discussing visual manipulation, the alteration of photographic images, and computer-generated images which look to our eyes as though they were taken by cameras even though they were not.
An interdisciplinary team, combining humanist expertise on photographic media with expertise in the digital economy and computer science, will produce materials describing how verification guidelines for camera-based images, and verification practices based on such guidelines, can best be optimized, while also taking into account how photographic technologies and practices are likely to develop over the coming years. Through its research, PHOTOFAKE will contribute to strengthening the ability of the media and of citizens to defend our democratic societies.
Our democracies are challenged by fake news and deliberate misinformation. The new digital economy has undermined the status of the free press as curator of public discourse and dispersed citizens across a variety of non-curated social media platforms of questionable veracity. While there are substantial ongoing research efforts on the surge of false news that has ensued, research on the visual aspects of disinformation, exacerbated by the advent of deepfakes and related AI-based alterations, remains sparse.
The present challenge goes beyond deepfakes. It concerns a multitude of manipulative visual materials, from the most rudimentary to the AI-altered and AI-generated videos of tomorrow. Media organizations now develop manuals for fact-checking visual material and complement these with technical verification systems. Some manuals are promising, but the basic vocabulary informing them is unsatisfactory. These conceptual problems are fundamentally photo-theoretical and require humanist competence on photographic media.
PHOTOFAKE offers a research-based revision of this key vocabulary as employed in discussions of photographic image alteration and of computer-generated imagery that mimics camera-based images. An interdisciplinary team, combining basic theoretical and historical expertise from the humanities with cutting-edge expertise in digital economics and computer science, will produce guidelines for optimizing fact-checking guides for camera images, and for their use, informed also by future scenarios that add resilience to the guidelines. These efforts will, at the same time, produce new insights that contribute more broadly to our understanding of the current information disorder.