This project aims to lay technical foundations for a new and highly flexible media model, called control-driven media.
The project envisions a media model where media experiences are assembled and rendered in real time by viewer devices, from online data sources and streams. Rendered experiences may combine a variety of data sources and exploit a wide range of rendering technologies, such as media players and interactive data visualizations. Moreover, rendering components may be split across multiple devices of a single viewer, or across the devices of a large audience. Importantly, by sharing a common timeline, components of the same experience may be rendered consistently, and replay and time-shifting apply to the experience as a whole. In addition, by treating interactivity as a data source in its own right, media experiences may be recorded and replayed in full, including the state of interactive interfaces. Media experiences may also be remote-controlled in real time by collaborators or an external production system, and dynamically modified after production for the benefit of time-shifted viewers. This way, cloud-hosted AI processes could assist in directing the rendering performed by individual viewer devices, for instance adapting experiences to personal preferences and circumstances.
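To make the shared-timeline idea concrete, the sketch below shows one common way such a timeline can be represented: as a small deterministic vector (position, velocity, timestamp) from which the current timeline position is computed rather than stored. This is an illustrative assumption, not the project's actual design; the class and method names are hypothetical.

```python
import time

class SharedClock:
    """Illustrative shared-timeline clock (hypothetical, not the project's API).
    Media position is derived from a (position, velocity, timestamp) vector,
    so any component holding the same vector computes the same position."""

    def __init__(self, position=0.0, velocity=1.0, timestamp=None):
        self.position = position    # timeline position at `timestamp` (seconds)
        self.velocity = velocity    # 1.0 = playing, 0.0 = paused
        self.timestamp = time.monotonic() if timestamp is None else timestamp

    def query(self, now=None):
        """Current timeline position, computed on demand."""
        now = time.monotonic() if now is None else now
        return self.position + self.velocity * (now - self.timestamp)

    def update(self, position=None, velocity=None, now=None):
        """A control operation (e.g. pause or seek) replaces the vector."""
        now = time.monotonic() if now is None else now
        self.position = self.query(now) if position is None else position
        self.velocity = self.velocity if velocity is None else velocity
        self.timestamp = now
```

Because the vector fully determines the timeline, distributing it (e.g. from a cloud service) is enough to keep many rendering components consistent, and replaying a sequence of vectors replays the experience.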
We argue that this media model addresses a number of cross-domain challenges in online media. To illustrate its potential, the project focuses on use cases from different application domains where media technologies are used, including online media (entertainment) as well as emergency response operations. Still, the model itself is generic, and opportunities identified for emergency operations may well transfer to other application domains.
The goal is to define the technical concepts required by this model, and to demonstrate feasibility and benefit in selected scenarios relevant to emergency response operations. A further goal is to confirm the hypothesis that this new media model represents a cost-effective, general, and practical approach, opening up further innovation.
This project identifies control of media experiences as an underlying challenge in online media, and proposes that control should be represented as persistent, time-dependent resources, independent of data and media content. On this basis, the project develops a generic, cloud-based control mechanism as a technical foundation for control-driven media.
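As a minimal sketch of what a persistent, time-dependent control resource could look like, the example below logs every control update with its timeline timestamp, so the value in effect at any timeline position can be recovered later. This supports replay and time-shifting of control state; the representation and names are illustrative assumptions, not the project's actual mechanism.

```python
import bisect

class TimedVariable:
    """Illustrative control resource (hypothetical design): a variable whose
    full update history is kept, indexed by timeline timestamp, so its value
    at any point on the timeline can be queried -- enabling replay and
    time-shifted viewing of control state."""

    def __init__(self, initial=None):
        self._times = [float("-inf")]   # update timestamps, sorted
        self._values = [initial]        # value set at each timestamp

    def set(self, timestamp, value):
        """Record a control update at a timeline position."""
        i = bisect.bisect_right(self._times, timestamp)
        self._times.insert(i, timestamp)
        self._values.insert(i, value)

    def get(self, timestamp):
        """Value in effect at `timestamp`: the most recent update at or
        before that timeline position."""
        i = bisect.bisect_right(self._times, timestamp) - 1
        return self._values[i]
```

Because the resource is independent of any particular data source or renderer, the same logged control history could drive a live rendering, a time-shifted replay, or a post-production modification.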
In the first stage of the project, a generic concept for control has been defined, based on scenarios from a broad range of application domains, including emergency response operations.
In the second stage, the project has developed a programming model supporting the new control concept, enabling continued experimentation and application development. This progress has also been documented in a scientific publication (currently under review).