Technical advancements allow us to experience remote spaces, but only from the viewpoint chosen by the controller on the recording side. For many applications, however, it is desirable that observers retain control over the spatial location of their view.
In ORCAD3, we aim to combine our existing expertise to allow remote control of the view in a scalable, QoE-aware manner. ORCAD3 explores model-based encoding and transmission of non-rigid 3D objects in real time as they change dynamically. Another way to support this kind of interaction is already in use today: in cloud gaming, among other applications, the player sends view updates to a server, which renders a video with the requested view parameters and streams it back to the player. We consider the transmission of models more sensible, because server-side rendering does not scale to many receivers, and it prevents the use of storage, caching, and replication to let several receivers render alternative views of the same scene. Server-side rendering also places control over rendering quality with the sender, although it is the receiver that understands the context in which the viewer perceives the scene.
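The scalability argument can be made concrete with a simple cost model (a hypothetical sketch; the function names and cost parameters are illustrative, not part of ORCAD3): with server-side rendering, the server performs one render and encode pass per receiver per view update, whereas with model transmission the server encodes the model once and each receiver renders its own view locally.

```python
def server_side_cost(n_receivers: int, render_cost: float, encode_cost: float) -> float:
    # Server-side rendering: one render + video-encode pass per receiver,
    # so server load grows linearly with the number of receivers.
    return n_receivers * (render_cost + encode_cost)

def model_transmission_cost(model_encode_cost: float) -> float:
    # Model transmission: the server encodes the model once; the encoded
    # stream can then be stored, cached, and replicated, and every
    # receiver renders its own alternative view locally.
    return model_encode_cost
```

Under this toy model, serving 100 receivers by server-side rendering costs 100 render/encode passes, while model transmission keeps the sender-side cost constant regardless of audience size.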
The 3D compression required to make this approach possible has become a popular topic in the graphics and signal processing communities. Building on these advances, several works on model representation have been proposed, in particular by the multimedia community, but they are mostly restricted to a single rigid 3D object, or to complete scenes represented as images with depth maps. In ORCAD3, we explore the more ambitious scenario of non-rigid objects that adhere to a known model but change over time.
The transmission should benefit from identifying the dynamic parts of the model, and the modeling should be adapted to optimize the quality-to-bandwidth ratio. These objectives are particularly well suited to a collaboration between the two applicant teams.
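The two objectives above can be sketched as follows (a minimal illustration under assumed interfaces; `diff_update`, `select_lod`, and the parameter dictionaries are hypothetical names, not part of the project): only the model parameters that actually changed are transmitted, and the level of detail is chosen to maximize quality within the available bandwidth.

```python
def diff_update(prev_params: dict, curr_params: dict, eps: float = 1e-6) -> dict:
    # Identification of dynamic parts: transmit only the parameters whose
    # value changed beyond a small threshold since the previous frame.
    return {k: v for k, v in curr_params.items()
            if abs(v - prev_params.get(k, 0.0)) > eps}

def select_lod(levels: list, budget: float) -> tuple:
    # Quality-vs-bandwidth optimization: among (bitrate, quality) pairs,
    # pick the highest-quality level whose bitrate fits the budget;
    # fall back to the cheapest level if none fits.
    feasible = [lv for lv in levels if lv[0] <= budget]
    if feasible:
        return max(feasible, key=lambda lv: lv[1])
    return min(levels, key=lambda lv: lv[0])
```

For example, if only one joint of an articulated model moves between frames, `diff_update` reduces the update to that single parameter, and `select_lod` then fits the chosen representation to the current channel capacity.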