
IKTPLUSS-IKT og digital innovasjon

Ubiquitous cognitive computer vision for marine services

Alternative title: Marine tjenester basert på kognitivt datasyn

Awarded: NOK 15.6 mill.

Deep learning is a machine learning method that has received a lot of attention and has been referred to as the revolutionary technique that quietly changed machine vision forever. This reputation stems from these methods showing unprecedented accuracy in visual recognition competitions, approaching human performance. The methods are based on deep neural networks that learn the correspondence between data and content descriptions from examples. A key component in the success of deep learning is the availability of a large number of labelled examples, i.e., images that are marked with information on what they contain. In the COGMAR project, we focus on the analysis of complex image data, often with a limited number of annotations.

Vast amounts of marine data, ranging from optical imagery and video to acoustic surveys, are collected today. These data contain valuable information needed to ensure sustainable fisheries and harvesting. Next-generation marine services are expected to require real-time analysis, which will further increase the amount of data. The aim of the project is to contribute to the development of automatic solutions for extracting information from these big, complex image data, bringing their exploitation in marine science to a new level and enabling extraction of new knowledge and continuous monitoring.

We have developed several deep learning methods for analyzing different types of marine data: trawl-camera images, otolith images and acoustic data. For the trawl-camera data, we have developed techniques that automatically detect each individual fish and classify its species. Because of the limited amount of training data, we have developed a simulation scheme that simulates fish in the trawl under a range of conditions; these simulated data are used to train the network. For the otolith data, we have developed methods that automatically estimate the age of Greenland halibut individuals by analyzing images of their otoliths.
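The idea of generating labelled detection data by simulation can be illustrated with a minimal numpy sketch. Everything here is hypothetical and greatly simplified (the project's actual simulator is not described in detail): a crude fish "silhouette" is composited onto a trawl-camera background at random positions, and the paste coordinates become the bounding-box labels used to train a detector.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trawl_frame(background, fish_template, n_fish=3):
    """Composite simulated fish silhouettes onto a background image and
    return the frame together with the bounding boxes used as labels.
    A toy stand-in for a real rendering-based simulation pipeline."""
    frame = background.copy()
    h, w = frame.shape
    fh, fw = fish_template.shape
    boxes = []
    for _ in range(n_fish):
        y = rng.integers(0, h - fh)
        x = rng.integers(0, w - fw)
        # Paste the fish where the template is non-zero, keep background elsewhere.
        region = frame[y:y + fh, x:x + fw]
        frame[y:y + fh, x:x + fw] = np.where(fish_template > 0, fish_template, region)
        boxes.append((x, y, fw, fh))  # (x, y, width, height) detection label
    return frame, boxes

# Toy example: dark background, bright rectangular "fish".
background = np.zeros((64, 64), dtype=np.uint8)
fish = np.full((8, 16), 200, dtype=np.uint8)
frame, boxes = simulate_trawl_frame(background, fish, n_fish=3)
```

Because the simulator knows exactly where each fish was placed, every generated frame comes with perfect annotations for free, which is the main attraction of simulation when real labelled trawl footage is scarce.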
We have performed a study of how the network reads the otolith images and found that the deep learning techniques read the images completely differently from humans. For the otolith data, we have also found that performance is worse on images from another laboratory, and we have therefore developed methods that adapt new data sources to the network trained on Norwegian otolith images. For the acoustic data, we have developed methods to estimate the amount of fish and the fish species, with particular emphasis on handling the data variation between different surveys, and we have extended these to include context information, such as depth and distance to the seabed, to increase performance. We have also developed methods that are able to exploit unannotated data for training neural networks (so-called semi-supervised learning). The methodology for analyzing acoustic data developed in COGMAR has now been implemented in Kongsberg Maritime's Blue Insight platform.
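One common family of semi-supervised techniques is self-training with pseudo-labels. The project text does not specify which semi-supervised method was used, so the following is only an illustrative sketch with a toy nearest-centroid classifier in place of a neural network: fit on the labelled data, pseudo-label the unlabelled samples, keep only the confident ones, and refit.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Class centroids from labelled feature vectors."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_with_confidence(centroids, X):
    """Nearest-centroid prediction plus a crude confidence: the margin
    between the distances to the two closest centroids."""
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    order = np.argsort(d, axis=0)
    idx = np.arange(X.shape[0])
    pred = np.array(classes)[order[0]]
    margin = d[order[1], idx] - d[order[0], idx]
    return pred, margin

def pseudo_label(X_lab, y_lab, X_unlab, threshold=1.0):
    """One round of self-training: add confidently pseudo-labelled
    unlabelled samples to the training set and refit the model."""
    centroids = nearest_centroid_fit(X_lab, y_lab)
    pred, margin = predict_with_confidence(centroids, X_unlab)
    keep = margin > threshold          # discard low-confidence pseudo-labels
    X_new = np.vstack([X_lab, X_unlab[keep]])
    y_new = np.concatenate([y_lab, pred[keep]])
    return nearest_centroid_fit(X_new, y_new)

# Toy example: one labelled point per class, three unlabelled points.
X_lab = np.array([[0.0, 0.0], [10.0, 10.0]])
y_lab = np.array([0, 1])
X_unlab = np.array([[0.5, 0.5], [9.5, 9.5], [5.0, 5.0]])
centroids = pseudo_label(X_lab, y_lab, X_unlab, threshold=1.0)
```

The ambiguous point at (5, 5) has zero margin and is dropped, while the two confident points refine their class centroids. The same loop applies unchanged when the "features" are embeddings from a deep network.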

Deep learning has been called the revolutionary technique that quietly changed machine vision forever, but it is at present mainly applicable to standard RGB images of natural scenes or objects; for other types of imagery it works only when a substantial amount of labelled data is available, which is seldom the case. This project aims to enable this technology for computer vision problems anywhere, by developing easy-to-use cognitive solutions also for non-standard images and thereby extending the use of autonomous cognitive computer vision systems to new application areas. The methodology will be general and transferable to other domains such as medical imagery, remote sensing and various industrial applications. Within the project, the aim is to solve key big-data computer vision challenges in the marine sector.

The overall concept of the project is to exploit the power of deep convolutional neural networks (CNNs) by developing the new learning solutions necessary to classify, localize and segment objects in non-standard, sparsely labelled image data. Motivated by the methods' ability to generalize and the fact that unlabelled data are often inexpensive to acquire, our approach is based on three main concepts: (i) cross-domain transfer learning, (ii) semi-supervised learning, and (iii) data augmentation and simulation.

Fisheries and aquaculture are major industries in Norway, and marine image data are acquired in a wide range of formats and modalities for various tasks. Automatic solutions for extracting information from these big, non-standard image data will bring their exploitation in marine science to a new level, enabling extraction of new knowledge and continuous monitoring of marine ecosystems. Solutions from the project will also contribute to innovation for industries manufacturing systems for automated monitoring of fish and marine environments.
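The simplest form of cross-domain transfer learning, concept (i) above, is to keep a backbone pre-trained on natural images frozen and train only a small head on the new domain. The sketch below is a hypothetical, numpy-only illustration: a fixed random linear map with ReLU stands in for the frozen CNN backbone, and a logistic-regression head is fitted on top of its features by gradient descent.

```python
import numpy as np

def pretrained_features(images, W_frozen):
    """Stand-in for a frozen CNN backbone: a fixed linear map plus ReLU.
    In practice this would be a network pre-trained on natural images."""
    return np.maximum(images @ W_frozen, 0.0)

def train_head(feats, labels, lr=0.1, steps=300):
    """Fit only a logistic-regression head on the frozen features,
    i.e. the backbone weights are never updated."""
    w = np.zeros(feats.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid probabilities
        grad = p - labels                           # gradient of the logits
        w -= lr * feats.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

# Toy "images": two well-separated clusters in a 4-dimensional input space.
rng = np.random.default_rng(1)
A = rng.normal(0.0, 0.5, (20, 4)) + 2.0
B = rng.normal(0.0, 0.5, (20, 4)) - 2.0
X = np.vstack([A, B])
y = np.concatenate([np.ones(20), np.zeros(20)])

W_frozen = rng.normal(size=(4, 6))          # "pre-trained", never updated
F = pretrained_features(X, W_frozen)
w, b = train_head(F, y)
p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
accuracy = ((p > 0.5) == (y == 1)).mean()
```

Training only the head needs far fewer labelled examples than training the full network, which is precisely why this strategy suits sparsely labelled, non-standard imagery.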

