Machine learning and deep learning are increasingly, and successfully, applied in ecological research. In this PhD project, we develop methods related to image processing, GIS and remote sensing, acoustics and soundscapes, and environmental DNA (eDNA).
Image-based detection has great potential for automating and improving procedures in the collection of biological and ecological data sets. Preliminary studies have been conducted to identify plants, as well as individual trout and salamanders, from photos. This work is only a first step towards a precise, fully automatic system, and we are continuing to develop these methods.
Monitoring of the acoustic environment, or soundscape, is fast becoming a key tool in ecosystem and species management. Automated acoustic survey methods are needed to meet new requirements for studying the effects of global change on biological diversity. Low-cost acoustic loggers are now available, paving the way for such approaches. Nevertheless, we lack good analysis methods. In the PhD project's first year, we have primarily worked with machine learning to analyze soundscapes.
Species are increasingly modifying their distributions and migration timings due to climate change. Effective ecological management and conservation efforts depend on identifying current distributions and predicting shifts. While remote sensing data such as land cover or weather are established covariates for species occurrence, environmental soundscapes increasingly enable low-cost, passive monitoring. Combining soundscapes with established abiotic species covariates can create a more complete ecological fingerprint for modeling biodiversity trends. Here, we study and compare the predictive power of soundscapes, local weather, and national U.S. weather data across a four-year period in Sapsucker Woods in Ithaca, New York.
Avian species presence data were sourced from species survey checklists in the eBird Basic Dataset from January 2016 to August 2020, restricted to a 200 m radius around the microphone array adjacent to the Cornell Lab of Ornithology. Only complete survey checklists were included, allowing for the creation of a species absence dataset. Additionally, only checklists from traveling and stationary surveys were retained, excluding large-scale area species counts. To prevent temporal bias in the model, the number of checklists in each year was capped at the lowest annual count observed across 2016-2019; years exceeding this count were randomly subsampled down to it.
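The checklist filtering described above can be sketched as a small pandas routine. This is an illustrative implementation, not the project's actual code; the column names (`all_species_reported`, `protocol`, `year`) are assumptions modeled on the eBird Basic Dataset schema.

```python
import pandas as pd


def filter_checklists(df: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    """Filter eBird checklists as described in the text.

    Keeps complete checklists from traveling/stationary protocols, then
    randomly subsamples each year down to the smallest annual checklist
    count to avoid temporal bias. Column names are illustrative.
    """
    # Complete checklists only, so non-detections can be treated as absences.
    df = df[df["all_species_reported"] == 1]
    # Drop large-scale area counts; keep traveling and stationary surveys.
    df = df[df["protocol"].isin(["Traveling", "Stationary"])]
    # Cap every year at the smallest annual count via random subsampling.
    n_min = df.groupby("year").size().min()
    return (
        df.groupby("year", group_keys=False)
          .apply(lambda g: g.sample(n=n_min, random_state=seed))
          .reset_index(drop=True)
    )
```

In practice the same filtering can be done with the `auk` R package, which is the standard tool for the eBird Basic Dataset; the pandas version above just makes the three filtering steps explicit.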
The audio dataset consists of more than four years of continuous audio data from January 2016 to August 2020, recorded on a GRAS 41AC precision microphone outside the Cornell Lab of Ornithology (Latitude: 42.47955, Longitude: -76.45132), digitized with a Barix Instreamer ADC at 48 kHz, and stored as lossless .flac files in 15-minute increments. All audio data was converted into VGGish embedding features on the Saga HPC cluster at Sigma2. VGGish is a TensorFlow model trained for audio classification on a large dataset of YouTube videos (TensorFlow GitHub). It converts 0.96 s segments of audio data into 128-dimensional float feature vectors that are optimized for audio classification. Audio classification models applying audio feature embeddings perform comparably to, or better than, CNN models trained directly on audio spectrograms, and such embeddings have been used previously in the ecoacoustics field. These vectors were averaged at the daily scale, as well as over a 3:30 AM-7:30 AM window for each day. This time window was chosen based on an analysis of six common temperate forest bird species, which found that dawn choruses tend to begin between 20 minutes before and 100 minutes after nautical twilight.
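The averaging step above can be sketched as follows. The sketch assumes the per-segment VGGish embeddings have already been computed and are available as a `(n_segments, 128)` array with a fractional hour-of-day timestamp per segment; the function names are illustrative.

```python
import numpy as np


def daily_mean(embeddings: np.ndarray) -> np.ndarray:
    """Average all segments of one day into a single 128-D feature vector.

    embeddings: (n_segments, 128) array, one row per 0.96 s VGGish segment.
    """
    return embeddings.mean(axis=0)


def window_mean(embeddings: np.ndarray, timestamps: np.ndarray,
                start_hour: float = 3.5, end_hour: float = 7.5) -> np.ndarray:
    """Average embeddings over a time-of-day window (default 3:30-7:30 AM).

    timestamps: (n_segments,) fractional hour of day for each segment,
    e.g. 3.5 means 3:30 AM. Returns the mean 128-D vector in the window.
    """
    mask = (timestamps >= start_hour) & (timestamps < end_hour)
    return embeddings[mask].mean(axis=0)
```

Averaging compresses roughly 90,000 segment vectors per day into one daily and one dawn-chorus feature vector, which keeps the downstream models small at the cost of discarding within-day variation.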
We find strong correlations between soundscape features and metrics of biodiversity in the area. Using supervised machine learning models and feature-importance analysis, we identify temporal patterns in the predictive power of environmental soundscapes and weather features. We are also using the North American Daymet climate dataset to detail how audio and climate features compare in predicting bird species and biodiversity. Lastly, we are similarly exploring how historical sound and climate features predict migration timings.
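The comparison of feature groups can be illustrated with a generic workflow: fit a supervised model on concatenated soundscape and weather features, then use permutation importance to attribute predictive power to each group. This is a minimal sketch on synthetic data, not the project's analysis; the feature dimensions and the random forest model are assumptions chosen for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 600

# Synthetic stand-ins: 128 soundscape embedding dims, 4 weather covariates.
X_sound = rng.normal(size=(n, 128))
X_weather = rng.normal(size=(n, 4))
# Hypothetical presence label, driven mostly by one soundscape dimension.
y = (X_sound[:, 0] + 0.3 * X_weather[:, 0]
     + rng.normal(scale=0.5, size=n)) > 0

X = np.hstack([X_sound, X_weather])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Permutation importance on held-out data, summed per feature group.
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
sound_imp = imp.importances_mean[:128].sum()
weather_imp = imp.importances_mean[128:].sum()
print(f"soundscape: {sound_imp:.3f}  weather: {weather_imp:.3f}")
```

Permutation importance is computed on held-out data so that the attribution reflects generalization rather than training-set fit; repeating the analysis within temporal strata (e.g. by season) is one way to surface the temporal patterns mentioned above.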
We continue to demonstrate the added value of ecoacoustics by expanding data collection. In connection with this, we are collecting sound from many different sites and ecosystems across Norway through the project «Sound of Norway».