

MIRAGE: A Comprehensive AI-Based System for Advanced Music Analysis

Alternative title: MIRAGE: Et integrert AI-basert system for avansert musikkanalyse

Awarded: NOK 10.0 mill.

Project Manager:

Project Number: 287152

Application Type:

Project Period: 2019 - 2024

Location:
One main goal is to greatly improve the machine's capability to listen to and understand music. We design groundbreaking technologies that help everybody better understand and appreciate music; one main application is to make music more accessible and engaging. We design computational frameworks that extract a large set of information from music, such as timbre, notes, rhythm, tonality, and structure. We currently focus on Hardanger fiddle music: automatically transcribing recorded performances into scores, detecting beats, and extracting subtle aspects of musical structure. By combining advanced computational music analysis with insights from musicology, we can bring musicologists closer to the answers they are looking for. We are now approaching a stage in the research where we can build tools that understand the logic of the music.

We train computers to detect notes automatically. The music the machine is intended to transcribe is the National Library of Norway's catalogue of folk music. Norwegian folk music, and especially the Hardanger fiddle, is difficult material for the machine, and the large number of examples needed was initially not available. We therefore asked musicians, the professional fiddler Olav Luksengård Mjelva and students from the Norwegian Academy of Music, to play for us, and designed software in which the sounds were visualized so that they could place the notes for us. We use these manual annotations to teach the machine how to automatically detect the notes played by the Hardanger fiddle. The next step, which is where we are now, is detecting the beats; this is complex in fiddle music. A large range of musical styles will progressively be considered: traditional, classical, and popular; acoustic and electronic; and from various cultures. The rich description of music provided by our new computational tools is also used to investigate elaborate notions such as emotions, groove, and mental images.
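The project's actual transcription models are not described on this page. As a purely illustrative toy sketch of the general idea, one can imagine each analysis frame reduced to a couple of hypothetical features and paired with a manual annotation saying whether a note is sounding, then fed to a simple learner; all names and features below are assumptions for illustration, not the MIRAGE system:

```python
# Toy sketch: learning a frame-wise note detector from manual annotations.
# Each frame is reduced to two hypothetical features (energy at the expected
# pitch, energy at its harmonics); labels play the role of the musicians'
# manual annotations. A single perceptron stands in for the far richer
# models a real transcription system would need.

def train_perceptron(frames, labels, epochs=20, lr=0.1):
    """Fit weights w and bias b so that sign(w.x + b) matches the labels."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(frames, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred                     # -1, 0, or +1
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def detect(frame, w, b):
    """Return 1 if the model believes a note is sounding in this frame."""
    return 1 if w[0] * frame[0] + w[1] * frame[1] + b > 0 else 0

# Hand-labelled toy frames: (pitch energy, harmonic energy) -> note present?
annotated = [((0.9, 0.7), 1), ((0.8, 0.9), 1), ((0.1, 0.2), 0), ((0.2, 0.1), 0)]
frames = [f for f, _ in annotated]
labels = [l for _, l in annotated]
w, b = train_perceptron(frames, labels)
```

The point of the sketch is only the workflow: manual annotations become training labels, and the trained model is then applied to unannotated frames.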
Beyond the academic domains of musicology, music informatics, and music cognition, this project is oriented towards the development of groundbreaking technologies for the general public, such as generating videos on the fly for any piece of music. One challenge in music listening is that it all depends on the listener's implicit ear training. Automated, immersive, interactive visualisations help listeners understand and better appreciate the music they like (or don't like yet), so that music becomes more accessible and engaging. While algorithms and technology have so far mostly steered listeners towards more of the same music, this new research direction will benefit not only individual listeners but the music itself, and the diversity of the entire music ecosystem.

With the apps planned for development, you will be able to browse the folk music catalogue and go on a journey of discovery. When you find a tune you like, the app can point you towards something in a similar style. One way you can interact with the music is by watching it: a video engages more of your senses, which in turn makes you feel more, and this can help many people understand folk music better. We are trying to find the fine balance between adding enough detail to make you understand more than you would by just listening, and too much detail, which may turn your attention away from the music. It can even become a kind of gamification: an interactive app where you start with a simple visualization and, as you master and understand what is going on in the music, gradually get a more complex version.

The same technology that will become an app on your smartphone can also be used on stage. For the MusicLab concert in Copenhagen in October 2021, while the Danish String Quartet performs Bach's The Art of Fugue, a real-time video is projected behind the stage, in which we graphically explain the music as it unfolds over time.
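The concert visuals themselves are custom real-time graphics. As a minimal stand-in for the underlying layout, assuming notes have already been transcribed as (onset, duration, pitch) tuples, one can rasterize them into a piano-roll grid, the basic arrangement behind many such visualizations; the function name and note format here are illustrative assumptions:

```python
# Minimal sketch of a piano-roll layout, the kind of grid many music
# visualizations build on. Notes are assumed already transcribed as
# (onset, duration, pitch) tuples in time steps and semitone numbers;
# nothing here is taken from the MIRAGE codebase.

def piano_roll(notes, width):
    """Render notes into text rows of '#' (sounding) and '.' (silent).
    Rows are ordered from highest pitch down, as on a score."""
    pitches = sorted({p for _, _, p in notes}, reverse=True)
    grid = {p: ["."] * width for p in pitches}
    for onset, duration, pitch in notes:
        for t in range(onset, min(onset + duration, width)):
            grid[pitch][t] = "#"
    return ["".join(grid[p]) for p in pitches]

# Two voices entering one after the other, as in a fugue exposition.
notes = [(0, 4, 67), (4, 4, 60)]
for row in piano_roll(notes, width=8):
    print(row)
# The second voice's entry is visible as a shifted copy of the first.
```

A graphical version would draw rectangles instead of characters, but the mapping from transcribed notes to screen positions is the same.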
As the four musicians play through the themes, the audience can see how the different voices repeat each theme. Our hypothesis is that untrained listeners in particular understand more if they can also see what is happening musically. The themes appear one after the other on the screen, so when the musicians start a new repetition of a theme, you see the patterns. For another MusicLab concert, in Oslo in November 2021, a real-time video reconstructs the synaesthetic experience of a guitarist improvising live. Applications to music therapy are also being considered.

As part of the project, a two-day online symposium was organised in June 2021, presenting our ongoing work and inviting a large panel of European researchers in computational music analysis to present their perspectives. One main aim of the symposium was to strengthen the dialogue between computer science and musicology. A video recording of the whole symposium is available online.

Video interview and demo: https://www.youtube.com/watch?v=B4LFjIBLFEI

-

MIRAGE aims to conceive a ground-breaking AI system for music analysis that will generate a rich and detailed description of music along a large range of parametric and structural dimensions. This will enable musicologists to acquire a systematic and explicit understanding of music at a much deeper level than traditional methods or the previous state of the art in computational musicology allow. The scientific ambitions of MIRAGE are not solely related to the conception of AI and signal-processing algorithms able to perform advanced operations of interest to musicology; they also include establishing detailed and explicit answers to theoretical musicological problems related to the formalization of music analysis. The rich description of music will also foster the enrichment of cognitive models predicting listeners’ percepts and reactions to music, for instance in the study of emotion in music. Moreover, music cognition needs to be understood as a complex system composed of highly interdependent modules, and a computational system built on a set of general heuristics, able to mimic music cognition across a large range of music, would offer a valuable blueprint for the establishment of cognitive models. The software technologies developed in the project will also have a wider transformational impact on how tomorrow’s technologies are used to understand and appreciate music, and we will also apply them to music therapy. The approach follows a transdisciplinary perspective, articulating traditional musicology, cognitive science, signal processing, and artificial intelligence. This project, in collaboration with the National Library of Norway, a world leader in digitizing cultural heritage, will help further develop the field of Digital Humanities, which is under-developed in Norway.

Publications from Cristin

Funding scheme:

IKTPLUSS-IKT og digital innovasjon