We are honoured to introduce our three keynote speakers for this year.
Keynote #1 – Bridging Symbolic and Audio Data: Score-Informed Music Performance Data Estimation
Johanna Devaney (Brooklyn College and the Graduate Center, CUNY)
The empirical study of musical performance dates back to the birth of recorded media. From the laborious manual processes used in the earliest work to the current data-hungry end-to-end models, the estimation, modelling, and generation of expressive performance data remain challenging. This talk will consider the advantages of score-aligned performance data estimation, both for guiding signal processing algorithms and for leveraging musical score data and other types of linked symbolic data (such as annotations) in analysing and modelling performance-related data. While the focus of this talk will primarily be on musical performance, connections to speech data will also be discussed, as well as the resultant potential for cross-modal analysis.

Johanna Devaney is an Associate Professor at Brooklyn College and the Graduate Center, CUNY, where she teaches courses in music theory, music technology, and data analysis. Johanna’s research primarily examines the ways in which recordings can be used to study and model performance, and she has developed computational tools to facilitate this. Her research on computational methods for audio understanding has been funded by the National Endowment for the Humanities (NEH) Digital Humanities program and the National Science Foundation (NSF). Johanna currently serves as the Co-Editor-in-Chief of the Journal of New Music Research.
Keynote #2 – Reverberation – Dereverberation: The promise of hybrid models
Gaël Richard (Télécom Paris, Institut Polytechnique de Paris)
The propagation of acoustic waves within enclosed environments is inherently shaped by complex interactions with surrounding surfaces and objects, leading to phenomena such as reflections, diffractions, and the resulting reverberation. Over the years, a wide range of reverberation models have been developed, driven by both theoretical interest and practical applications, including artificial reverberation synthesis—where realistic reverberation is added to anechoic signals—and dereverberation, which aims to suppress reverberant components in recorded signals. In this keynote, we will provide a concise overview of some reverberation modeling approaches and illustrate how these models can be integrated into hybrid frameworks that combine classical signal processing, physical modeling, and machine learning techniques to advance artificial reverberation synthesis or dereverberation.

Gaël Richard received the State Engineering degree from Télécom Paris, France, in 1990, and the Ph.D. degree and Habilitation from the University of Paris-Saclay in 1994 and 2001, respectively. After the Ph.D., he spent two years at Rutgers University, Piscataway, NJ, in the Speech Processing Group of Prof. J. Flanagan. From 1997 to 2001, he successively worked for Matra, Bois d’Arcy, France, and for Philips, Montrouge, France. He then joined Télécom Paris, where he is now a Full Professor in audio signal processing. He is also the co-scientific director of the Hi! PARIS interdisciplinary center on AI and data analytics. He is a co-author of over 250 papers and an inventor on 10 patents. His research interests are mainly in the field of speech and audio signal processing and include topics such as source separation, machine learning methods for audio/music signals, and music information retrieval. He is a Fellow of the IEEE and was the chair of the IEEE SPS Technical Committee on Audio and Acoustic Signal Processing (2021-2022). In 2020, he received the Grand Prize of the IMT-National Academy of Sciences. In 2022, he was awarded an Advanced ERC grant from the European Union for a project on machine listening and artificial intelligence for sound.
Keynote #3 – Effecting Audio: An Entangled Approach to Signals, Concepts and Artistic Contexts
Andrew P. McPherson (Imperial College London)
I propose to approach audio effects not as technical objects, but as a kind of activity. The shift from noun (“audio effect”) to verb (“effecting audio”, in the sense of applying transformations to sound) calls attention to the motivations, discourses and contexts in which audio processing, analysis and synthesis take place. We build audio-technical systems for specific reasons in specific situations. No system is ever devoid of sociocultural context or human intervention, and even the simplest technologies, when examined in situ, can exhibit fascinating complexity.
My talk will begin with a stubbornly contrarian take on some seemingly obvious premises of musical audio processing. Physicist and feminist theorist Karen Barad writes that “language has been granted too much power.” I would like to propose that, as designers and researchers, we can let words about music take precedence over the messy and open-ended experience of making music, but that becoming overly preoccupied with language risks propagating clichés and reinforcing cultural stereotypes. Drawing on recent scholarship in human-computer interaction and science and technology studies, I will outline some alternative approaches and possible futures for designing digital audio technology when human and technical factors are inextricably entangled. I will illustrate these ideas with recent projects from the Augmented Instruments Laboratory, with a focus on rich bidirectional couplings between digital and analog electronics, acoustics and human creative experience.

Andrew McPherson (https://andrewmcpherson.org) is Professor of Design Engineering and Music at Imperial College London, where he leads the Augmented Instruments Laboratory (https://instrumentslab.org), a research team creating new musical instruments, studying the encounters between musicians and instruments, and developing high-performance technologies for real-time interactive audio. Prior to joining Imperial, Andrew received a Master’s in electrical engineering from MIT and a PhD in music composition from the University of Pennsylvania, and spent over a decade at the Centre for Digital Music at Queen Mary University of London.
Andrew has a particular interest in hybrid acoustic-electronic instruments; his magnetic resonator piano, an electromagnetically augmented grand piano, has been used in dozens of compositions and featured in TV and film scores and albums spanning a wide stylistic range. He is also a co-founder of Bela.io, a startup making high-performance open-source embedded audio computing systems. Andrew currently holds an ERC/UKRI Consolidator Grant (“RUDIMENTS”) exploring the cultural implications of engineering decisions in music technology, as well as a fellowship from the UK Royal Academy of Engineering in embedded music computing.