Tutorials

We are pleased to host four tutorials on 2nd September 2025, the first day of the conference.

Tutorial #1 – Non-Iterative Simulation: A Numerical Analysis Viewpoint


Alessia Andò (University of Udine)

Stiff ordinary differential equations (ODEs) frequently appear in scientific and engineering applications, necessitating numerical methods that ensure stability and efficiency. Non-iterative approaches for stiff ODEs provide an alternative to fully implicit schemes, whose computation time is often unpredictable to a degree that can be unacceptable in real-time virtual analog applications.
This tutorial will focus in particular on Rosenbrock-Wanner (ROW) methods and exponential integration techniques, whose origins date back to the 1960s. ROW methods are linearly implicit: rather than solving a nonlinear system at each step, they solve a small, fixed number of linear systems. Exponential integrators, on the other hand, incorporate stiff dynamics by leveraging matrix exponentials, and offer advantages for problems whose stiffness or oscillatory behaviour is driven mainly by their linear part. We will discuss the derivation, stability properties, and practical implementation of these methods, and compare their strengths, limitations, and potential for real-world virtual analog applications through illustrative examples.
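To make the distinction concrete, the sketch below implements one step of the simplest linearly implicit (Rosenbrock-Euler) method and one step of exponential Euler in plain NumPy/SciPy. It is only an illustration of the two ideas, using our own function names and the simplest parameter choices, not material from the tutorial itself.

```python
import numpy as np
from scipy.linalg import expm

def rosenbrock_euler_step(f, jac, y, h, gamma=1.0):
    """One step of the simplest Rosenbrock (linearly implicit Euler) method:
    solve (I - h*gamma*J) k = f(y), then y_next = y + h*k.
    A single linear solve per step -- no Newton iteration."""
    J = jac(y)
    I = np.eye(len(y))
    k = np.linalg.solve(I - h * gamma * J, f(y))
    return y + h * k

def exponential_euler_step(A, g, y, h):
    """One step of exponential Euler for y' = A*y + g(y):
    y_next = exp(h*A) y + h*phi1(h*A) g(y), with phi1(x) = (exp(x) - 1)/x.
    phi1 is evaluated via an augmented matrix exponential, avoiding inverses."""
    n = len(y)
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = h * A
    M[:n, n] = h * g(y)
    E = expm(M)                        # top-right block equals h*phi1(h*A) g(y)
    return E[:n, :n] @ y + E[:n, n]
```

Both routines trade the unpredictable iteration count of a Newton-based implicit solver for a fixed amount of work per step, which is precisely what makes this class of methods attractive for real-time use.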

Dr. Alessia Andò is a postdoctoral fellow at the Department of Mathematics, Computer Science and Physics, University of Udine, where she received her PhD in 2020. She also worked as a postdoc at GSSI (Gran Sasso Science Institute), Italy.
Within the general area of Numerical Analysis, her main research interests are ordinary and delay differential equations and the related dynamical systems and models. Her focus is on both the numerical time integration and the dynamical analysis of these models, including the computation of invariant sets and the study of their asymptotic stability.

Tutorial #2 – Logarithmic Frequency Resolution Filter Design for Audio


Balázs Bank (Budapest University of Technology and Economics)

Digital filters are often used to model or equalize acoustic or electroacoustic transfer functions. Applications include headphone, loudspeaker, and room equalization, as well as modeling the radiation of musical instruments for sound synthesis. As the final judge of quality is the human ear, filter design should take into account the quasi-logarithmic frequency resolution of the auditory system. This tutorial presents various approaches for achieving this goal, including warped FIR and IIR, Kautz, and fixed-pole parallel filters, and discusses their differences and similarities. Application examples will include physics-based sound synthesis, loudspeaker and room equalization, and the equalization of a spherical loudspeaker array.
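As a rough, self-contained illustration of the fixed-pole parallel idea (our own sketch, not code from the presenter's toolbox), the snippet below places second-order sections at logarithmically spaced pole frequencies and fits their numerator weights to a target impulse response by linear least squares; all function and variable names are illustrative.

```python
import numpy as np
from scipy.signal import lfilter

def parallel_filter_fit(target_ir, fs, pole_freqs_hz, pole_radius=0.98):
    """Time-domain least-squares fit of the numerator weights of a
    fixed-pole parallel bank of second-order sections. With the pole
    frequencies spaced logarithmically, frequency resolution is allocated
    roughly as the ear allocates it."""
    N = len(target_ir)
    delta = np.r_[1.0, np.zeros(N - 1)]
    basis, denominators = [], []
    for f in pole_freqs_hz:
        p = pole_radius * np.exp(2j * np.pi * f / fs)      # fixed pole pair p, conj(p)
        a = np.array([1.0, -2.0 * p.real, abs(p) ** 2])    # fixed denominator of the section
        h0 = lfilter([1.0], a, delta)                       # impulse response of 1/A(z)
        basis += [h0, np.r_[0.0, h0[:-1]]]                  # ... and of z^-1/A(z)
        denominators.append(a)
    M = np.column_stack(basis)
    w, *_ = np.linalg.lstsq(M, target_ir, rcond=None)       # numerators enter linearly
    return w.reshape(-1, 2), denominators                   # (b0, b1) per section

# Example: 32 sections with pole frequencies from 20 Hz to 20 kHz
fs = 48000
pole_freqs = np.geomspace(20.0, 20000.0, 32)
```

Because the poles are fixed in advance, the fitting problem is linear and free of the convergence issues of general IIR design; the tutorial covers this family of methods, together with warped and Kautz filters, in far more detail.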

Balázs Bank is an associate professor at the Department of Artificial Intelligence and Systems Engineering, Budapest University of Technology and Economics (BUTE), Hungary. He received his M.Sc. and Ph.D. degrees in Electrical Engineering from BUTE in 2000 and 2006, respectively, and his Hungarian Academy of Sciences (MTA) doctoral degree in 2023. In the academic year 1999/2000 and in 2007 he was with the Laboratory of Acoustics and Audio Signal Processing, Helsinki University of Technology, Finland, and in 2008 with the Department of Computer Science, University of Verona, Italy. From 2000 to 2006, and again since 2009, he has been with BUTE. He was an Associate Editor for IEEE Signal Processing Letters in 2013–2016 and for IEEE Signal Processing Magazine in 2018–2022, and the lead Guest Editor of the 2022 JAES special issue “Audio Filter Design”. His research interests include physics-based sound synthesis and filter design for audio applications.

Tutorial #3 – Building Flexible Audio DDSP Pipelines: A Case Study on Artificial Reverb


Gloria Dal Santo (Aalto University)

This tutorial focuses on Differentiable Digital Signal Processing (DDSP) for audio synthesis, an approach that applies automatic differentiation to digital signal processing operations. By implementing signal models in a differentiable manner, it becomes possible to backpropagate loss gradients through their parameters, enabling data-driven optimization without losing domain knowledge.
DDSP has gained popularity due to its domain-appropriate inductive biases, yet it still presents several challenges. The parameters of differentiable models are often constrained by stability conditions, affected by non-uniqueness issues, and may belong to different domains and distributions, making optimization nontrivial.
This tutorial provides an overview of these limitations and introduces FLAMO, a library designed to facilitate more flexible training pipelines. A key focus will be on loss functions: how to select appropriate ones, insights from perceptually informed losses, and techniques for validating them.
Demonstrations will use FLAMO, an open-source Python library built on PyTorch’s automatic differentiation framework. Practical examples will primarily centre on recursive systems for artificial reverberation applications.
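For readers unfamiliar with the basic DDSP mechanism, the sketch below shows gradient-based optimisation of a single one-pole filter coefficient against a target magnitude response. It is written in plain PyTorch rather than with FLAMO's API, and the filter, target, and parameterisation are our own illustrative choices.

```python
import math
import torch

# Differentiable one-pole lowpass: H(z) = (1 - a) / (1 - a z^-1), with 0 < a < 1.
# The raw parameter p is mapped through a sigmoid so the pole stays inside the
# unit circle -- a toy version of the stability constraints mentioned above.
p = torch.zeros(1, requires_grad=True)
w = torch.linspace(0.0, math.pi, 512)          # frequency grid in rad/sample
zinv = torch.exp(-1j * w)                      # z^-1 evaluated on the unit circle

a_target = torch.tensor(0.9)                   # hypothetical target coefficient
H_target = (1 - a_target) / (1 - a_target * zinv)

opt = torch.optim.Adam([p], lr=0.05)
for step in range(500):
    a = torch.sigmoid(p)                       # stable pole in (0, 1)
    H = (1 - a) / (1 - a * zinv)
    loss = torch.mean((H.abs() - H_target.abs()) ** 2)   # simple magnitude loss
    opt.zero_grad()
    loss.backward()                            # gradients flow through the DSP model
    opt.step()

print(float(torch.sigmoid(p)))                 # should approach 0.9
```

FLAMO builds on this same automatic differentiation mechanism, but packages it for much larger recursive structures such as those used for artificial reverberation, together with the loss functions discussed in the tutorial.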

Gloria Dal Santo received the M.Sc. degree in electrical and electronic engineering from the École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, in 2022; during her studies she interned with the Audio Machine Learning team at Logitech.
She is currently working toward a doctoral degree with the Acoustics Lab at Aalto University, Espoo, Finland. Her research interests include artificial reverberation and audio applications of machine learning, with a focus on designing more robust and psychoacoustically informed systems.

Tutorial #4 – Plausible Editing of our Acoustic Environment


Annika Neidhardt (Institute of Sound Recording, University of Surrey)

The technology and methods for creating spatial auditory illusions have advanced to the point where we can create illusions that can no longer be distinguished from reality. So far, however, such convincing quality can only be achieved with accurate knowledge of the target environment, obtained from measurements or detailed modelling. Rendering virtual content into previously unknown environments remains a challenge and requires quick, automatic characterisation of their acoustic properties. What information do we need to extract to render convincing illusions? Moreover, to what extent can we get creative in manipulating the appearance of the actual environment without compromising its plausibility and vividness? This tutorial will give insight into the perceptual requirements for rendering audio for Augmented and Extended Reality.

Annika Neidhardt is a Senior Research Fellow in Immersive Audio at the University of Surrey and has been actively researching related topics for more than 10 years. She holds an MSc in Electrical Engineering (Automation & Robotics) from Technische Universität Chemnitz and an MSc in Audio Engineering (Computermusic & Multimedia) from the University of Music and Performing Arts Graz. After three years in advanced development and applied science, in 2017 she started her own research project at Technische Universität Ilmenau, in the group of Karlheinz Brandenburg, on 6DoF binaural audio and the related perceptual requirements and evaluation. She defended her PhD thesis on the plausibility of simplified room acoustic representations in Augmented Reality in May 2023. In addition, she has conducted research on the automatic characterisation of acoustic environments and on the perceptual implications for audio in Social VR and XR. Since autumn 2023, Annika has continued her research at the Institute of Sound Recording in Surrey, with a stronger focus on room acoustic modelling and perceptual modelling.