2022 IEEE International Conference on Acoustics, Speech and Signal Processing

7-13 May 2022
  • Virtual (all paper presentations)
22-27 May 2022
  • Main Venue: Marina Bay Sands Expo & Convention Center, Singapore
  • Satellite Venue: Shenzhen, China (Postponed)

IEP-1: When signal processing meets user experience: how to turn a regular user into an audio systems engineer in 60 seconds
Sun, 8 May, 20:00 - 20:45 Singapore Time (UTC +8)
Sun, 8 May, 14:00 - 14:45 France Time (UTC +2)
Sun, 8 May, 12:00 - 12:45 UTC
Sun, 8 May, 08:00 - 08:45 New York Time (UTC -4)
Location: Gather Area P
Presented by: Adib Mehrabi, Sonos, Inc

Digital audio signal processing technology is often implemented so that it requires little or no user interaction to function as intended. The user of a teleconferencing device or music playback system is often unaware of the acoustic echo cancellation, noise suppression, speech enhancement, or the various limiters and equalizers being applied to improve audio quality. Sometimes, however, the user is ideally placed to provide inputs or measurements that will improve the system's performance, or indeed enable it to perform at all. This is where DSP meets user experience, presenting new and sometimes challenging considerations for both the signal processing methods and the interaction design.

In this presentation I will discuss the development and design of a feature called Trueplay, which exists on all Sonos products today. Trueplay is a user-facing audio feature that adapts the sound of our speakers to the listening environment. Early in the design of Trueplay, it was recognised that in order to estimate how a loudspeaker sounds in a room, there is really no substitute for in-room measurements at multiple locations - including positions away from the speaker itself, beyond what on-board microphones can capture. This raised the question of whether a regular user could make a room-averaged acoustic measurement in their home - a task normally performed by acoustics or audio systems engineers - whilst maintaining the simplicity and quality of the Sonos user experience. I will discuss how the entire process - the measurement method, stimulus tones, user guidance, and feedback - was informed by balancing the objectives of premium sound quality with human-centric user design, to achieve not only a performant result but also a pleasant, perhaps even magical, end-user experience.
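The talk does not disclose how Trueplay actually processes its measurements. Purely as an illustration of the room-averaging idea described above, here is a minimal sketch, assuming measurements arrive as magnitude spectra captured at several positions in the room; the function names, the power-averaging choice, and the gain limits are all hypothetical, not Sonos's method.

```python
import numpy as np

def room_average_response(magnitude_spectra):
    """Power-average magnitude responses measured at several room positions.

    magnitude_spectra: sequence of per-position arrays, one linear-magnitude
    value per frequency band. Averaging power (magnitude squared) across
    positions smooths out position-dependent peaks and dips.
    """
    spectra = np.asarray(magnitude_spectra, dtype=float)
    return np.sqrt(np.mean(spectra ** 2, axis=0))

def correction_gains_db(avg_response, target_response,
                        max_boost_db=6.0, max_cut_db=12.0):
    """Per-band EQ gains (dB) pulling the averaged response toward a target.

    Gains are clipped so that a deep room null is not answered with an
    unbounded boost (the limits here are illustrative, not Trueplay's).
    """
    gains = 20.0 * np.log10(np.asarray(target_response) / np.asarray(avg_response))
    return np.clip(gains, -max_cut_db, max_boost_db)

# Toy usage: two measurement positions, two frequency bands, flat target.
avg = room_average_response([[1.0, 2.0], [3.0, 2.0]])
eq = correction_gains_db(avg, [2.0, 2.0])
```

Clipping the boost is the interesting design choice: a null measured at one listening position may not exist at another, so correcting it aggressively would colour the sound everywhere else - one reason averaging over multiple positions matters in the first place.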


Adib Mehrabi is a Senior Manager in the Advanced Technology group at Sonos, Inc., and an Honorary Lecturer at Queen Mary University of London, UK. He received his PhD from the Centre for Digital Music at Queen Mary University of London, and a BSc in Audio Engineering from the University of the West of England, UK. Prior to joining Sonos, Adib was Head of Research at Chirp - a company that developed audio signal processing and machine learning methods for transmitting data between devices using sound. He currently leads the Advanced Rendering and Immersive Audio research group at Sonos.