IEEE ICASSP 2022

2022 IEEE International Conference on Acoustics, Speech and Signal Processing

7-13 May 2022
  • Virtual (all paper presentations)
22-27 May 2022
  • Main Venue: Marina Bay Sands Expo & Convention Center, Singapore
  • Satellite Venue: Shenzhen, China (Postponed)

ICASSP 2022
Signal Processing Cup
Tue, 10 May, 18:00 - 20:00 Singapore Time (UTC +8)
Tue, 10 May, 12:00 - 14:00 France Time (UTC +2)
Tue, 10 May, 10:00 - 12:00 UTC
Tue, 10 May, 06:00 - 08:00 New York Time (UTC -4)
Location: Zoom Track II (Virtual, Live-Stream)

Synthetic Speech Attribution

Finalist Teams

Team "Synthesizer"

Bangladesh University of Engineering and Technology

Supervisor: Dr. Shaikh Anowarul Fattah

Students: Bishmoy Paul, Md Awsafur Rahman, Najibul Haque Sarker, Zaber Ibn Abdul Hakim

Team "Students Procrastinating"

Bangladesh University of Engineering and Technology

Supervisor: Tahsina Farah Sanam

Tutor: Naima Tasnim

Students: Farsia Kawsar Chwodhury, Imtiaz Ahmed, Md Boktiar Mahbub Murad, Md. Fahim Abid, Sayonto Khan, Swojan Datta, Tahsin Saad Chowdhury, Tasnim Nishat Islam, Utsab Saha, Voktho Das

Team "IITH"

Indian Institute of Technology, Hyderabad

Supervisor: Sri Rama Murty Kodukula

Tutor: Sreekanth Sankala

Students: Chaitanya Varun Sahukari, Muhammed Fayis PV, Pranav Kumar Kota, Uday Kiran Reddy Tadipatri


The IEEE Signal Processing Society is proud to announce the ninth edition of the Signal Processing Cup: a forensic challenge related to synthetic speech attribution.

Goal

The possibility of manipulating digital multimedia objects is within everyone's reach. For instance, fake synthetic speech audio tracks can be generated through a wide variety of available methods, ranging from simple cut-and-paste techniques to complex neural networks. The goal of the challenge is to design and develop a system for synthetic speech attribution: given an audio recording containing synthetically generated speech, the system must identify which method, from a list of candidates, was used to synthesize it.

Eligibility

Any team composed of one faculty member, at most one graduate student, and 3 to 10 undergraduate students is welcome to join the open competition. At least 3 students must be IEEE Student Members.

Dataset

A dataset containing audio speech tracks generated with different speech synthesis techniques will be distributed to the participants.

Prize

The three teams with the highest performance in the open competition will be selected as finalists and invited to participate in the final competition at ICASSP 2022. The champion team will receive a grand prize of $5,000. The first and second runners-up will receive prizes of $2,500 and $1,500, respectively, in addition to travel grants and complimentary conference registrations.

Important Dates

January 7, 2022: Competition webpage, Piazza forum, and info available
January 15, 2022: Dataset available
March 15, 2022: Team registration deadline
March 31, 2022: Final team submission
April 7, 2022: Finalists announced
May 22-27, 2022: Final competition at ICASSP 2022

Additional Information

The challenge description is available at this link.

General information and resources are available on Piazza at the following link:
https://piazza.com/ieee_sps/spring2022/spcup2022
To set up a free account, use the access code "spcup2022" to join the "SPCUP 2022: IEEE Signal Processing Cup 2022" class as a student.

Organizers

The challenge is organized as a joint effort between the Image and Sound Processing Lab (ISPL) of the Politecnico di Milano (Milan, Italy) and the Multimedia and Information Security Lab (MISL) of Drexel University (Philadelphia, USA).

The ISPL team is represented by Dr. Paolo Bestagini (Assistant Professor), Dr. Fabio Antonacci (Assistant Professor), Clara Borrelli (Ph.D. Student) and Davide Salvi (Ph.D. Student).

The MISL team is represented by its founder Dr. Matthew C. Stamm (Associate Professor) and Brian Hosler (Ph.D. Student).

Sponsor

This competition is sponsored by the IEEE Signal Processing Society and MathWorks.