Deep Synthesizer Parameter Estimation

Oren Barkan, David Tsiris

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Manual tuning of synthesizer parameters to match a specific sound can be an exhausting task. This paper proposes an automatic method for tuning synthesizer parameters to match a given input sound. The method is based on strided Convolutional Neural Networks and is capable of inferring the synthesizer parameter configuration from the input spectrogram and even from the raw audio. The effectiveness of our method is demonstrated on a subtractive synthesizer with frequency modulation. We present experimental results that showcase the superiority of our model over several baselines. We further show that network depth is an important factor contributing to prediction accuracy.
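
The abstract describes a strided CNN that maps an input spectrogram to a synthesizer parameter configuration. The paper defines the exact architecture; the sketch below is only an illustrative approximation in PyTorch, where the layer counts, channel widths, the number of synthesizer parameters (n_params), and the sigmoid output range are all assumptions rather than details taken from the paper.

```python
# Illustrative sketch only: layer counts, channel sizes, and n_params are
# assumptions, not the architecture reported in the paper.
import torch
import torch.nn as nn

class SynthParamEstimator(nn.Module):
    """Strided CNN mapping a (1 x freq x time) spectrogram to synthesizer
    parameters assumed to be normalized to [0, 1]."""
    def __init__(self, n_params: int = 16):
        super().__init__()
        self.features = nn.Sequential(
            # Strided convolutions downsample the spectrogram instead of pooling.
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.Conv2d(128, 256, kernel_size=3, stride=2, padding=1), nn.BatchNorm2d(256), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse the remaining frequency/time grid
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256, n_params),
            nn.Sigmoid(),  # assumed normalized parameter range
        )

    def forward(self, spectrogram: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(spectrogram))

# Example: a batch of 8 spectrograms with 128 frequency bins and 256 frames.
model = SynthParamEstimator(n_params=16)
params = model(torch.randn(8, 1, 128, 256))  # -> shape (8, 16)
```

A regression loss (e.g. mean squared error against the ground-truth parameter settings used to render each training sound) would be one natural way to train such a model, though the paper's own training objective should be consulted for the actual method.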

Original language: English
Title of host publication: 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 3887-3891
Number of pages: 5
ISBN (Electronic): 9781479981311
DOIs
State: Published - May 2019
Externally published: Yes
Event: 44th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019 - Brighton, United Kingdom
Duration: 12 May 2019 - 17 May 2019

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2019-May
ISSN (Print): 1520-6149

Conference

Conference: 44th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2019
Country/Territory: United Kingdom
City: Brighton
Period: 12/05/19 - 17/05/19

Bibliographical note

Publisher Copyright:
© 2019 IEEE.

Keywords

  • deep parameter estimation
  • deep sound synthesis
