Speech De-identification with Deep Neural Networks

Keywords: speech processing, voice conversion, deep neural network, text-to-speech, speaker privacy

Abstract

Cloud-based speech services are powerful practical tools, but sending speakers' recordings over the Internet raises important legal and privacy concerns. We propose a deep neural network solution that removes personal characteristics from human speech by converting it to the voice of a Text-to-Speech (TTS) system before sending the utterance to the cloud. The network learns to transcode sequences of vocoder parameters, together with their delta and delta-delta features, from human speech to those of the TTS engine. We evaluated several TTS systems, vocoders and audio alignment techniques. We measured the performance of our method by (i) comparing the result of speech recognition on the de-identified utterances with the original texts, (ii) computing the Mel-Cepstral Distortion of the aligned TTS and the transcoded sequences, and (iii) questioning human participants in A-not-B, 2AFC and 6AFC tasks. Our approach achieves the de-identification level required by diverse applications.
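As a minimal illustration of two ingredients named in the abstract, the sketch below shows how delta and delta-delta features can be appended to a mel-cepstral sequence and how Mel-Cepstral Distortion (MCD) between two time-aligned sequences can be computed. The function names, frame shapes, the gradient-based delta approximation and the exclusion of the 0th (energy) coefficient are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def delta_features(mc: np.ndarray) -> np.ndarray:
    """Append first- and second-order time differences (delta, delta-delta)
    to a (frames, coeffs) mel-cepstral sequence (illustrative approximation)."""
    d1 = np.gradient(mc, axis=0)   # delta
    d2 = np.gradient(d1, axis=0)   # delta-delta
    return np.concatenate([mc, d1, d2], axis=1)

def mel_cepstral_distortion(mc_ref: np.ndarray, mc_conv: np.ndarray) -> float:
    """Mean MCD in dB between two time-aligned (frames, coeffs) sequences.
    The 0th (energy) coefficient is excluded, a common convention."""
    assert mc_ref.shape == mc_conv.shape
    diff = mc_ref[:, 1:] - mc_conv[:, 1:]
    per_frame = (10.0 / np.log(10.0)) * np.sqrt(2.0 * np.sum(diff ** 2, axis=1))
    return float(np.mean(per_frame))

# Toy usage: random sequences stand in for aligned TTS and transcoded frames.
rng = np.random.default_rng(0)
tts_mc = rng.standard_normal((100, 25))
converted_mc = tts_mc + 0.05 * rng.standard_normal((100, 25))
print(delta_features(tts_mc).shape)                 # (100, 75)
print(mel_cepstral_distortion(tts_mc, converted_mc))  # MCD in dB
```

In practice the two sequences would first be time-aligned (e.g., with dynamic time warping) before the per-frame distortion is averaged, matching the "aligned TTS and transcoded sequences" wording in the abstract.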

Published
2021-12-07
How to Cite
Fodor, Á., Kopácsi, L., Milacski, Z. Á., & Lőrincz, A. (2021). Speech De-identification with Deep Neural Networks. Acta Cybernetica, 25(2), 257-269. https://doi.org/10.14232/actacyb.288282
Section
Special Issue of the 12th Conference of PhD Students in Computer Science