Audiovisual Speech Synthesis using Tacotron2
Authors: Ahmed Hussen Abdelaziz*, Anushree Prasanna Kumar*, Chloe Seivwright, Gabriele Fanelli, Justin Binder, Yannis Stylianou, Sachin Kajarekar
Audiovisual speech synthesis is the problem of synthesizing a talking face while maximizing the coherence of the acoustic and visual speech. To solve this problem, we propose AVTacotron2, an end-to-end text-to-audiovisual speech synthesizer based on the Tacotron2 architecture. AVTacotron2 converts a sequence of phonemes into a sequence of acoustic features and the corresponding controllers of a face model. The output acoustic features are passed through a WaveRNN model to reconstruct the speech waveform, and the waveform together with the predicted facial controllers is used to generate the corresponding video of the talking face. As a baseline, we use a modular system in which acoustic speech is first synthesized from text using the traditional Tacotron2; the reconstructed speech then drives the controls of the face model through an independently trained audio-to-facial-animation neural network. We further condition both the end-to-end and modular approaches on emotion embeddings that encode the prosody required to generate emotional audiovisual speech. A comprehensive analysis shows that the end-to-end system synthesizes close to human-like audiovisual speech, with a mean opinion score (MOS) of 4.1, the same MOS obtained on ground truth generated from professionally recorded videos.
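To illustrate the end-to-end idea described above, the sketch below shows one way a Tacotron2-style decoder state could be projected to both an acoustic frame and face-model controllers, conditioned on an emotion embedding. This is a minimal, hypothetical sketch rather than the authors' implementation; the class name and all dimensions (N_MELS, N_FACE_CONTROLS, EMOTION_DIM, DECODER_DIM) are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (assumed, not the paper's code): a Tacotron2-style frame
# projection extended to emit facial controllers alongside acoustic frames.
import torch
import torch.nn as nn

N_MELS = 80            # acoustic features per frame (assumed)
N_FACE_CONTROLS = 51   # face-model controller values per frame (assumed)
EMOTION_DIM = 16       # size of the emotion embedding (assumed)
DECODER_DIM = 512      # decoder hidden size (assumed)


class AudiovisualFrameProjection(nn.Module):
    """Projects each decoder state to an acoustic frame plus facial controllers.

    In the modular baseline, the facial head would instead live in a separate
    audio-to-facial-animation network driven by the synthesized speech.
    """

    def __init__(self):
        super().__init__()
        in_dim = DECODER_DIM + EMOTION_DIM  # emotion conditions both streams
        self.mel_head = nn.Linear(in_dim, N_MELS)
        self.face_head = nn.Linear(in_dim, N_FACE_CONTROLS)
        self.stop_head = nn.Linear(in_dim, 1)  # end-of-utterance gate

    def forward(self, decoder_states, emotion_embedding):
        # decoder_states: (batch, time, DECODER_DIM)
        # emotion_embedding: (batch, EMOTION_DIM), broadcast over time
        emo = emotion_embedding.unsqueeze(1).expand(-1, decoder_states.size(1), -1)
        h = torch.cat([decoder_states, emo], dim=-1)
        mel = self.mel_head(h)    # acoustic frames -> WaveRNN
        face = self.face_head(h)  # controllers -> face-model renderer
        stop = torch.sigmoid(self.stop_head(h))
        return mel, face, stop


if __name__ == "__main__":
    proj = AudiovisualFrameProjection()
    states = torch.randn(2, 100, DECODER_DIM)   # dummy decoder states
    emotion = torch.randn(2, EMOTION_DIM)       # dummy emotion embeddings
    mel, face, stop = proj(states, emotion)
    print(mel.shape, face.shape, stop.shape)
```

Sharing a single decoder while splitting only the output heads is one plausible way to keep the acoustic and visual streams time-aligned, which is the coherence property the end-to-end approach targets.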