
Very deep CNNs achieve state-of-the-art results in both computer vision and speech recognition, but they are difficult to train. The most popular way to train very deep CNNs is to use shortcut connections (SC) together with batch normalization (BN). Inspired by Self-Normalizing Neural Networks, we propose the self-normalizing deep CNN (SNDCNN) acoustic model topology: we remove the SC and BN from ResNet-50 and replace the typical ReLU activations with scaled exponential linear units (SELU). SELU activations make the network self-normalizing and remove the need for both shortcut connections and batch normalization. Compared to ResNet-50, we achieve the same or a lower (up to 4.5% relative) word error rate (WER) while speeding up both training and inference by 60%-80%. We also explore other model inference optimization schemes to further reduce latency for production use.
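To make the topology change concrete, here is a minimal sketch of a ResNet-style convolutional block and its SNDCNN counterpart. This is an illustration only, not the paper's implementation: the framework (PyTorch), class names, and layer sizes are assumptions; the paper specifies only that batch norm and the shortcut addition are dropped and ReLU is replaced by SELU.

```python
import torch
import torch.nn as nn

# SELU (Klambauer et al., 2017): selu(x) = lam * x for x > 0,
# else lam * alpha * (exp(x) - 1), with lam ~= 1.0507, alpha ~= 1.6733.
# torch.selu applies these fixed constants internally.

class ResNetBlock(nn.Module):
    """Standard ResNet-style block: conv -> BN -> ReLU, plus a shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)  # shortcut connection


class SNDCNNBlock(nn.Module):
    """SNDCNN-style block: no batch norm, no shortcut; ReLU -> SELU."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        # SELU keeps activations close to zero mean / unit variance,
        # which is what lets BN and the shortcut be removed.
        out = torch.selu(self.conv1(x))
        return torch.selu(self.conv2(out))
```

The SNDCNN block does strictly less work per layer (no normalization statistics, no residual add), which is consistent with the 60%-80% training and inference speedup reported in the abstract. Note that SELU networks are typically paired with LeCun-normal weight initialization to preserve the self-normalizing property.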

Related readings and updates.

Apple at ICASSP 2020

Apple sponsored the 45th International Conference on Acoustics, Speech, and Signal Processing (ICASSP) in May 2020. With a focus on signal processing and its applications, the conference took place virtually from May 4 to 8. Read Apple's accepted papers below.


Parametric Cepstral Mean Normalization for Robust Speech Recognition

This paper proposes a new channel normalization algorithm called parametric cepstral mean normalization (PCMN) to increase robustness of speech recognition to varying acoustic conditions. Rather than using a simple average of input speech features as channel estimate, as done in the traditional CMN, PCMN weighs the running average of input speech frames in a frequency-dependent manner. These weights are jointly optimized together with…
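The excerpt cuts off mid-sentence, but the contrast with conventional CMN can be sketched. The exact PCMN parameterization is not given here, so the last line below is one plausible reading of "weighs the running average in a frequency-dependent manner", not the paper's definitive formula:

```latex
% x_t \in \mathbb{R}^K: cepstral feature vector at frame t
% \mu_t: running average of input frames (channel estimate)
% w \in \mathbb{R}^K: learned frequency-dependent weights (assumed form)
\begin{aligned}
\mu_t        &= \beta\,\mu_{t-1} + (1-\beta)\,x_t   && \text{(running channel estimate)}\\
\hat{x}_t    &= x_t - \mu_t                         && \text{(conventional CMN)}\\
\hat{x}_t[k] &= x_t[k] - w[k]\,\mu_t[k]             && \text{(PCMN-style weighting)}
\end{aligned}
```

Per the excerpt, the weights are not hand-set but jointly optimized with the rest of the system; the truncated sentence does not say with what, so that detail is left open here.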