Frame-level SpecAugment for Deep Convolutional Neural Networks in Hybrid ASR Systems
Authors: Xinwei Li, Yuanyuan Zhang, Xiaodan Zhuang, Daben Liu
We propose a frame-level SpecAugment method (f-SpecAugment) to improve the performance of deep convolutional neural networks (CNN) for hybrid HMM based ASR systems. Like utterance-level SpecAugment, f-SpecAugment performs three transformations: time warping, frequency masking, and time masking. Instead of applying the transformations at the utterance level, f-SpecAugment applies them to each convolution window independently during training. We demonstrate that f-SpecAugment is more effective than utterance-level SpecAugment for deep CNN based hybrid models. We evaluate the proposed f-SpecAugment on 50-layer Self-Normalizing Deep CNN (SNDCNN) acoustic models trained with up to 25,000 hours of training data. We observe that f-SpecAugment reduces WER by 0.5-4.5% relative across different ASR tasks for four languages. As the benefits of augmentation techniques tend to diminish as training data size increases, the large-scale training reported here is important in understanding the effectiveness of f-SpecAugment. Our experiments demonstrate that even with 25,000 hours of training data, f-SpecAugment is still effective. We also demonstrate that f-SpecAugment has benefits approximately equivalent to doubling the amount of training data for deep CNNs.
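The per-window masking idea can be illustrated with a short sketch. The function below applies frequency and time masking to a single convolution window of a log-mel spectrogram; time warping is omitted for brevity, and the mask sizes and function name are illustrative assumptions, not the paper's settings. The key difference from utterance-level SpecAugment is that this would be called once per window rather than once per utterance.

```python
import numpy as np

def f_specaugment_window(window, max_freq_mask=8, max_time_mask=4, rng=None):
    """Frequency + time masking for ONE convolution window.

    window: 2-D array of shape (time_frames, mel_bins), i.e. the local
    patch of the log-mel spectrogram seen by a CNN layer.
    Mask-size defaults are illustrative, not the paper's hyperparameters.
    """
    rng = rng or np.random.default_rng()
    t, f = window.shape
    out = window.copy()

    # Frequency masking: zero out a random band of mel bins.
    fw = int(rng.integers(0, max_freq_mask + 1))
    f0 = int(rng.integers(0, max(f - fw, 0) + 1))
    out[:, f0:f0 + fw] = 0.0

    # Time masking: zero out a random run of frames within the window.
    tw = int(rng.integers(0, max_time_mask + 1))
    t0 = int(rng.integers(0, max(t - tw, 0) + 1))
    out[t0:t0 + tw, :] = 0.0
    return out
```

Applying the masks independently per window gives each training example many different local corruptions, which is what makes the augmentation stronger than a single utterance-level mask.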
The Communication Complexity of Distributed Estimation
December 17, 2025 · Research area: Methods and Algorithms
We study an extension of the standard two-party communication model in which Alice and Bob hold probability distributions p and q over domains X and Y, respectively. Their goal is to estimate
E_{x~p, y~q}[f(x, y)]
to within additive error ε for a bounded function f : X × Y → R, known to both parties. We refer to this as the distributed estimation problem. Special cases of this problem arise in a variety of areas…
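To make the estimated quantity concrete, the sketch below computes a plain Monte Carlo estimate of E_{x~p, y~q}[f(x, y)] when both distributions are available locally. This ignores the communication constraint entirely (the paper's subject is protocols where Alice and Bob each see only their own distribution); the function name and dict-based representation are assumptions for illustration.

```python
import random

def estimate_expectation(p, q, f, n_samples=10000, rng=None):
    """Monte Carlo estimate of E_{x~p, y~q}[f(x, y)].

    p, q: dicts mapping outcomes to probabilities (Alice's and Bob's
    distributions); f: a bounded function known to both parties.
    This only illustrates the target quantity -- it is NOT one of the
    paper's communication-efficient protocols.
    """
    rng = rng or random.Random()
    xs = rng.choices(list(p), weights=list(p.values()), k=n_samples)
    ys = rng.choices(list(q), weights=list(q.values()), k=n_samples)
    return sum(f(x, y) for x, y in zip(xs, ys)) / n_samples
```

In the distributed setting, the difficulty is achieving additive error ε while exchanging far fewer bits than it would take to send p or q outright.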
Faster Differentially Private Samplers via Rényi Divergence Analysis of Discretized Langevin MCMC
December 2, 2020 · Research area: Privacy · Conference: NeurIPS
Various differentially private algorithms instantiate the exponential mechanism, and require sampling from the distribution π(x) ∝ exp(−f(x)) for a suitable function f. When the domain of the distribution is high-dimensional, this sampling can be computationally challenging. Using heuristic sampling schemes such as Gibbs sampling does not necessarily lead to provable privacy. When f is convex, techniques from log-concave sampling lead to…
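The discretized Langevin dynamics the title refers to can be sketched as an unadjusted Langevin chain targeting π(x) ∝ exp(−f(x)). The step size, iteration count, and function name below are illustrative assumptions; they are not tuned for any privacy guarantee, and the paper's contribution is the Rényi-divergence analysis of such chains, not this basic recursion.

```python
import numpy as np

def langevin_sample(grad_f, x0, step=0.01, n_steps=1000, rng=None):
    """Unadjusted Langevin algorithm: approximate sampling from
    pi(x) proportional to exp(-f(x)).

    grad_f: gradient of the potential f.
    Each update: x <- x - step * grad_f(x) + sqrt(2 * step) * Gaussian noise.
    Step size / iteration count here are illustrative, not tuned.
    """
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - step * grad_f(x) + np.sqrt(2.0 * step) * noise
    return x
```

For example, with f(x) = x²/2 (so grad_f(x) = x), the chain approximately samples from a standard Gaussian; the discretization introduces a small bias that shrinks with the step size.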