
In this paper, we address the problem of noise-robust multiple-input multiple-output (MIMO) adaptive filtering that is optimal in the least-squares sense, with application to multichannel acoustic echo cancellation. We formulate the problem as minimization of a multichannel least squares cost function that incorporates near-end speech and noise statistics, resulting in a novel noise-robust framework for MIMO adaptive filtering. Although the issue of numerical stability has been widely explored in the context of recursive least squares (RLS) filtering, a rigorous mathematical treatment of the MIMO case in the context of numerically stable noise-robust multichannel echo cancellation remains absent. Guided by quantization-error modeling, we resolve the issue of numerical instability in our noise-robust scheme by utilizing transversal RLS filtering of Type 2. Thereafter, an explicit derivation of its inverse QR-decomposition (IQRD) counterpart based on Givens rotations is presented. We also derive computationally efficient lattice forms of our noise-robust RLS Type-2 and IQRD algorithms. It is highlighted that propagation of angle-normalized errors occurs naturally within the numerically stable least squares lattice (LSL). Thus, our approach combines the four sought-after attributes of a multichannel echo cancellation scheme: computational efficiency, numerical stability, fast convergence and tracking, and robustness against noise. We analyze our formulations using simulations in terms of convergence, re-convergence, robustness in the presence of double-talk, and numerical stability.
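For readers unfamiliar with RLS-based echo cancellation, the sketch below shows a conventional single-channel, exponentially weighted transversal RLS filter identifying an echo path. This is only an illustrative baseline under simplifying assumptions (single channel, no near-end noise model); it is not the paper's noise-robust MIMO Type-2, IQRD, or lattice algorithm, and the function name and parameters are hypothetical.

```python
# Illustrative single-channel RLS echo canceller (standard algorithm, not the
# paper's noise-robust MIMO variant). All names here are for illustration only.
import numpy as np

def rls_echo_canceller(far_end, mic, order=8, lam=0.99, delta=1e2):
    """Estimate an echo path with exponentially weighted RLS.

    far_end : far-end (loudspeaker) signal, shape (N,)
    mic     : microphone signal containing the echo, shape (N,)
    Returns the estimated filter taps and the residual-echo (error) signal.
    """
    w = np.zeros(order)            # adaptive filter taps
    P = np.eye(order) / delta      # inverse correlation matrix estimate
    x = np.zeros(order)            # tapped delay line of the far-end signal
    err = np.zeros(len(mic))
    for n in range(len(mic)):
        x = np.roll(x, 1)
        x[0] = far_end[n]
        k = P @ x / (lam + x @ P @ x)   # gain vector
        err[n] = mic[n] - w @ x         # a priori error (residual echo)
        w = w + k * err[n]              # tap update
        P = (P - np.outer(k, x @ P)) / lam
    return w, err
```

In a noiseless simulation where the microphone signal is the far-end signal convolved with a short impulse response, the estimated taps converge to that impulse response and the residual echo decays toward zero; the paper's contribution lies in keeping this behavior numerically stable and robust when near-end speech and noise are present.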

Related readings and updates.

Robust Multichannel Linear Prediction for Online Speech Dereverberation Using Weighted Householder Least Squares Lattice Adaptive Filter

Speech dereverberation has been an important component of effective far-field voice interfaces in many applications. Algorithms based on multichannel linear prediction (MCLP) have been shown to be especially effective for blind speech dereverberation and numerous variants have been introduced in the literature. Most of these approaches can be derived from a common framework, where the MCLP problem for speech dereverberation is formulated as a…
See paper details

Least Squares Binary Quantization of Neural Networks

Quantizing the weights and activations of deep neural networks results in significant improvements in inference efficiency at the cost of lower accuracy. A source of the accuracy gap between full-precision and quantized models is the quantization error. In this work, we focus on binary quantization, in which values are mapped to -1 and 1. We provide a unified framework to analyze different scaling strategies. Inspired by the Pareto-optimality of…
See paper details
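As background for the binary-quantization teaser above: for a single scaling factor per tensor, minimizing the quantization error ||w - ab||² over a scale a and signs b in {-1, +1} has a well-known closed form, sketched below. This is only the textbook single-scale case, not the paper's full framework of scaling strategies, and the function name is hypothetical.

```python
# Least-squares binary quantization with one scale per tensor:
# minimize ||w - a*b||_2 over scalar a and b in {-1, +1}^n.
# The optimum is b = sign(w) and a = mean(|w|) (closed form).
import numpy as np

def binary_quantize(w):
    b = np.where(w >= 0, 1.0, -1.0)  # optimal sign pattern
    a = np.abs(w).mean()             # optimal least-squares scale
    return a, b
```

Richer strategies (e.g., per-channel or multi-bit scaling) reduce the residual ||w - ab||² further, which is the kind of trade-off the referenced paper analyzes.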