Least Squares Binary Quantization of Neural Networks
Authors: Hadi Pouransari, Zhucheng Tu, Oncel Tuzel
This paper was accepted at the Efficient Deep Learning in Computer Vision workshop at the CVPR 2020 conference.
Quantizing weights and activations of deep neural networks results in significant improvement in inference efficiency at the cost of lower accuracy. A source of the accuracy gap between full precision and quantized models is the quantization error. In this work, we focus on binary quantization, in which values are mapped to -1 and 1. We provide a unified framework to analyze different scaling strategies. Inspired by the Pareto-optimality of 2-bit versus 1-bit quantization, we introduce a novel 2-bit quantization with provably least squares error. Our quantization algorithms can be implemented efficiently on hardware using bitwise operations. We present proofs to show that our proposed methods are optimal, and also provide empirical error analysis. We conduct experiments on the ImageNet dataset and show a reduced accuracy gap when using the proposed least squares quantization algorithms.
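To illustrate the idea of scaled binary quantization described in the abstract, the NumPy sketch below quantizes a tensor to scaled ±1 values, using the standard least-squares result that the optimal scale for x ≈ v·sign(x) is the mean absolute value of x, and then shows a greedy residual scheme as one example of a 2-bit scaling strategy. This is an illustrative sketch with hypothetical function names, not the paper's exact algorithm; in particular, the paper derives a 2-bit quantizer whose scales are jointly optimal in the least-squares sense, which the greedy version below does not guarantee.

```python
import numpy as np

def binary_quantize(x):
    """Scaled 1-bit quantization x ~= v * sign(x).

    For the squared error ||x - v*sign(x)||^2, the minimizing scale is
    v = mean(|x|), a standard least-squares result for binary quantization.
    """
    b = np.sign(x)
    b[b == 0] = 1            # map zeros to +1 so codes are strictly in {-1, +1}
    v = np.mean(np.abs(x))   # least-squares optimal scaling factor
    return v * b

def greedy_2bit_quantize(x):
    """Greedy 2-bit quantization: binarize x, then binarize the residual.

    One common 2-bit scaling strategy; each scale is optimal for its own
    step, but the pair is not jointly optimal in general.
    """
    q1 = binary_quantize(x)
    q2 = binary_quantize(x - q1)
    return q1 + q2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=10_000).astype(np.float32)
    for name, q in [("1-bit", binary_quantize(w)),
                    ("2-bit (greedy)", greedy_2bit_quantize(w))]:
        mse = np.mean((w - q) ** 2)
        print(f"{name} quantization MSE: {mse:.4f}")
```

Running this on Gaussian-distributed weights shows the 2-bit variant substantially reducing the squared reconstruction error relative to 1-bit quantization, which is the gap the paper's jointly optimal 2-bit quantizer narrows further.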
Apple sponsored the Conference on Computer Vision and Pattern Recognition (CVPR), which took place virtually from June 14 - 19. CVPR is the premier annual international computer vision event.