On Information Geometry and Iterative Optimization in Model Compression: Operator Factorization
Authors: Zakhar Shumaylov†‡, Vasileios Tsiaras, Yannis Stylianou
The ever-increasing parameter counts of deep learning models necessitate effective compression techniques for deployment on resource-constrained devices. This paper applies information geometry, the study of density-induced metrics on parameter spaces, to analyze existing model compression methods, with a primary focus on operator factorization. This perspective highlights the core challenge: defining an optimal low-compute submanifold (or subset) and projecting onto it. We argue that many successful model compression approaches can be understood as implicitly approximating information divergences for this projection. We highlight that when compressing a pre-trained model, using information divergences is paramount for achieving improved zero-shot accuracy, yet this may no longer be the case when the model is fine-tuned. In such scenarios, the trainability of bottlenecked models turns out to be far more important for achieving high compression ratios with minimal performance degradation, necessitating the adoption of iterative methods. In this context, we prove convergence of iterative singular value thresholding for training neural networks subject to a soft rank constraint. To further illustrate the utility of this perspective, we show how simple modifications to existing methods, via softer rank reduction, improve performance at fixed compression rates.
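As a rough illustration of the iterative scheme the abstract refers to, the sketch below shows soft singular value thresholding, the proximal operator of a nuclear-norm (soft rank) penalty, interleaved with gradient steps in a proximal-gradient style. This is a minimal NumPy sketch of the general technique, not the paper's implementation; the function names `soft_svt` and `train_step` and the hyperparameters `lr` and `tau` are hypothetical.

```python
import numpy as np

def soft_svt(W, tau):
    """Soft singular value thresholding: shrink each singular value by tau.

    This is the proximal operator of tau * ||W||_* (the nuclear norm),
    which acts as a soft rank constraint on the weight matrix W.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)  # shrink and threshold singular values
    return (U * s_shrunk) @ Vt           # rebuild W with shrunken spectrum

def train_step(W, grad_W, lr=1e-2, tau=1e-3):
    """One hypothetical proximal-gradient step on a single weight matrix."""
    W = W - lr * grad_W        # gradient step on the task loss
    W = soft_svt(W, lr * tau)  # proximal step enforcing the soft rank penalty
    return W
```

Repeating such steps drives small singular values to zero, so the effective rank of each weight matrix decreases gradually during training rather than being truncated once post hoc, which is one way to read the abstract's contrast between one-shot projection and iterative, "softer" rank reduction.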