Instruction-Following Pruning for Large Language Models
Authors: Bairu Hou†**, Qibin Chen, Jianyu Wang, Guoli Yin, Chong Wang, Nan Du, Ruoming Pang, Shiyu Chang†, Tao Lei
With the rapid scaling of large language models (LLMs), structured pruning has become a widely used technique for deriving efficient, smaller models from larger ones, delivering superior performance compared to training similarly sized models from scratch. In this paper, we move beyond the traditional static pruning approach of determining a fixed pruning mask for a model, and propose a dynamic approach to structured pruning. In our method, the pruning mask is input-dependent and adapts dynamically based on the information described in a user instruction. Our approach, termed “instruction-following pruning”, introduces a sparse mask predictor that takes the user instruction as input and dynamically selects the most relevant model parameters for the given task. To identify and activate effective parameters, we jointly optimize the sparse mask predictor and the LLM, leveraging both instruction-following data and the pre-training corpus. Our method shares the same spirit as Mixture-of-Experts (MoE) by dynamically activating a subset of parameters, but is designed to work well for on-device inference. Specifically, by selecting and fixing the parameters for each user-specified task, our method significantly reduces the weight-loading cost and makes decoding as efficient as a small-scale dense model. Experimental results confirm the effectiveness of our approach on a wide range of evaluation benchmarks. For example, our 3B activated model improves over the 3B dense model by 5–8 absolute points on domains such as math and coding, and rivals the performance of a 9B model. It also offers significantly better inference efficiency than the 9B model and than MoE models with a similar number of activated parameters.
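The core mechanism described above, a predictor that maps a user instruction to a structured pruning mask which is then fixed for the whole task, can be sketched in a few lines. This is an illustrative toy in NumPy, not the paper's actual architecture: the single feed-forward layer, the linear mask predictor `W_pred`, and the function names are all hypothetical stand-ins; the paper jointly trains the predictor with the LLM, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_hidden, k = 16, 64, 32  # k = number of hidden channels to activate

# Hypothetical weights: one feed-forward layer and a linear mask predictor.
W_in = rng.standard_normal((d_hidden, d_model))
W_out = rng.standard_normal((d_model, d_hidden))
W_pred = rng.standard_normal((d_hidden, d_model))  # toy stand-in for the mask predictor

def predict_mask(instr_emb: np.ndarray, k: int) -> np.ndarray:
    """Score each hidden channel from the instruction embedding and keep the
    top-k, yielding a hard, input-dependent structured-pruning mask."""
    scores = W_pred @ instr_emb
    mask = np.zeros(d_hidden)
    mask[np.argsort(scores)[-k:]] = 1.0
    return mask

def pruned_ffn(x: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Feed-forward pass with masked hidden channels. Because the mask is
    fixed for the whole task, the pruned rows of W_in and columns of W_out
    never need to be loaded, which is what makes decoding as cheap as a
    small dense model."""
    h = np.maximum(W_in @ x, 0.0) * mask  # ReLU, then zero out pruned channels
    return W_out @ h

instr_emb = rng.standard_normal(d_model)  # embedding of the user instruction
mask = predict_mask(instr_emb, k)         # chosen once per task
x = rng.standard_normal(d_model)          # a token representation
y = pruned_ffn(x, mask)
print(int(mask.sum()), y.shape)           # k activated channels; output stays d_model
```

The key design point the sketch tries to surface: unlike token-level MoE routing, the mask depends only on the instruction, so parameter selection happens once per task and the activated subnetwork behaves like a static dense model during decoding.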
July 24, 2023 · Research areas: Methods and Algorithms; Tools, Platforms, Frameworks · Conferences: ICML, NeurIPS