What Do Your Logits Know? (The Answer May Surprise You!)
Authors: Masha Fedzechkina, Eleonora Gualdoni, Rita Ramos, Sinead Williamson
Recent work has shown that probing model internals can reveal a wealth of information not apparent from the model's generations. This poses the risk of unintentional or malicious information leakage, where model users are able to learn information that the model owner assumed was inaccessible. Using vision-language models as a testbed, we present the first systematic comparison of information retained at different "representational levels" as it is compressed from the rich information encoded in the residual stream through two natural bottlenecks: low-dimensional projections of the residual stream obtained using tuned lens, and the final top-k logits most likely to impact the model's answer. We show that even easily accessible bottlenecks defined by the model's top logit values can leak task-irrelevant information present in an image-based query, in some cases revealing as much information as direct projections of the full residual stream.
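The top-k logit bottleneck described above can be illustrated with a small sketch (my own toy example, not the paper's code): an adversary who only sees the k largest logits still observes their indices and relative magnitudes, which is the channel through which task-irrelevant information can leak.

```python
import math

def top_k_logits(logits, k):
    """Return the k largest logits with their vocabulary indices,
    mimicking the top-k slice that many generation APIs expose."""
    indexed = sorted(enumerate(logits), key=lambda pair: pair[1], reverse=True)
    return indexed[:k]

def softmax(values):
    """Numerically stable softmax over a list of scores."""
    m = max(values)
    exps = [math.exp(v - m) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

# Toy "vocabulary" logits: the full vector carries more information
# than the top-k slice, but the slice still reveals which tokens the
# model ranked highest and by how much.
logits = [2.0, 0.5, -1.0, 1.5, 0.1]
top2 = top_k_logits(logits, 2)          # [(0, 2.0), (3, 1.5)]
renormalized = softmax([v for _, v in top2])
```

Even after renormalizing only the exposed slice, the gap between the two surviving logits (2.0 vs. 1.5) is preserved, which is exactly the kind of residual signal a probe can exploit.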
M2R2: Mixture of Multi-Rate Residuals for Efficient Transformer Inference
March 5, 2025 · research area: Speech and Natural Language Processing · conference: ICLR
Residual transformations enhance the representational depth and expressive power of large language models (LLMs). However, applying static residual transformations across all tokens in auto-regressive generation leads to a suboptimal trade-off between inference efficiency and generation fidelity. Existing methods, including Early Exiting, Skip Decoding, and Mixture-of-Depths, address this by modulating the residual transformation based on…
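A generic confidence-based exit rule of the kind the abstract alludes to can be sketched as follows (an illustrative sketch of Early Exiting in general, not M2R2's specific mechanism): a token stops passing through further residual transformations once its intermediate prediction is already confident.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of scores."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def should_exit_early(intermediate_logits, threshold=0.9):
    """Confidence-based early-exit rule (illustrative only): skip the
    remaining residual transformations for this token when the
    intermediate prediction's top probability exceeds the threshold."""
    return max(softmax(intermediate_logits)) >= threshold

# A sharply peaked intermediate prediction exits early;
# a flat, uncertain one continues through the remaining layers.
peaked = should_exit_early([5.0, 0.0, 0.0])   # high confidence -> exit
flat = should_exit_early([0.1, 0.0, 0.0])     # low confidence -> keep going
```

The trade-off the abstract highlights is visible here: a lower threshold saves more compute but exits tokens whose predictions may still change in later layers.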
A Survey on Privacy from Statistical, Information and Estimation-Theoretic Views
September 21, 2021 · research area: Privacy · venue: IEEE BITS the Information Theory Magazine
Privacy risk has become an emerging challenge in both information theory and computer science due to the massive (centralized) collection of user data. In this paper, we review privacy-preserving mechanisms and metrics through the lens of information theory, and unify different privacy metrics, including f-divergences, Rényi divergences, and differential privacy, via the likelihood ratio (and its logarithm). We introduce…
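The unifying role of the likelihood ratio mentioned in the abstract can be made concrete with a short sketch (my own illustration, not the survey's code): the Rényi divergence of order α is built from moments of the ratio p/q, and its α → ∞ limit is the worst-case log-likelihood ratio, i.e. the ε bound of pure differential privacy.

```python
import math

def renyi_divergence(p, q, alpha):
    """Rényi divergence D_alpha(P || Q) for discrete distributions,
    alpha > 0 and alpha != 1, expressed via the likelihood ratio p/q."""
    total = sum(pi * (pi / qi) ** (alpha - 1) for pi, qi in zip(p, q) if pi > 0)
    return math.log(total) / (alpha - 1)

def max_divergence(p, q):
    """alpha -> infinity limit: the worst-case log-likelihood ratio,
    which is the epsilon guarantee in pure differential privacy."""
    return max(math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Two neighboring output distributions of a hypothetical mechanism.
p = [0.5, 0.5]
q = [0.6, 0.4]
d2 = renyi_divergence(p, q, 2.0)   # order-2 Rényi divergence
eps = max_divergence(p, q)          # log(0.5 / 0.4) = log(1.25)
```

Because the Rényi divergence is non-decreasing in α and bounded above by the max-divergence, sweeping α interpolates between average-case (KL-like) and worst-case (DP-like) privacy measures, which is the unification the survey describes.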