Machine Learning Research at Apple

Research highlights

A recent paper from Apple researchers, “The Super Weight in Large Language Models,” reveals that an extremely small subset of parameters in LLMs (in some cases, a single parameter) can exert a disproportionate influence on the model’s overall functionality (see Figure 1). This work highlights the critical role of these “super weights” and their corresponding “super activations,”…
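The core measurement idea — that removing one parameter can disproportionately change a model’s output — can be illustrated with a toy ablation experiment. This is only a sketch of the concept, not the paper’s method: the weight matrix, the dominant entry, and the impact metric below are all fabricated for illustration.

```python
import numpy as np

# Illustrative sketch only (not the paper's method): ablate each single
# weight in a toy linear layer and measure how much the output changes.
# We plant one artificially large entry to stand in for a "super weight".

rng = np.random.default_rng(0)
W = rng.normal(scale=0.02, size=(8, 8))   # typical small weights
W[3, 5] = 5.0                              # planted "super weight"
x = np.ones(8)                             # fixed input for determinism

baseline = W @ x

def ablate(W, i, j, x):
    """Return the layer output with the single weight (i, j) zeroed."""
    W2 = W.copy()
    W2[i, j] = 0.0
    return W2 @ x

# Output change caused by removing each individual weight.
impact = np.array([[np.linalg.norm(baseline - ablate(W, i, j, x))
                    for j in range(8)] for i in range(8)])

# The planted entry is recovered by its disproportionate ablation impact.
i, j = np.unravel_index(impact.argmax(), impact.shape)
```

In a real LLM the same measurement is taken against a quality metric such as perplexity rather than raw output norm, but the principle is the same: one coordinate dominates the ablation impact.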

Vision Language Models (VLMs) enable visual understanding alongside textual inputs. They are typically built by passing visual tokens from a pretrained vision encoder to a pretrained Large Language Model (LLM) through a projection layer. By leveraging the rich visual representations of the vision encoder and the world knowledge and reasoning capabilities of the LLM, VLMs can be useful for a wide range of applications, including accessibility…
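The wiring described above — visual tokens passed through a projection layer into the LLM’s embedding space — can be sketched with made-up dimensions. All sizes and weight matrices below are hypothetical stand-ins for a real vision encoder and LLM.

```python
import numpy as np

# Minimal sketch of the VLM architecture described above, with invented
# sizes: a linear projection maps vision-encoder tokens into the LLM's
# embedding space, and the result is prepended to the text embeddings.

rng = np.random.default_rng(0)

NUM_PATCHES, VISION_DIM = 16, 32    # hypothetical vision-encoder output
LLM_DIM = 64                        # hypothetical LLM embedding width
NUM_TEXT_TOKENS = 5

visual_tokens = rng.normal(size=(NUM_PATCHES, VISION_DIM))   # encoder output
W_proj = rng.normal(scale=0.02, size=(VISION_DIM, LLM_DIM))  # projection layer

projected = visual_tokens @ W_proj                # now LLM-compatible
text_embeddings = rng.normal(size=(NUM_TEXT_TOKENS, LLM_DIM))

# The LLM consumes one sequence of visual tokens followed by text tokens.
llm_input = np.concatenate([projected, text_embeddings], axis=0)
```

In practice the projection is trained jointly (and is sometimes a small MLP rather than a single linear map), but the interface is the same: visual features become tokens in the LLM’s input sequence.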

Recent publications

Natural language processing (NLP) remains one of the fastest-evolving fields in AI, as new research continues to rapidly advance large language models (LLMs), systems for speech recognition and generation, language agents, and more. This technology is essential to many of today’s AI experiences, including Apple Intelligence and Siri, and fundamental research in NLP will be foundational to future AI.

Apple believes that privacy is a fundamental human right. As AI experiences become increasingly personal and a part of people’s daily lives, it’s important that novel privacy-preserving techniques are developed in parallel with advances in AI capabilities.
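One classic example of a privacy-preserving technique is differential privacy, where calibrated noise is added to a statistic so that its release reveals little about any single individual. The sketch below shows the standard Laplace mechanism on a hypothetical count; it illustrates the general technique, not any specific deployed system.

```python
import numpy as np

# Sketch of the Laplace mechanism from differential privacy: release a
# statistic plus noise scaled to sensitivity / epsilon, so any one
# person's contribution is masked. Values here are hypothetical.

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value with Laplace noise of scale sensitivity/epsilon."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(0)
count = 1000   # hypothetical exact count over user data
# A counting query has sensitivity 1: one person changes it by at most 1.
noisy = laplace_mechanism(count, sensitivity=1.0, epsilon=1.0, rng=rng)
```

Smaller epsilon gives stronger privacy at the cost of more noise; the sensitivity of the query bounds how much any one individual can shift the true answer.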