A Small-Scale System for Autoregressive Program Synthesis Enabling Controlled Experimentation
Authors: Russ Webb, Jason Ramapuram
What research can be pursued with small models trained to complete true programs? Typically, researchers study program synthesis via large language models (LLMs), which introduce issues such as difficulty knowing what is in or out of distribution, understanding fine-tuning effects, understanding the effects of tokenization, and a higher demand on compute and storage to carry out experiments. We present a system called Cadmus which includes an integer virtual machine (VM), a dataset composed of true programs spanning diverse tasks, and an autoregressive transformer model that is trained for under $200 of compute cost. The system can be used to study program completion, out-of-distribution representations, inductive reasoning, and instruction following in a setting where researchers have effective and affordable fine-grained control of the training distribution and the ability to inspect and instrument models. Smaller models working on complex reasoning tasks enable instrumentation and investigations that may be prohibitively expensive on larger models. To demonstrate that these tasks are complex enough to be of interest, we show that these Cadmus models outperform GPT-5 (achieving 100% accuracy versus GPT-5's 95%) even on a simple task of completing correct, integer arithmetic programs in our domain-specific language (DSL), while providing transparency into the dataset's relationship to the problem. We also show that GPT-5 brings unknown priors into its reasoning process when solving the same tasks, demonstrating a confounding factor that prevents the use of large-scale LLMs for some investigations where the training set's relationship to the task needs to be fully understood.
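To make the idea of "true programs" on an integer VM concrete, here is a minimal illustrative sketch. Note that the actual Cadmus DSL, opcodes, and VM semantics are not specified in this abstract; the `run` function and its instruction set below are hypothetical stand-ins showing how a program's ground-truth output can be computed by execution, so that a model's completion can be scored exactly.

```python
def run(program, stack=None):
    """Execute a tiny stack-based integer program (hypothetical DSL).

    Each instruction is either an int literal (pushed onto the stack)
    or one of the operators 'add', 'sub', 'mul'.
    Returns the integer on top of the stack after execution.
    """
    stack = [] if stack is None else list(stack)
    ops = {
        "add": lambda a, b: a + b,
        "sub": lambda a, b: a - b,
        "mul": lambda a, b: a * b,
    }
    for instr in program:
        if isinstance(instr, int):
            stack.append(instr)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(ops[instr](a, b))
    return stack[-1]

# A "true program": its result is fully determined by execution,
# so a model asked to complete the final value can be checked
# against the VM with no ambiguity.
assert run([2, 3, "add", 4, "mul"]) == 20  # (2 + 3) * 4
```

Because every program in such a dataset is executable, the training distribution is fully controlled: a researcher knows exactly which programs, operators, and value ranges the model has seen, unlike with a web-scale LLM corpus.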