SQUIRE: Interactive UI Authoring via Slot QUery Intermediate REpresentations
Authors: Alan Leung, Ruijia Cheng, Jason Wu, Jeffrey Nichols, Titus Barik
Frontend developers create UI prototypes to evaluate alternatives, which is a time-consuming process of repeated iteration and refinement. Generative AI code assistants enable rapid prototyping simply by prompting through a chat interface rather than writing code. However, while this interaction gives developers flexibility since they can write any prompt they wish, it makes it challenging to control what is generated. First, natural language on its own can be ambiguous, making it difficult for developers to precisely communicate their intentions. Second, the model may respond unpredictably, requiring the developer to re-prompt through trial and error to repair any undesired changes. To address these weaknesses, we introduce Squire, a system designed for guided prototype exploration and refinement. In Squire, the developer incrementally builds a UI component tree by pointing and clicking on different alternatives suggested by the system. Additional affordances let the developer refine the appearance of the targeted UI. All interactions are explicitly scoped, with guarantees on which portions of the UI will and will not be mutated. The system is supported by a novel intermediate representation called SquireIR, with language support for controlled exploration and refinement. Through a user study in which 11 frontend developers used Squire to implement mobile web prototypes, we find that developers effectively explore and iterate on different UI alternatives with high levels of perceived control. Developers additionally scored Squire positively for usability and general satisfaction. Our findings suggest strong potential for controllable code generation in rapid UI prototyping tools that combine chat with explicitly scoped affordances.
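To make the idea of explicitly scoped edits over a UI component tree concrete, here is a hypothetical sketch of what a slot-based intermediate representation might look like. This is not the paper's actual SquireIR; the `UINode` type, the slot-path addressing scheme, and the `editSlot` helper are all illustrative assumptions. The key property it demonstrates is that an edit is scoped to one named slot, so every node outside the target path is structurally shared and guaranteed unmodified.

```typescript
// Hypothetical sketch (not the actual SquireIR): a UI tree whose nodes
// expose named slots, with edits addressed by a slot path.
type UINode = {
  kind: string;                      // e.g. "Card", "Button"
  props: Record<string, string>;
  slots: Record<string, UINode[]>;   // named child lists
};

// Replace the contents of one slot, returning a new tree. The path
// alternates slot name and child index, e.g. ["body", "0", "footer"].
// Untouched subtrees are reused as-is, so nothing outside the target
// slot is mutated.
function editSlot(root: UINode, path: string[], children: UINode[]): UINode {
  if (path.length === 1) {
    return { ...root, slots: { ...root.slots, [path[0]]: children } };
  }
  const [slot, indexStr, ...rest] = path;
  const index = Number(indexStr);
  const updated = root.slots[slot].map((child, i) =>
    i === index ? editSlot(child, rest, children) : child
  );
  return { ...root, slots: { ...root.slots, [slot]: updated } };
}

const card: UINode = {
  kind: "Card",
  props: {},
  slots: {
    header: [{ kind: "Title", props: { text: "Hello" }, slots: {} }],
    body:   [{ kind: "Text",  props: { text: "Draft" }, slots: {} }],
  },
};

// Swap the body for a button; the header subtree is reused unchanged.
const next = editSlot(card, ["body"], [
  { kind: "Button", props: { label: "OK" }, slots: {} },
]);
```

Because `editSlot` rebuilds only the spine from the root to the edited slot, checking `next.slots.header === card.slots.header` confirms by object identity that the header was left untouched, which is one simple way such a system could enforce its "will not be mutated" guarantee.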
Misty: UI Prototyping Through Interactive Conceptual Blending
August 15, 2025 · Research areas: Human-Computer Interaction; Tools, Platforms, Frameworks · Conference: CHI
UI prototyping often involves iterating and blending elements from examples such as screenshots and sketches, but current tools offer limited support for incorporating these examples. Inspired by the cognitive process of conceptual blending, we introduce a novel UI workflow that allows developers to rapidly incorporate diverse aspects from design examples into work-in-progress UIs. We prototyped this workflow as Misty. Through an exploratory…
ILuvUI: Instruction-Tuned Language-Vision Modeling of UIs from Machine Conversations
July 14, 2025 · Research area: Human-Computer Interaction · Conference: IUI
Multimodal Vision-Language Models (VLMs) enable powerful applications from their fused understanding of images and language, but many perform poorly on UI tasks due to the lack of UI training data. In this paper, we adapt a recipe for generating paired text-image training data for VLMs to the UI domain by combining existing pixel-based methods with a Large Language Model (LLM). Unlike prior art, our method requires no human-provided annotations,…