
Typical educational robotics approaches rely on imperative programming for robot navigation. However, with the increasing presence of AI in everyday life, these approaches miss an opportunity to introduce machine learning (ML) techniques grounded in an authentic and engaging learning context. Furthermore, the need for costly specialized equipment and ample physical space creates barriers that limit access to robotics experiences for many learners. We propose ARtonomous, a relatively low-cost, virtual alternative to physical, programming-only robotics kits. With ARtonomous, students employ reinforcement learning (RL) alongside code to train and customize virtual autonomous robotic vehicles. Through a study evaluating ARtonomous, we found that middle-school students developed an understanding of RL, reported high levels of engagement, and demonstrated curiosity about learning more about ML. This research demonstrates the feasibility of an approach like ARtonomous for 1) eliminating barriers to robotics education and 2) promoting student learning and interest in RL and ML.
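
The abstract does not describe ARtonomous's training algorithm, but to make the RL idea concrete, the sketch below shows tabular Q-learning on a toy grid world: the kind of trial-and-error loop a virtual vehicle could use to learn a navigation policy. The environment, rewards, and hyperparameters here are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch only: tabular Q-learning on a toy 5x5 grid,
# approximating the trial-and-error training a virtual vehicle might
# undergo. All values below are hypothetical, not from ARtonomous.
import random

GRID = 5                                       # 5x5 grid world
GOAL = (4, 4)                                  # cell the "vehicle" must reach
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # right, left, down, up

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2          # learning rate, discount, exploration

# Q-table: state (x, y) -> estimated value of each action
Q = {(x, y): [0.0] * len(ACTIONS) for x in range(GRID) for y in range(GRID)}

def step(state, action_idx):
    """Apply an action; reward +10 at the goal, -1 per move otherwise."""
    dx, dy = ACTIONS[action_idx]
    nx = min(max(state[0] + dx, 0), GRID - 1)  # clamp to grid bounds
    ny = min(max(state[1] + dy, 0), GRID - 1)
    next_state = (nx, ny)
    if next_state == GOAL:
        return next_state, 10.0, True
    return next_state, -1.0, False

for episode in range(500):
    state, done = (0, 0), False
    while not done:
        # Epsilon-greedy: occasionally explore, otherwise exploit the Q-table.
        if random.random() < EPSILON:
            a = random.randrange(len(ACTIONS))
        else:
            a = Q[state].index(max(Q[state]))
        next_state, reward, done = step(state, a)
        # Q-learning update: move the estimate toward the observed reward
        # plus the discounted best value of the next state.
        Q[state][a] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][a])
        state = next_state
```

After training, reading off the argmax action in each state yields a greedy navigation policy; in a classroom setting, students could tune EPSILON or the reward values and observe how the learned behavior changes.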

Related readings and updates.

ELEGNT: Expressive and Functional Movement Design for Non-Anthropomorphic Robot

Nonverbal behaviors such as posture, gestures, and gaze are essential for conveying internal states, both consciously and unconsciously, in human interaction. For robots to interact more naturally with humans, robot movement design should likewise integrate expressive qualities—such as intention, attention, and emotions—alongside traditional functional considerations like task fulfillment, spatial constraints, and time efficiency. In this paper…

ARMADA: Augmented Reality for Robot Manipulation and Robot-Free Data Acquisition

Teleoperation for robot imitation learning is bottlenecked by hardware availability. Can high-quality robot data be collected without a physical robot? We present a system for augmenting Apple Vision Pro with real-time virtual robot feedback. By providing users with an intuitive understanding of how their actions translate to robot motions, we enable the collection of natural barehanded human data that is compatible with the limitations of…