
Typical educational robotics approaches rely on imperative programming for robot navigation. However, with the increasing presence of AI in everyday life, these approaches miss an opportunity to introduce machine learning (ML) techniques grounded in an authentic and engaging learning context. Furthermore, the need for costly specialized equipment and ample physical space creates barriers that keep robotics experiences from reaching all learners. We propose ARtonomous, a relatively low-cost, virtual alternative to physical, programming-only robotics kits. With ARtonomous, students employ reinforcement learning (RL) alongside code to train and customize virtual autonomous robotic vehicles. Through a study evaluating ARtonomous, we found that middle-school students developed an understanding of RL, reported high levels of engagement, and demonstrated curiosity about learning more about ML. This research demonstrates the feasibility of an approach like ARtonomous for 1) eliminating barriers to robotics education and 2) promoting student learning and interest in RL and ML.
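For readers unfamiliar with the kind of training loop students interact with, the following is a minimal sketch of tabular Q-learning for a toy lane-keeping vehicle. The 1-D track, reward function, and hyperparameters are illustrative assumptions and do not reflect ARtonomous's actual environment or curriculum.

```python
import random

# Toy RL example: tabular Q-learning for a vehicle on a 1-D track with lane
# positions 0..4. The agent is rewarded for staying near the track center.
ACTIONS = ["left", "stay", "right"]
TRACK_CENTER = 2

def reward(pos):
    return 1.0 if pos == TRACK_CENTER else -abs(pos - TRACK_CENTER)

def step(pos, action):
    delta = {"left": -1, "stay": 0, "right": 1}[action]
    return max(0, min(4, pos + delta))

q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # illustrative hyperparameters

for episode in range(500):
    pos = random.randrange(5)
    for _ in range(20):
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(pos, a)])
        nxt = step(pos, action)
        # Q-learning update toward reward plus discounted best next value.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(pos, action)] += alpha * (reward(nxt) + gamma * best_next - q[(pos, action)])
        pos = nxt

# Print the greedy policy the agent has learned for each lane position.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(5)})
```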

Related readings and updates.

Safe Real-World Reinforcement Learning for Mobile Agent Obstacle Avoidance

Collision avoidance is key for mobile robots and agents to operate safely in the real world. In this work, we present an efficient and effective collision avoidance system that combines real-world reinforcement learning (RL), search-based online trajectory planning, and automatic emergency intervention, e.g., automatic emergency braking (AEB). The goal of the RL component is to learn effective search heuristics that speed up the search for collision-free…
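As a rough illustration of a learned heuristic ordering a trajectory search, with an emergency-brake fallback when no safe plan is found, here is a minimal sketch. The grid world, the distance-to-goal stand-in for the learned heuristic, and the plan/act helpers are illustrative assumptions, not the paper's system.

```python
import heapq

def learned_heuristic(state, goal):
    # In the paper's framing this value would be learned with real-world RL;
    # a plain Euclidean distance-to-goal stands in for it here.
    dx, dy = goal[0] - state[0], goal[1] - state[1]
    return (dx * dx + dy * dy) ** 0.5

def in_collision(state, obstacles):
    return state in obstacles

def plan(start, goal, obstacles, max_expansions=10_000):
    """Greedy best-first search over a grid, ordered by the heuristic."""
    frontier = [(learned_heuristic(start, goal), start, [start])]
    visited = set()
    for _ in range(max_expansions):
        if not frontier:
            break
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (state[0] + dx, state[1] + dy)
            if nxt not in visited and not in_collision(nxt, obstacles):
                heapq.heappush(frontier, (learned_heuristic(nxt, goal), nxt, path + [nxt]))
    return None  # no collision-free plan found within the budget

def act(path):
    # Automatic emergency intervention: brake when planning fails.
    return "AEB: stop" if path is None else f"follow plan of {len(path)} waypoints"

if __name__ == "__main__":
    obstacles = {(3, y) for y in range(-2, 5)}
    print(act(plan((0, 0), (8, 6), obstacles)))
```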

Robust Robotic Control from Pixels Using Contrastive Recurrent State-Space Models

Modeling the world can benefit robot learning by providing a rich training signal for shaping an agent's latent state space. However, learning world models in unconstrained environments over high-dimensional observation spaces such as images is challenging. One source of difficulty is the presence of irrelevant but hard-to-model background distractions, and unimportant visual details of task-relevant entities. We address this issue by learning a…
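To make the ingredients concrete, below is a minimal PyTorch sketch of a recurrent state-space model trained with a contrastive (InfoNCE) objective against encoded observations rather than pixel reconstruction, which is the general idea of dropping hard-to-model background detail. The MLP encoder, layer sizes, and loss details are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveRSSM(nn.Module):
    """Sketch of a recurrent state-space model with a contrastive latent loss."""

    def __init__(self, obs_dim=64, act_dim=4, latent_dim=32, hidden_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden_dim), nn.ReLU(),
                                     nn.Linear(hidden_dim, latent_dim))
        self.rnn = nn.GRUCell(latent_dim + act_dim, hidden_dim)
        self.prior = nn.Linear(hidden_dim, latent_dim)                # predicts next latent
        self.post = nn.Linear(hidden_dim + latent_dim, latent_dim)    # filters with the observation

    def forward(self, obs_seq, act_seq):
        # obs_seq: (T, B, obs_dim); act_seq: (T, B, act_dim)
        T, B, _ = obs_seq.shape
        h = obs_seq.new_zeros(B, self.rnn.hidden_size)
        z = obs_seq.new_zeros(B, self.prior.out_features)
        loss = 0.0
        for t in range(T):
            h = self.rnn(torch.cat([z, act_seq[t]], dim=-1), h)
            prior_z = self.prior(h)                      # model's prediction
            emb = self.encoder(obs_seq[t])               # target embedding
            z = self.post(torch.cat([h, emb], dim=-1))   # posterior latent
            # InfoNCE: each predicted latent should match its own observation,
            # with other observations in the batch serving as negatives.
            logits = prior_z @ emb.t()
            labels = torch.arange(B, device=logits.device)
            loss = loss + F.cross_entropy(logits, labels)
        return loss / T

if __name__ == "__main__":
    model = ContrastiveRSSM()
    obs = torch.randn(10, 16, 64)   # toy sequence: 10 steps, batch of 16
    act = torch.randn(10, 16, 4)
    print(model(obs, act).item())
```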