MicroWorkshop | Robotics

Welcome to the Micro-Workshop on Robotics Research! Presenting will be an impressive group of experts who together cover a considerable breadth of robotics research – including human-robot interaction, storm tracking via UAVs, underwater robotics, and autonomous vehicles. Dr. Christopher Clark will moderate the workshop. Any questions can be emailed directly to him at clarkAThmc.edu.

Scheduled Speakers

May 27, 11 AM-12 PM (PST)

Maja Mataric | Socially Assistive Robotics: Reshaping Interaction and Care

  • The nexus of in-home intelligent assistants, activity tracking, and machine learning creates opportunities for personalized virtual and physical agents/robots that can positively impact user health and quality of life. Well beyond providing information, such agents can serve as physical and mental health and education coaches and companions that support positive behavior change. However, sustaining user engagement and motivation over long-term interactions presents complex challenges. Our work over the past 15 years has addressed those challenges by developing human-machine (human-robot) interaction methods for socially assistive robotics that utilize multi-modal interaction data and expressive agent behavior to monitor, coach, and motivate users to engage in health- and wellness-promoting activities. This talk will very briefly touch on methods and results of modeling, learning, and personalizing user motivation, engagement, and coaching of healthy children and adults, as well as stroke patients, Alzheimer’s patients, and children with autism spectrum disorders, in short- and long-term (month+) deployments in schools, therapy centers, and homes.

Eric Frew | Designing and Using Flying Robots to Study Tornados, Nature’s Most Violent Storms

  • The ability to understand and predict the dynamic behavior of our planet’s environment over multiple spatial and temporal scales remains an outstanding challenge for science and engineering. For the past 40+ years, sensors on spaceborne platforms have increasingly surveilled most of the Earth’s surface and atmosphere. However, a summary of US severe weather data shows no sustained reduction of fatalities or property losses. The Research and Engineering Center for Unmanned Vehicles at the University of Colorado Boulder has designed and deployed unmanned aircraft into supercell thunderstorms in order to improve our understanding and ability to forecast tornados, nature’s most violent storms. This presentation will describe the need for taking measurements inside severe storms, the design of an unmanned aircraft system to collect these measurements, and results from various field campaigns deploying aerial robots in severe weather.

Steve Waslander | Teaching Robots to See: 3D Perception for Autonomous Driving

  • The human world is a dynamic place, with people and vehicles moving freely and unpredictably throughout the environment. For robots to integrate safely into the human environment, a detailed understanding of the state and possible actions of all actors is needed in three-dimensional space. In this talk, I’ll briefly summarize the challenge of robust dynamic perception for robotics, and highlight how deep convolutional neural networks have enabled exciting advances in 3D object detection. I’ll also present some of the interesting approaches to 3D object detection being developed in my lab, specifically for the application of autonomous driving.

Geoff Hollinger | Marine Robotics: Planning, Decision Making, and Human-Robot Learning

  • Underwater gliders, propeller-driven submersibles, and other marine robots are increasingly being tasked with gathering information (e.g., in environmental monitoring, offshore inspection, and coastal surveillance scenarios). However, in most of these scenarios, human operators must carefully plan the mission to ensure completion of the task. Strict human oversight not only makes such deployments expensive and time-consuming but also makes some tasks impossible due to the requirement of reliable communication between the operator and the vehicle. We can mitigate these limitations by making the robotic information gatherers semi-autonomous, where the human provides high-level input to the system and the vehicle fills in the details on how to execute the plan. In this talk, I will show how a general framework that unifies information-theoretic optimization and physical motion planning makes semi-autonomous information gathering feasible in marine environments, allowing for optimized data collection across scientific, defense, and commercial applications.