Monday, June 30th
Talk 1: Davide Scaramuzza, University of Zurich, Switzerland
Towards Agile Flight of Vision-controlled Micro Flying Robots: from Frame-based to Event-based Vision
In the last two years, we have heard a lot of news about Micro Aerial Vehicles (MAVs), in the form of small quadrotors. Quadrotors have numerous advantages over ground vehicles: they can easily reach environments that no human can access, and they have greater agility and navigational capabilities than ground vehicles. Unfortunately, their dynamics make them extremely difficult to control, particularly in the absence of external positioning systems such as GPS. In this talk, I will give an overview of my research activities on visual navigation of MAVs, from slow navigation (using standard frame-based cameras) to agile flight using event-based cameras.
Davide Scaramuzza (1980, Italian) is Assistant Professor of Robotics and Computer Vision at the University of Zurich. He is founder and director of the Robotics and Perception Group (http://rpg.ifi.uzh.ch). His research interests are robot vision and visually-guided micro aerial vehicles. He received his PhD (2008) in Robotics and Computer Vision at ETH Zurich (with Roland Siegwart). He was a Postdoc at both ETH Zurich and the University of Pennsylvania (with Vijay Kumar and Kostas Daniilidis). From 2009 to 2012, he led the European project “sFly”, which introduced the world’s first autonomous navigation of micro quadrotors in GPS-denied environments using onboard cameras as the main sensor modality. For his research contributions, he was awarded the IEEE Robotics and Automation Society Early Career Award, a Google Research Award (2014), the European Young Researcher Award (2012), and the Robotdalen Scientific Award (2009). He is coauthor of the 2nd edition of the book “Introduction to Autonomous Mobile Robots” (MIT Press). He is also author of the first open-source Omnidirectional Camera Calibration Toolbox for MATLAB, which, in addition to being downloaded thousands of times worldwide, is used at NASA, Philips, Bosch, and Daimler. He is also a consultant for several companies and agencies, such as Dacuda, Sensefly, and the United Nations International Atomic Energy Agency within the Fukushima Action Plan. Finally, he is author of numerous papers in top-ranked robotics and computer vision journals. His broader research interests include field and service robotics, intelligent vehicles, and computer vision. Specifically, he investigates the use of cameras as the main sensors for robot navigation, mapping, exploration, reasoning, and interpretation. His interests encompass both ground and flying vehicles.
Talk 2: Angela Schoellig, University of Toronto, Canada
During the last decade, great progress has been made in vehicle sensing, control, and communication technology. While these fields have largely been studied independently, they heavily influence each other in real-world robot applications. In this talk, I will transition from my earlier work, which focused on developing multi-vehicle control and planning algorithms in controlled, indoor environments, to recent work on aerial and ground robot applications in uncontrolled, outdoor environments. In particular, I will present (i) a stereo-camera-equipped rover that learns to traverse unknown, rough terrain using vision only, with applications to mining, agriculture, and space exploration, and (ii) a multi-aerial-vehicle system for environmental monitoring. In both examples, the overall system performance can be improved by taking into account the inherent coupling of sensing, control, and communication. I will conclude by highlighting that sensing, control, and communication must be designed together to achieve high performance in real-world robot applications.
Talk 3: Gianni di Caro, IDSIA Lugano, Switzerland
The Cooperative Multi-Robot Observation of Multiple Moving Targets (CMOMMT) is a class of NP-hard problems in which mobile robots equipped with limited-range sensors are used to keep under observation a (possibly larger) set of mobile targets. The robots cooperatively aim to maximize some measure of the overall time during which each target falls within the sensing range of at least one robot over the mission’s time horizon. Depending on who or what plays the role of the moving target (e.g., an intruder, a rescuer, a vehicle in a convoy), the CMOMMT class of problems finds a number of applications in different domains, such as surveillance, environmental monitoring, transportation, and search and rescue. The talk will present a novel multi-objective optimization model for CMOMMT scenarios, aiming to obtain balanced observations among all targets while maximizing the overall time each target is observed by at least one robot. The proposed integer linear formulation of the optimization problem exploits available knowledge about the targets’ motion patterns, and it employs a Bayesian framework to continually update spatial maps that associate with each portion of the environment the probability of its being occupied by a moving target at a specified future time. An empirical analysis of the model’s performance is carried out in simulation, using the ROS model of a flying drone as the reference robot. Multiple scenarios have been considered to study the effect of varying the number of robots, the targets’ mobility, and the prediction accuracy. Both centralized and distributed implementations will be shown and compared, evaluating the impact of multi-hop communications and limited information sharing among the robots.
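The Bayesian map update mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes a 1-D grid of cells, a single target, a simple random-walk motion model, and a noisy detector; all function names and parameters (`stay_prob`, `p_hit`, `p_false`) are illustrative.

```python
# Hypothetical sketch of a Bayesian occupancy-probability map for one target.
# Each cell holds the probability that the target occupies it; a motion model
# diffuses probability between cells (prediction), and each robot observation
# updates the map via Bayes' rule (correction). Parameters are illustrative.

def predict(belief, stay_prob=0.6):
    """Diffuse occupancy probability to neighbouring cells (circular 1-D grid)."""
    n = len(belief)
    move = (1.0 - stay_prob) / 2.0  # equal chance of moving left or right
    new = [0.0] * n
    for i, p in enumerate(belief):
        new[i] += p * stay_prob
        new[(i - 1) % n] += p * move
        new[(i + 1) % n] += p * move
    return new

def update(belief, observed_cell, detected, p_hit=0.9, p_false=0.1):
    """Bayes update after a robot senses a single cell with a noisy detector."""
    new = []
    for i, p in enumerate(belief):
        if i == observed_cell:
            likelihood = p_hit if detected else (1.0 - p_hit)
        else:
            # A detection while the target is elsewhere is a false positive.
            likelihood = p_false if detected else (1.0 - p_false)
        new.append(likelihood * p)
    total = sum(new)
    return [p / total for p in new]  # renormalize to a probability distribution

# Uniform prior over 5 cells; predict one step, then incorporate a detection
# in cell 2. The posterior mass concentrates on the observed cell.
belief = [0.2] * 5
belief = predict(belief)
belief = update(belief, observed_cell=2, detected=True)
print(max(range(5), key=lambda i: belief[i]))  # cell 2 is now most likely
```

In the multi-robot setting described in the talk, each robot would run such an update with its own observations and exchange map estimates over (possibly multi-hop) communication links, which is exactly the coupling the centralized and distributed variants trade off.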