Multi-agent reinforcement learning for decentralized UAV/UGV cooperative exploration
Over the last decade, artificial intelligence has flourished: from a research niche, it has developed into a versatile tool, seemingly en route to bringing automation into every aspect of human life. At the same time, robotics technology has advanced significantly, and inexpensive multi-robot systems promise to accomplish tasks that require both physical parallelism and inherent fault tolerance, such as surveillance and extreme-environment exploration. Decentralized control laws are key to achieving the reliability of these systems, as they eliminate the risks posed by single points of failure. Yet the effective synthesis of (i) machine learning, (ii) multi-robot approaches, and (iii) field robotics is no small task: previous machine learning and distributed control research rarely ventures beyond computer simulations.

GDLS-C and the University of Toronto will investigate how to effectively use multi-agent reinforcement learning in field robotics. GDLS-C's goal is to improve the situational awareness of ground vehicles by using swarms of Unmanned Aerial Vehicles (UAVs). Learning decentralized cooperation strategies will improve the resilience of these multi-robot systems, which may face adversarial environments, and, ultimately, the safety of their human operators. Answering our research questions will also enable large collections of robots to learn how to interact with one another, beyond what human designers can attain.
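To make the core idea concrete, the following is a minimal sketch (not the project's actual system) of decentralized multi-agent reinforcement learning: two independent Q-learners share a team reward but each updates only its own value table, with no central controller. The toy "exploration" task, action names, and hyperparameters are all illustrative assumptions.

```python
import random

# Toy sketch of decentralized cooperative learning (illustrative only).
# Two agents each pick one of two "regions" to explore; the shared team
# reward is 1 only when they split coverage (choose different regions),
# so cooperation must emerge from independent, local updates.

ACTIONS = [0, 1]          # hypothetical region indices
ALPHA = 0.1               # learning rate (assumed)
EPSILON = 0.1             # exploration rate (assumed)
EPISODES = 5000

random.seed(0)
q = [{a: 0.0 for a in ACTIONS} for _ in range(2)]  # one Q-table per agent

def act(agent_q):
    """Epsilon-greedy action selection from an agent's own Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)          # explore
    return max(agent_q, key=agent_q.get)       # exploit

for _ in range(EPISODES):
    a0, a1 = act(q[0]), act(q[1])
    reward = 1.0 if a0 != a1 else 0.0          # shared team reward
    # Each agent updates alone, using only its own action and the reward.
    q[0][a0] += ALPHA * (reward - q[0][a0])
    q[1][a1] += ALPHA * (reward - q[1][a1])

policy = [max(qt, key=qt.get) for qt in q]
print(policy)  # the two agents should settle on different regions
```

Even this tiny example shows the appeal of the decentralized approach described above: neither agent ever observes the other's action, yet the pair converges to complementary behavior, and removing one agent leaves the other's learned policy intact.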