The availability and advantages of distributed electric propulsion have opened a new design space for flying vehicles. The rise of small multi-copter vehicles and, more recently, the concept of on-demand urban air transport favor autonomous vehicles, because pilots negate the potential cost savings and limit both vehicle designs and the corresponding market.
For example, multi-copters have the potential to rapidly reach a large set of relevant viewpoints for inspection, cinematography, and mapping. However, to reach this potential these vehicles need to be aware of and react to their limitations, adapt and learn what “relevant” means, and respond to a changing world in a safe manner.
In my research, I use the terms model and adaptation broadly: a model can be, for example, the robot dynamics, the error dynamics of state estimation, a wind model, a planning abstraction, or a perception representation. Adaptation based on “error signals” can be as simple as an integrator or as involved as self-supervised learning or offline training of CNNs.
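To make the integrator case concrete, here is a minimal sketch (all names and numbers are hypothetical, not from any specific system): a constant wind offset biases a first-order velocity model, and an integrator on the prediction error accumulates a disturbance estimate that converges to the true offset.

```python
# Hypothetical sketch: adaptation as an integrator on an error signal.
# An unmodeled constant wind `true_wind` biases a first-order velocity
# model; integrating the prediction error recovers it as `wind_est`.

def simulate(true_wind=2.0, gain=0.5, steps=200, dt=0.05):
    wind_est = 0.0   # adapted disturbance estimate
    v = 0.0          # true velocity state
    for _ in range(steps):
        cmd = 1.0                                  # commanded acceleration
        v_pred = v + (cmd + wind_est) * dt         # model prediction
        v = v + (cmd + true_wind) * dt             # "true" dynamics
        error = v - v_pred                         # feedback / error signal
        wind_est += gain * error                   # integrator-style update
    return wind_est                                # converges toward true_wind
```

The same error-integration structure generalizes: replace the scalar estimate with a learned function of the state and the integrator becomes a self-supervised regression target.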
Current approaches to improve the operation of these vehicles fall short: they can either reactively keep the vehicle safe, achieve high performance in the nominal case, or adapt but fail to perform relevant missions. These shortcomings arise because approaches either rely on static models and do not utilize the available feedback, or rely solely on training of policies.
I am interested in overcoming these limitations of robot autonomy by increasing the number of feedback signals utilized and automating adaptation to these signals in a safe way. This motivates me to answer two fundamental research questions:
- How can one improve the safety of adaptive autonomy systems without sacrificing performance, while respecting real-time constraints?
- What are appropriate adaptation mechanisms and signals that can improve the capability of real-world (flying) robots while maintaining the ability to give safety guarantees?
These research questions address fundamental limitations of today’s autonomous systems, where the complex interactions between different modules lead to fragile systems and unexpected behavior, and where the desired behavior is difficult to capture.
I am studying these questions by researching and applying machine learning to the traditional perception, state estimation, and planning components of an autonomy system that relies on strong models, and by blending this approach with model-free methods where appropriate.
Over the last several years, we have found that adaptive model-based approaches are powerful and deliver high performance when applied to autonomous flight. Models are powerful because they enable the abstraction of irrelevant detail and the ability to predict and to reason based on these predictions. However, we have also found that models are fragile and will fail if the modeling error is too large.
Any useful model makes simplifying assumptions, and as long as these assumptions hold, the overall system works well. Modeling errors are typically easy to find; how to adapt to them, however, is not obvious.
Model-free approaches, on the other hand, have the advantage of not placing an artificial constraint on the complexity of the model or on where model complexity needs to be expressed; however, their performance outside the observed data is unknown and validating their correctness is difficult.
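One simple way to blend the two paradigms is to keep a fixed physics model and fit only a model-free residual on its prediction errors. The sketch below (hypothetical setup, not a specific system) uses a nominal model that ignores drag and recovers the unmodeled drag coefficient by least squares on the observed errors:

```python
# Hypothetical sketch: blending a fixed physics model with a
# model-free residual. The nominal model predicts zero acceleration;
# a scalar residual coefficient is fit by least squares on the
# prediction errors, recovering the unmodeled -drag * v term.

def fit_residual(drag=0.3, steps=100, dt=0.05):
    v = 1.0           # velocity state
    num = 0.0         # least-squares accumulators for err ~= coeff * v
    den = 0.0
    for _ in range(steps):
        a_model = 0.0                  # nominal model: no drag
        a_true = -drag * v             # "true" dynamics with drag
        err = a_true - a_model         # residual error signal
        num += v * err
        den += v * v
        v += a_true * dt               # propagate true state
    return num / den                   # estimated coefficient, about -drag
```

Because the residual is fit only on the model's error, the physics model still carries the predictive structure, while the learned part stays small and interpretable.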
Over the next couple of years, I see us achieving a new level of safety and capability for flying systems. My team and I will achieve this through a shift from the current strongly model-based paradigms to a mixed (model-based/model-free) paradigm that can adapt in real time at multiple frequencies and levels. This will enable new applications beyond what is possible now, such as on-demand urban air transport. The methods we develop to address these challenges will lead to a new research agenda of safe real-time adaptation algorithms. In the longer term, these adaptive systems will raise the next set of questions: how to build systems that behave according to the designer’s intent with minimal external input.