Onsite talk by Prof. Karinne Ramirez-Amaro at Chalmers University of Technology
We are pleased to announce that Professor Karinne Ramirez-Amaro of Chalmers University of Technology and her student will give talks at NII.
You are most welcome to come and join us.
Explainable AI meets Robotics - Robots that Learn and Reason from Experiences
Advances in Collaborative Robots (Cobots) have accelerated with the development of novel data- and knowledge-driven methods. These methods allow robots, to some extent, to explain their decisions. This research area is known as Explainable AI and is gaining importance in the robotics community. One advantage of such methods is an increase in human trust toward Cobots, since robots can explain their decisions, especially when errors occur or when facing new situations. Explainability is a challenging and important component when deploying Cobots in real, dynamic environments.
In this talk, I will introduce a novel semantic-based learning method that generates compact and general models to infer human activities. I will also explain our current learning approaches that enable Cobots to learn from experience. Reasoning and learning from experience are key to developing general-purpose machine learning methods. These experiences allow robots to remember the best strategies to achieve a goal. Therefore, the next generation of robots should reason based on past experiences while providing explanations in case of errors, thus improving both the autonomy of robots and humans' trust in working with them.
Explainable Robot Decision Making and Failure Explanations
Robots are envisioned to support humans in daily activities like housework. However, robots are bound to fail, particularly when they act in human-centered environments. Therefore, explainable decision-making and failure explanation have been important research directions for the last few years. In this talk, I will present several of my contributions to this overall objective of explainable robotics.
First, I will discuss our approach to learning from human demonstrations and the prospect of using Augmented Reality to verify the learned behavior. Instead of learning motor policies that mirror the demonstration, we try to abstract the underlying intention of each demonstrated action and capture it in the form of symbolic, human-understandable planning operators. Then, I will present a novel method that allows robots to generate contrastive explanations of their execution failures based on causal Bayesian Networks (BNs). Finally, I will discuss ongoing research on failure prevention and on transferring prior experience to learn BNs more data-efficiently.
Prof. Karinne Ramirez-Amaro
Professor, Chalmers University of Technology
Dr. Karinne Ramirez-Amaro has been an Associate Professor at Chalmers University of Technology since March 2022. Previously, she was a post-doctoral researcher at the Technical University of Munich (TUM), Germany. She completed her Ph.D. (summa cum laude) at the Department of Electrical and Computer Engineering at TUM in 2015. She has received several awards, including the prize for an excellent doctoral degree by a female engineering student and the Google Anita Borg scholarship. In 2022, Karinne was elected as a member of the Administrative Committee (AdCom) of the IEEE Robotics and Automation Society (RAS), and she is the chair of the IEEE RAS Women in Engineering (WiE) Committee. Her research interests include Explainable AI, Semantic Representations, Cause-based Learning Methods, Collaborative Robotics, and Human Activity Recognition and Understanding.
10:00-11:30 / Friday, October 21st, 2022
Venue: Room 1512 at the National Institute of Informatics (National Center of Sciences)
If you would like to attend, send an email to the address below:
Email: inamura [at] nii.ac.jp
INAMURA Tetsunari - Principles of Informatics Research Division, NII