Trustworthy AI Seminar by Andreas Brännström (Umeå University, Sweden)
This seminar aims to introduce participants to the European Commission's guidelines on Trustworthy AI and facilitate discussion on the ethical considerations related to deception in AI.
Lecturer:
Andreas Brännström (Umeå University, Sweden)
Lecturer Bio:
Andreas Brännström is a computing science researcher with an academic background in cognitive science and computing science. He also has an industry background in software engineering, having been engaged in ICT startups and in full-stack development for a variety of municipal digitalization projects. He is currently a doctoral student at the Department of Computing Science, Umeå University, Sweden, where he belongs to the Responsible AI Research Group and the WASP-HS Graduate School. The goal of his project, "Strategic argumentation to deal with interactions between intelligent systems and humans", is to develop formal frameworks that provide AI systems with capabilities for comprehending and engaging with humans in a highly personalized manner. In this project, he has formalized dynamic models rooted in cognitive and psychological theories using formal methods such as Answer Set Programming (ASP), Formal Argumentation, and Formal Dialogues.
Part I:
Introduction to Trustworthy AI Guidelines
Objective and Procedure:
The primary objective of Part I is to familiarize participants with the seven key principles of Trustworthy AI as outlined by the European Commission. This will be achieved through a structured presentation covering each principle in turn. A short video (2-5 minutes) linked to each of the seven key principles will be shown, followed by an interactive Q&A session using the Menti platform (menti.com), allowing participants to contribute to a collective discussion.
Part II:
Use-case Discussion on Deception and AI
Objective and Procedure:
Part II of the seminar aims to explore the ethical challenges and considerations related to deception in AI through group discussions and collaborative exercises. The content will include an introduction to deception in AI, with definitions and examples. Participants will engage in a use-case-oriented group discussion on Trustworthy AI principles and deception in AI, reflecting on challenges and ethical concerns in this research area. The group discussions will be followed by a plenary session in which key points are summarized.
Data collection:
Data collection will be facilitated through anonymous sharing on Menti (menti.com) and an online survey (Microsoft Forms), allowing participants to contribute their insights without attribution.
Participation in this seminar and data sharing are entirely voluntary.
Reporting and Dissemination: The collected anonymous data will be analyzed using thematic analysis. The results will be compiled into a written report, which will be accessible to all participants.
Time/Date:
13:00-15:00 / Tuesday, 20 August 2024
Place:
NII Room #1509 (15F)
Contact:
If you would like to join, please contact us by email.
Email: inoue [at] nii.ac.jp