AI seminar by Jean-Marie Lagniez and Gauvain Bourgne
On Tuesday, May 16, we will hold the AI seminar (on explanation and ethics in AI) with two invited guest speakers, Jean-Marie Lagniez and Gauvain Bourgne.
On computing abductive explanations for ensemble learning models
Random forests and boosted regression trees have long been considered powerful ensemble models in machine learning. By training multiple decision trees, whose diversity is fostered through data and feature subsampling, the resulting classifier can produce more stable and reliable predictions than a single decision tree. This, however, comes at the cost of decreased interpretability: while decision trees are often easy to interpret, the predictions made by ensemble models are much harder to understand. In this talk, we will examine different types of reasons that explain "why" an input instance is classified as positive or negative by the classifier. Notably, as an alternative to prime-implicant explanations, which take the form of subset-minimal implicants of these classifiers, we introduce the notion of tree-specific reasons. For these abductive explanations, the tractability of the generation problem (finding one reason) and the optimization problem (finding one minimum-sized reason) is investigated. Unlike prime-implicant explanations, majoritary reasons may contain redundant features. However, in practice, prime-implicant explanations, for which the identification problem is generally DP-complete, are only slightly larger than tree-specific reasons, which can be generated by a simple linear-time greedy algorithm.
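To give a flavour of the kind of abductive explanation discussed in the talk, here is a minimal, illustrative sketch over a toy voting ensemble of Boolean "trees" on binary features. All names are hypothetical, and the exhaustive sufficiency check used here is exponential in the number of free features in general; the tree-specific reasons of the talk avoid such a check, which is what makes their linear-time greedy generation possible.

```python
from itertools import product

# Toy ensemble over three binary features x0, x1, x2. Each "tree" is just a
# Boolean classifier; in the talk's setting these would be decision trees
# trained in a random forest.
trees = [
    lambda x: x[0] and x[1],
    lambda x: x[0] or x[2],
    lambda x: bool(x[1]),
]

def forest(x):
    """Majority vote of the ensemble."""
    return sum(t(x) for t in trees) >= 2

def is_sufficient(partial, n=3):
    """Check that fixing the features in `partial` (index -> value) forces
    the forest's prediction, however the remaining features are set."""
    target = None
    for bits in product([0, 1], repeat=n):
        x = list(bits)
        for i, v in partial.items():
            x[i] = v
        pred = forest(x)
        if target is None:
            target = pred
        elif pred != target:
            return False
    return True

def greedy_reason(x):
    """One greedy pass: start from the full instance and drop each feature
    whose removal keeps the remaining assignment sufficient. The result is
    an abductive explanation (sufficient reason), possibly with redundancy
    depending on the order in which features are examined."""
    reason = dict(enumerate(x))
    for i in list(reason):
        candidate = {j: v for j, v in reason.items() if j != i}
        if is_sufficient(candidate):
            reason = candidate
    return reason

print(greedy_reason([1, 1, 0]))  # a sufficient subset of the instance
```

On the instance [1, 1, 0], the greedy pass keeps only the assignments to x0 and x1, which already guarantee a majority of positive trees whatever x2 is.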
Jean-Marie Lagniez, Professor, CRIL-CNRS, University of Artois, France
ACE framework for representing, comparing and combining different ethical principles
Computational ethics is a growing field of AI concerned with the design of ethical agents and the modelling of ethical reasoning using philosophical foundations from normative ethics. The focus of ethical reasoning is to determine which decision could be considered right in a given context, and different ethical approaches have been put forth by philosophers, such as deontological approaches, where courses of action are assessed with respect to their adherence to some rule of conduct (Duty), or consequentialism, which considers that an action should be assessed through the general state of affairs brought about by its consequences (evaluating a notion of Good). To compare such approaches, it is important to represent factual knowledge about the context and situation in a unified way. The ACE framework models ethical reasoning through three distinct layers: an Action layer, focusing on factual knowledge about the situation and the effects of actions; a Causality layer, which analyses the different scenarios to assess the causal relations linking actions with events; and an Ethical layer, in which the different principles each conclude on the permissibility (or not) of the different alternatives. Then, since ethical conflicts often arise from conflicting values or incompatible rules of conduct, we will discuss how to combine narrow principles into a broader scheme, focusing on some ordinal approaches where each value induces a partial preference among the possible alternatives.
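The ordinal combination idea at the end of the abstract can be illustrated with a small sketch. This is not the ACE implementation; the alternatives, the two principles, and their verdicts are invented for illustration. Each principle contributes a partial preference as a set of (better, worse) pairs, and one simple ordinal combination keeps the alternatives that no other alternative Pareto-dominates.

```python
# Hypothetical dilemma with three alternatives.
alternatives = ["lie", "stay_silent", "tell_truth"]

# Each principle induces a partial preference: a set of (better, worse) pairs.
duty = {("tell_truth", "lie"), ("stay_silent", "lie")}         # deontological: lying violates a rule
good = {("lie", "tell_truth"), ("stay_silent", "tell_truth")}  # consequentialist: the truth causes harm here

principles = [duty, good]

def dominates(a, b):
    """a Pareto-dominates b: some principle strictly prefers a to b,
    and no principle strictly prefers b to a."""
    better = any((a, b) in p for p in principles)
    worse = any((b, a) in p for p in principles)
    return better and not worse

def undominated(alts):
    """Alternatives that survive the ordinal combination of all principles."""
    return [a for a in alts if not any(dominates(b, a) for b in alts)]

print(undominated(alternatives))
```

Here Duty and Good disagree on lying versus truth-telling, so neither dominates the other, and the only undominated alternative is staying silent: a toy instance of how combining partial preferences can resolve a conflict that each narrow principle leaves open.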
Gauvain Bourgne, Associate Professor, LIP6, Sorbonne University, France
15:30 / Tuesday, May 16, 2023