Seminar Philosophy & computer sciences: Christophe Denis, Information & conviviality in deep neural networks

Tuesday 3 May 2022 (all day) - Tuesday 17 May 2022 (all day)

 

Seminar Tübingen-Nancy
Philosophical aspects of computer sciences – Ethics, Norms & Responsibility

Organisation : Maël Pégny, Reinhard Kahle, Thomas Piecha, Anna Zielinska, Cyrille Imbert & Open Language and Knowledge for Citizens - OLKi
Archives Henri Poincaré - Philosophie et Recherches sur les Sciences et les Technologies / Université de Lorraine 
Universität Tübingen


Christophe Denis

Associate Professor at Sorbonne University – Laboratory of Computer Sciences LIP6
PhD Student in Philosophy at University of Rouen-Normandie

The notions of information and conviviality in deep neural networks – or what about "Explainable AI" or "Trusted AI"?

16 May 2022, Monday
17:00 (CEST/Paris time)
Online seminar
The seminar takes place online. 
Please register by clicking here (for this and for the future meetings of the seminar)
https://forms.gle/papVbAjPoyoGEqTH9

Both the lecture and the discussion will be in English. 


Abstract
The thunderous return of neural networks occurred in a sublime Florentine setting in 2012, during a renowned international computer vision conference. As in previous years, the participants of this conference were invited to test their image recognition techniques. Geoffrey Hinton's team from the University of Toronto was the only one using deep neural networks, and it outperformed the other competitors in two of the three categories of the competition. The audience was stunned by the size of the reduction in prediction error, roughly a factor of three, whereas the algorithms based on the researchers' hand-crafted expertise differed from one another by only a few percent. Other computational scientific disciplines, such as computational fluid dynamics, geophysics, and climatology, have since started to use deep learning methods to predict phenomena that are difficult to address with a classical hypothetico-deductive approach.

Impressed by these deep learning results, an American master's student at the University of Maryland set up an ambitious deep learning project. Its objective was to automatically detect a husky or a wolf in images showing only one of these two animals in its natural setting. The project seemed difficult, as the two animals look very similar, unlike, for example, a cat and a bird. The student and his teacher were amazed by the model's very good results ... until a husky in the snow was classified as a wolf by the deep neural network. Further analysis revealed a disappointing explanation for the very good prediction results: the neural network had not "learned" to distinguish a wolf from a husky, but only to detect snowy settings. Did the machine learning model cheat? How, then, do we build trust between users and AI? To ensure trust, many AI ethics committees recommend that explanations of predictive machine learning outcomes be provided to users. For example, in France, the bioethics law recently passed by the French National Assembly requires that the designers of a medical device based on machine learning explain how it works.
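The husky/wolf anecdote is an instance of what is often called shortcut learning, and it can be reproduced on synthetic data. The sketch below is purely illustrative (the feature names and numbers are invented, not taken from the project described above): a background feature that happens to correlate perfectly with the label during training dominates the learned weights, so a "husky in the snow" is scored as a wolf.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: feature 0 is a weak, noisy animal cue,
# feature 1 indicates a snowy background. In this training data the
# background is perfectly correlated with the label "wolf".
n = 200
labels = rng.integers(0, 2, n)               # 0 = husky, 1 = wolf
animal_cue = labels + rng.normal(0, 1.5, n)  # informative but noisy
snow = labels.astype(float)                  # spurious but perfect
X = np.column_stack([animal_cue, snow])

# Logistic regression fitted by plain gradient descent.
w = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - labels) / n

# The weight on the 'snow' feature dominates the animal cue, so a
# husky photographed in snow is confidently scored as a wolf.
husky_in_snow = np.array([0.0, 1.0])
p_wolf = 1 / (1 + np.exp(-husky_in_snow @ w))
print("weights [animal_cue, snow]:", w)
print("P(wolf | husky in snow):", round(float(p_wolf), 2))
```

Nothing in the optimization is "cheating": the model simply exploits the most reliable correlation available in its training data.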
 
We argue that systematically explaining deep learning to all of its users is not always justified, can be counterproductive, and even raises ethical issues of its own. For example, how can one assess the correctness of an explanation that could be unintentionally permissive, or even manipulative in a fraudulent context? There is therefore a need to revisit information theory (Fisher, Shannon) and the philosophy of information (e.g. Floridi) in the light of deep learning. Such information would allow certain users to produce their own reasoning (most likely an abductive one) rather than receive a ready-made explanation.
Moreover, should we trust a machine learning model at all? Trust means handing over something valuable to someone and relying on them. The corollary is that "the person who trusts is immediately in a state of vulnerability and dependence", all the more so when that trust rests on an explanation whose correctness is difficult to assess.
Last but not least, we strongly believe that using human relationship terms such as trust or fairness in the context of machine learning necessarily induces anthropomorphism, whose adverse effects can include addiction (the Eliza effect) and persuasion rather than information. In contrast, our philosophical and mathematical research direction tries to define conviviality criteria for machine learning based on Ivan Illich's thought. According to Illich, a convivial tool must have the following properties:
• it must generate efficiency without degrading personal autonomy;
• it must create neither slave nor master;
• it must widen the personal radius of action.
As presented in the last part of the talk, neural differential equations, by providing trajectories rather than point predictions, seem to be an efficient mathematical formalism for implementing convivial deep learning tools.
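The contrast between a trajectory and a point prediction can be illustrated with a toy sketch. The fixed vector field below is a hand-set stand-in for a trained network (a real neural differential equation would learn its parameters from data, and would typically use an adaptive solver rather than Euler steps); it is an illustration of the general idea, not the formalism presented in the talk.

```python
import numpy as np

# Toy "neural ODE": dx/dt = f(x), integrated with explicit Euler steps.
# W1 defines a rotation-like vector field; in a trained model these
# weights would be learned, here they are set by hand for illustration.
W1 = np.array([[0.0, -1.0],
               [1.0,  0.0]])

def f(x):
    # A minimal one-layer "network" with tanh as the vector field.
    return np.tanh(W1 @ x)

def trajectory(x0, dt=0.1, steps=50):
    # Euler integration: the output is the whole path of states,
    # not just the final one, so the dynamics remain inspectable.
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * f(xs[-1]))
    return np.array(xs)

path = trajectory(np.array([1.0, 0.0]))
print(path.shape)  # (51, 2): 51 states of a 2-D system
```

Because every intermediate state is available, a user can follow how the system evolves step by step instead of being handed a single opaque output.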


****
Our first season 2021-2022

15 November 2021
Maël Pégny
Mathematizing fairness? On statistical metrics of algorithmic fairness

21 February 2022
Carmela Troncoso
Mismatching concerns and definitions in current trends in machine learning

21 March 2022
Marija Slavkovik
Digital Voodoo Dolls

11 April 2022
Karoline Reinhardt
Dimensions of trust in AI Ethics

16 May 2022
Christophe Denis
Information & conviviality in deep neural networks

For the recording of the seminar, please check here: 
https://www.youtube.com/playlist?list=PL_7w_H-zjjuEqhh4gLTWg5MbmbTfGmXIU

Initial context of the event: 
Project Open Language and Knowledge for citizens (OLKi) [http://lue.univ-lorraine.fr/fr/open-language-and-knowledge-citizens-olki] is involved in the design of new machine learning algorithms dedicated to knowledge extraction from language data, with a focus on transparency, explainability of algorithms, and open and privacy-friendly science.
 
