When an artificial intelligence (AI) or automated system makes a decision, the reasons behind that decision are often opaque and difficult to interpret. To trust the outcomes of AI systems, one needs to be able to present the rationale behind their decisions in an understandable and transparent way. Visualisation and interaction are two key means of addressing this issue. This research domain is called Explainable Artificial Intelligence, and it focuses on creating systems that help people interpret the reasoning behind decision making.