TAIGERS
Transparency in Artificial Intelligence: considerinG Explainability, useR and System factors

The increasing demand for transparency in AI is accompanied by growing research in this area. At the same time, this research faces the challenge of integrating social science perspectives with approaches that have so far been strongly computer science-oriented. The disciplines rarely arrive at common answers: the technical side addresses the topic under terms such as explainability or XAI, while the users' perspective is studied under understandability and trust.
In the TAIGERS project, the IMA, together with the Chair of Communication Studies (CS), brings these perspectives together. The aim is to connect individual user factors with requirements on the system. Findings from focus groups and interviews with domain experts are validated in scenario-based surveys and online experiments and discussed with technical experts. The result is a first attempt at a domain-independent framework for transparency in AI, one that makes the human perspective on transparent AI usable for computer scientists and engineers.