
Impact:
Created an AI feature that improved trust in and understanding of AI by x, potentially increasing AI democratization by 80% compared to before.
NOTE: This is a result of ongoing research with Adobe, Siemens & LMU Munich.
Problem:
Explainable AI (XAI) is a significant factor in the trustworthiness and understandability of AI. Most XAI methods visualize AI behaviour, but these visualizations are often still not understandable by non-AI experts such as business stakeholders.


Solution:
Generative AI, if configured properly, can generate explanations for XAI visualizations that make them easier to understand. An example of how this could look is sketched below.
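As a rough illustration of the idea (not the project's actual implementation), the sketch below formats SHAP feature attributions into a prompt and asks an LLM to produce a plain-language explanation. The OpenAI client, model name, feature names, and attribution values are all illustrative assumptions.

```python
# Minimal sketch: turning SHAP feature attributions into a plain-language
# explanation via an LLM. Model name, prompt, and feature data are
# illustrative assumptions, not the project's actual configuration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical XAI output: per-feature SHAP values for one prediction.
shap_values = {"income": 0.42, "age": -0.13, "tenure": 0.08}

summary = ", ".join(f"{k}: {v:+.2f}" for k, v in shap_values.items())
prompt = (
    "You are explaining an AI model to a business stakeholder with no "
    "AI background. The model approved a loan application. These SHAP "
    f"values show each feature's contribution: {summary}. "
    "Explain in two plain sentences what drove the decision."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```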
Design & Development:
We first created low-fidelity mock-ups and ideas to evaluate the logic. We then created visual prototypes, as shown here:

We then integrated multimodal AI models into the app. The underlying architecture looks like this:
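A minimal sketch of such a pipeline, assuming a multimodal chat-completion API (gpt-4o is an illustrative choice): the XAI visualization is sent as an image together with a stakeholder-friendly prompt, and the model returns a textual explanation. The file name and prompt wording are hypothetical.

```python
# Minimal sketch of the explanation pipeline, assuming a multimodal
# chat-completion API: the XAI visualization is sent as an image
# alongside a stakeholder-friendly prompt. Not the project's actual code.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def explain_visualization(image_path: str) -> str:
    """Ask a multimodal model to explain an XAI chart (e.g. a saliency map)."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Explain this XAI visualization to a non-expert "
                         "business stakeholder in three short sentences."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

print(explain_visualization("saliency_map.png"))  # hypothetical file name
```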
User Testing:
We evaluated the design with 50 users and conducted an A/B hypothesis test. The hypothesis was that GenAI-generated explanations improve trust in and understanding of AI (treatment B) compared to no explanations (treatment A).
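A minimal sketch of how such an A/B comparison could be analyzed, assuming trust was measured on a Likert scale; the ratings below are placeholder values, not the study's data. A one-sided Mann-Whitney U test (a common choice for ordinal ratings, assumed here rather than taken from the source) compares the two treatments.

```python
# Minimal sketch of the A/B analysis, assuming Likert-scale trust ratings.
# All scores below are placeholder values, not the actual study data.
from scipy.stats import mannwhitneyu

# Treatment A: no explanation; Treatment B: GenAI-generated explanation.
trust_a = [3, 2, 4, 3, 3, 2, 4, 3]  # placeholder ratings (1-5)
trust_b = [4, 5, 4, 3, 5, 4, 4, 5]  # placeholder ratings (1-5)

# One-sided test: are ratings under B stochastically greater than under A?
stat, p_value = mannwhitneyu(trust_a, trust_b, alternative="less")
print(f"U = {stat}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: explanations are associated with higher trust.")
```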
The results showed that AI-generated XAI explanations improve trust in and understanding of AI, as seen here:
