Explanations are crucial for enhancing both the trustworthiness of a wide range of systems and the traceability of their responses. With the ever-increasing complexity of software and the recent, rapid progress in artificial intelligence (AI), systems are becoming more and more opaque to both users and developers. Such black-box systems are problematic for both parties. Therefore, explainability plays an increasingly important role, particularly in the area of AI.
In this demo, we present an approach for post-hoc explainability within component-based, AI-enhanced software systems, i.e., explaining the behavior of components in general (not limited to AI models). To do this, we rely on the data that the system processed and/or that reflects its intermediate processing steps. Accordingly, (post-hoc) explanations are created for each component, regardless of whether it is a black-box component.
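To illustrate the idea, the following minimal sketch (our assumption of the general pattern, not the demo's actual code) derives a textual, post-hoc explanation per component from recorded intermediate data; all names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class ProcessingRecord:
    component: str    # name of the (possibly black-box) component
    input_data: str   # data the component received
    output_data: str  # data the component produced

def explain(records: list[ProcessingRecord]) -> str:
    """Create a post-hoc explanation from the recorded intermediate steps,
    independent of whether the component itself is a black box."""
    return "\n".join(
        f"Component '{r.component}' transformed '{r.input_data}' "
        f"into '{r.output_data}'."
        for r in records
    )

print(explain([
    ProcessingRecord("NamedEntityRecognizer",
                     "Who wrote Hamlet?",
                     "Hamlet -> dbr:Hamlet"),
]))
```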
The demo works with any Qanary system and is built in Python using the Streamlit library.
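For orientation, a Streamlit frontend of this kind might be structured as follows (a hedged sketch with placeholder logic; it is not the demo's source code):

```python
import streamlit as st

st.title("Qanary Explanations")
question = st.text_input("Enter your question:")

if st.button("Run Qanary process") and question:
    # Placeholder: here, the demo would trigger the Qanary pipeline for the
    # question and build explanations from the data each component produced.
    st.subheader("Explanations per component")
    st.write(f"(explanations for the processing of '{question}' would appear here)")
```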
This demo is available at https://wse-research.org/qanary-explanations. To run it locally, first install the required dependencies:
pip install -r requirements.txt
Note: If you are using a virtual environment, make sure to activate it before running the command.
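For example, a virtual environment can be created and activated with Python's standard tooling (the directory name venv is just a common convention):

python -m venv venv
source venv/bin/activate

On Windows, use venv\Scripts\activate instead.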
Then, start the Streamlit application:

python -m streamlit run qanary-explainability-frontend.py --server.port=8501
After that, you can access the application at http://localhost:8501.
Alternatively, the application is available on Docker Hub for free use in your environment:
docker run --rm -p 8501:8501 --name qanary-explainability-frontend --env-file=service_config/files/env qanary-explainability-frontend:latest
Now, you can access the application at http://localhost:8501.
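The env file referenced above configures the service. Purely for illustration (the variable name below is hypothetical; please consult the repository for the actual configuration keys), such a file could look like this:

# hypothetical example content of service_config/files/env
QANARY_PIPELINE_URL=http://localhost:8080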
We are happy to receive your contributions. Please create a pull request or an issue. As this tool is published under the MIT license, feel free to fork it and use it in your own projects.