Alibi aims to be the go-to library for ML model interpretability and monitoring. Developing a high-quality, production-ready library that achieves this poses multiple challenges. In addition to high-quality reference implementations of the most promising algorithms, we need extensive documentation and case studies comparing the different interpretability methods and their respective pros and cons. A clean and usable API is also a priority. Current focus areas include:
- Ongoing optimizations of existing algorithms (speed, parallelisation, explanation quality)
- Finalize a unified API (GitHub PR)
- Initial visualizations and visualization backends (GitHub issue)
- White-box explanation methods (e.g. Integrated Gradients)
- Support both TensorFlow and PyTorch for white-box methods
- Explanations for regression models
- Explanations for sequential and structured data
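To make the unified API item concrete, here is a minimal sketch of what a shared explainer interface could look like. This is a hypothetical illustration, not the API from the referenced PR: the names `Explainer`, `Explanation`, and `ConstantExplainer` are all assumptions made for this example.

```python
from abc import ABC, abstractmethod

class Explanation:
    """Container coupling metadata about the method with its payload.

    Hypothetical: field names here are illustrative only.
    """
    def __init__(self, meta: dict, data: dict):
        self.meta = meta  # e.g. method name and hyperparameters
        self.data = data  # method-specific results, e.g. attributions

class Explainer(ABC):
    """Common base class so every method shares one calling convention."""
    @abstractmethod
    def explain(self, X) -> Explanation:
        """Return an Explanation for a batch of instances X."""

class ConstantExplainer(Explainer):
    """Trivial concrete explainer: assigns a fixed attribution per feature."""
    def __init__(self, value: float):
        self.value = value

    def explain(self, X) -> Explanation:
        attributions = [[self.value] * len(row) for row in X]
        return Explanation(meta={"name": "ConstantExplainer"},
                           data={"attributions": attributions})

exp = ConstantExplainer(0.5).explain([[1, 2, 3]])
print(exp.meta["name"], exp.data["attributions"])
```

The design point a unified API buys is that downstream consumers (visualization backends, serialization, monitoring) can handle every method through the same `Explanation` container instead of per-method return types.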
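As a flavour of the white-box methods mentioned above, the following is a small NumPy sketch of Integrated Gradients, which attributes a prediction by averaging gradients along a straight path from a baseline to the input. This is an illustrative toy (analytic gradient of a quadratic model), not the library's implementation; the function names are assumptions for this example.

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, n_steps=100):
    """Riemann-sum (midpoint rule) approximation of Integrated Gradients.

    grad_f: gradient of a scalar-valued model; x, baseline: 1-D arrays.
    Attribution_i = (x_i - baseline_i) * mean_alpha grad_f(baseline + alpha * (x - baseline))_i
    """
    alphas = (np.arange(n_steps) + 0.5) / n_steps          # midpoints in (0, 1)
    path = baseline + alphas[:, None] * (x - baseline)     # points on the straight path
    avg_grad = np.mean([grad_f(p) for p in path], axis=0)  # average path gradient
    return (x - baseline) * avg_grad

# Toy white-box model: f(x) = sum(x**2), with analytic gradient 2x
f = lambda x: np.sum(x ** 2)
grad_f = lambda x: 2 * x

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(grad_f, x, baseline)
# Completeness property: attributions sum to f(x) - f(baseline)
print(attr, attr.sum(), f(x) - f(baseline))
```

For this quadratic model the attributions come out as `x**2` per feature, and their sum matches `f(x) - f(baseline)`, illustrating the completeness property that makes the method attractive. A real implementation would obtain the gradients from the framework (hence the TensorFlow and PyTorch support noted above) rather than analytically.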