Uncertainty estimation for xAI
The simplest kind of explanation a model can provide is an estimate of the uncertainty of its prediction. If this uncertainty estimate is accurate enough, the user can trust the model when its confidence is high and request retraining or additional data when its confidence is low.
At SAL, a research group around Federico Pittino from the research unit Collaborative Perception & Learning has tested an approach for uncertainty estimation in the prediction of the volume of fruits and vegetables placed on a table. A user records a short video sequence of the object with a smartphone, and a model has been trained to predict the class of the object and its volume. An approach based on ensembling and a Gaussian negative log-likelihood (GNLL) loss provides the uncertainty estimate, which reached the theoretical 95% coverage at a 2σ threshold.
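To illustrate the general technique, the sketch below trains a small ensemble of regressors with PyTorch's GaussianNLLLoss and merges the members' predictions into a single predictive mean and variance; the network sizes, toy data and volume target are placeholders, not the SAL setup.

import torch
import torch.nn as nn

class GaussianRegressor(nn.Module):
    """Predicts a mean and a variance for the target (e.g. object volume)."""
    def __init__(self, in_dim=16, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)
        self.var_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.body(x)
        mean = self.mean_head(h)
        var = nn.functional.softplus(self.var_head(h)) + 1e-6  # keep variance positive
        return mean, var

# Toy data as a stand-in for features extracted from the video frames.
x = torch.randn(512, 16)
y = x[:, :1] * 2.0 + 0.1 * torch.randn(512, 1)

gnll = nn.GaussianNLLLoss()
ensemble = [GaussianRegressor() for _ in range(5)]
for model in ensemble:
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):
        mean, var = model(x)
        loss = gnll(mean, y, var)  # Gaussian negative log-likelihood
        opt.zero_grad()
        loss.backward()
        opt.step()

# Combine the ensemble into one predictive mean/variance (mixture moments).
with torch.no_grad():
    means = torch.stack([m(x)[0] for m in ensemble])
    vars_ = torch.stack([m(x)[1] for m in ensemble])
    mu = means.mean(0)
    sigma2 = (vars_ + means.pow(2)).mean(0) - mu.pow(2)

# Fraction of targets inside the 2-sigma interval; close to 0.95 if well calibrated.
inside = ((y - mu).abs() <= 2 * sigma2.sqrt()).float().mean()
print(f"2-sigma coverage: {inside.item():.3f}")

The coverage check at the end is what a 2σ threshold is measured against: for a well-calibrated Gaussian predictive distribution, about 95% of the true values should fall within two standard deviations of the predicted mean.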
Hierarchical concept bottleneck
In the context of image segmentation and object classification, in real applications it is sometimes hard to derive models that can accurately and reliably perform a fine-grained classification. Recently, Concept Bottleneck models have been proposed for image classification, partitioning the problem into two stages and thereby defining a hierarchy of concepts. So far, however, little work has been done to investigate the applicability of this approach to other datasets with higher intra-class variability and ambiguity, and to discuss its flexibility for tasks other than whole-image classification.
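As a concrete toy illustration of the two-stage idea, the sketch below first predicts a vector of human-interpretable concepts and then derives the fine class from those concepts alone, so every final decision can be traced back to the predicted concepts; the layer sizes, concept set and labels are placeholders, not the model described in the paper.

import torch
import torch.nn as nn

class ConceptBottleneck(nn.Module):
    """Two-stage classifier: input -> interpretable concepts -> fine class."""
    def __init__(self, in_dim=128, n_concepts=8, n_fine_classes=20):
        super().__init__()
        # Stage 1: predict coarse, human-interpretable concepts (the bottleneck).
        self.concept_head = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_concepts)
        )
        # Stage 2: the fine class is predicted from the concepts alone.
        self.class_head = nn.Linear(n_concepts, n_fine_classes)

    def forward(self, x):
        concept_logits = self.concept_head(x)
        concepts = torch.sigmoid(concept_logits)  # each concept activation in [0, 1]
        class_logits = self.class_head(concepts)
        return concept_logits, class_logits

model = ConceptBottleneck()
x = torch.randn(4, 128)  # e.g. features of detected objects
concept_logits, class_logits = model(x)

# Joint training signal: concept supervision plus fine-class supervision.
concept_targets = torch.randint(0, 2, (4, 8)).float()  # placeholder concept labels
class_targets = torch.randint(0, 20, (4,))              # placeholder fine labels
loss = (nn.functional.binary_cross_entropy_with_logits(concept_logits, concept_targets)
        + nn.functional.cross_entropy(class_logits, class_targets))
print(loss.item())

Because the final classifier only sees the concept vector, an incorrect fine prediction can be inspected and attributed to the concepts that were mis-predicted, which is the source of the added explainability.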
At SAL, Federico Pittino, Vesna Dimitrievska and Rudolf Heer have developed a concept bottleneck model for image segmentation, fine-grained object classification and tracking. All their models have been trained and tested on a dataset of pictures of fridges filled with various objects; however, the method can be applied to any fine-grained classification task. The proposed model makes full use of the hierarchy of concepts, exploiting the relationships between different categories at the same hierarchical level and relying on a novel method for handling multi-label classification. The research group has shown that the performance on fine-grained classification is on par with a regular Mask R-CNN, but with a significant increase in explainability and in the handling of class confusion. New explainability metrics have been proposed to quantitatively evaluate this increase in explainability. They have also demonstrated the effectiveness of the derived Concept Bottleneck features on related tasks, namely the tracking of objects across consecutive pictures in a sequence. Their paper has been published in Engineering Applications of Artificial Intelligence and can be checked out online at
https://www.sciencedirect.com/science/article/pii/S0952197622006649.
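One simple way to reuse such concept features for tracking, shown below as a generic illustration rather than the matching strategy of the paper, is to match the objects detected in consecutive frames by the similarity of their concept vectors.

import numpy as np
from scipy.optimize import linear_sum_assignment

def match_by_concepts(concepts_prev, concepts_next):
    """Match objects across two consecutive frames by concept-vector similarity.

    concepts_prev, concepts_next: arrays of shape (n_objects, n_concepts)
    holding the predicted concept activations of each detected object.
    """
    # Cosine similarity between every pair of objects in the two frames.
    a = concepts_prev / (np.linalg.norm(concepts_prev, axis=1, keepdims=True) + 1e-8)
    b = concepts_next / (np.linalg.norm(concepts_next, axis=1, keepdims=True) + 1e-8)
    similarity = a @ b.T
    # Hungarian assignment on the negated similarity = maximum-similarity matching.
    row, col = linear_sum_assignment(-similarity)
    return list(zip(row.tolist(), col.tolist()))

# Toy example: three objects in frame t, three in frame t+1.
prev = np.array([[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.9]])
next_ = np.array([[0.1, 0.9, 0.0], [0.8, 0.2, 0.1], [0.1, 0.1, 0.95]])
print(match_by_concepts(prev, next_))  # expected: [(0, 1), (1, 0), (2, 2)]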