Understanding models
Use your predictions to open the black box.
Once you have trained a model version, beyond looking at the evaluation metrics to estimate its overall quality, the next challenge is to understand its strengths and weaknesses: which kinds of images it handles well, and which kinds it still struggles with.
The best way to do so is to click on the model version you want to study, then on the Gallery tab. You then have access to the four tabs detailed below.
All four tabs display the same kind of content:
For classification and tagging views, the tabs show a gallery of images.
For detection views, the tabs show a gallery of boxes (cropped regions of images).
In both cases, you only see images that were already in your view when you launched the training of the model version. Images added after training will therefore not be displayed on any of the four tabs.
To switch between the training and validation sets, use the toggle at the top right corner of the page.
All
What is displayed?
The All tab displays all the predictions of your model version, regardless of the annotations.
Sorting
To choose which images are displayed, use the filter to select the concept whose predictions you want to see.
The images or boxes are sorted according to the prediction score for the selected concept, highest scores first. You therefore see the most certain predictions of your model version first, i.e. those the neural network makes with no difficulty. These are also generally the easiest images to analyze (very little ambiguity).
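To make the ordering concrete, here is a minimal sketch in Python. The `predictions` records and field names are hypothetical, for illustration only; they are not the platform's actual API:

```python
# Hypothetical prediction records, for illustration only.
predictions = [
    {"image": "img_001.jpg", "concept": "cat", "score": 0.97},
    {"image": "img_002.jpg", "concept": "cat", "score": 0.42},
    {"image": "img_003.jpg", "concept": "dog", "score": 0.88},
]

selected_concept = "cat"  # the concept chosen in the filter

# Keep only predictions for the selected concept, highest scores first.
gallery = sorted(
    (p for p in predictions if p["concept"] == selected_concept),
    key=lambda p: p["score"],
    reverse=True,
)

for p in gallery:
    print(p["image"], p["score"])  # img_001.jpg 0.97, then img_002.jpg 0.42
```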
Fast forward to threshold
Beyond the images for which the model has no difficulty making a prediction, it is essential to look at the images with a prediction score around the threshold: these are the images or boxes the model struggles with the most. To jump to them directly, use the shortcut button at the top right of the page.
How is the threshold determined? It is calculated automatically to minimize the number of errors on your validation set, using the predictions of the model version and the annotations as ground truth.
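As an illustration, a threshold minimizing errors on a validation set could be computed as in the sketch below. It assumes binary ground-truth labels and a brute-force search over candidate thresholds; `best_threshold` is a hypothetical helper, and the platform's actual computation may differ:

```python
def best_threshold(scores, labels):
    """Return the threshold that minimizes errors on a validation set.

    scores: predicted scores for one concept (floats in [0, 1])
    labels: ground-truth annotations (True if the concept is present)

    Simplified sketch: try each candidate threshold and count
    misclassifications (false positives + false negatives).
    """
    candidates = sorted(set(scores)) + [1.0]
    best_t, best_errors = 0.5, float("inf")
    for t in candidates:
        errors = sum(
            (s >= t) != label  # prediction disagrees with ground truth
            for s, label in zip(scores, labels)
        )
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

# Example: scores from a validation set and their annotations.
print(best_threshold([0.1, 0.4, 0.6, 0.9], [False, False, True, True]))  # 0.6
```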
Matched annotation
What is displayed?
The Matched annotation tab displays all the images or boxes for which the annotation and the prediction of your model version match. For classification and tagging, only positive annotations (and of course the corresponding correct predictions) are displayed.
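For classification and tagging, a "match" can be sketched as follows, assuming the automatically computed threshold is what turns a score into a prediction (the `THRESHOLD` value and the helper name are hypothetical):

```python
THRESHOLD = 0.6  # hypothetical automatically computed threshold for this concept

def is_matched(score, annotated_positive):
    """Classification/tagging match: the concept is annotated as present
    and the model also predicts it above the threshold."""
    return annotated_positive and score >= THRESHOLD

print(is_matched(0.85, True))   # True: shown on the Matched annotation tab
print(is_matched(0.85, False))  # False: predicted but not annotated
```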
Filtering & Sorting
Changing the filter updates the display accordingly: you always see the images or boxes corresponding to the selected annotated (and thus predicted) concept.
As in the All tab, images or boxes are sorted by prediction score, with the easiest ones displayed first.
Missing annotation
What is displayed?
The Missing annotation tab displays all the images or boxes predicted by your model version that don't match the annotation. It is similar to the Error spotting tabs.
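Continuing the same hypothetical sketch, a missing annotation is a prediction above the threshold with no corresponding positive annotation:

```python
THRESHOLD = 0.6  # hypothetical automatically computed threshold for this concept

def is_missing_annotation(score, annotated_positive):
    """The model predicts the concept above the threshold, but the image
    or box is not annotated with it: either a model error (false
    positive) or an annotation that was genuinely forgotten."""
    return score >= THRESHOLD and not annotated_positive

print(is_missing_annotation(0.85, False))  # True: shown on this tab
```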
Filtering & Sorting
Changing the filter updates the display accordingly: you always see the images or boxes corresponding to the selected predicted concept.
As in the two previous tabs, images or boxes are sorted by prediction score, with the easiest ones displayed first.
Extra annotation
What is displayed?
The Extra annotation tab displays all the annotated images or boxes that haven't been matched by a prediction of your model version. It is similar to the Error spotting tabs.
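In the same hypothetical sketch, an extra annotation is the symmetric case: the concept is annotated but not predicted above the threshold:

```python
THRESHOLD = 0.6  # hypothetical automatically computed threshold for this concept

def is_extra_annotation(score, annotated_positive):
    """The image or box is annotated with the concept, but the model does
    not predict it above the threshold (a likely false negative)."""
    return annotated_positive and score < THRESHOLD

print(is_extra_annotation(0.42, True))  # True: shown on this tab
```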
Filtering & Sorting
Changing the filter updates the display accordingly: you always see the images or boxes corresponding to the selected annotated concept.
The images or boxes on this tab are also sorted:
for classification and tagging views, images are sorted according to the predicted score for the selected concept.
for detection views, there is no prediction score to sort by, since the annotated box precisely didn't match any predicted box. However, we still use the model version's predictions: for each annotated box, we look for its best match among the predicted boxes whose score is below the threshold, and display the boxes in reverse order of that best match (see the sketch below).
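The "best match" between an annotated box and the sub-threshold predicted boxes can be thought of in terms of overlap. The sketch below assumes IoU (intersection over union) as the matching criterion and the best-matching box's prediction score as the sort key; both are assumptions, as the exact definition is not given here:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def best_match_score(annotated_box, sub_threshold_predictions):
    """Score of the sub-threshold predicted box that best overlaps the
    annotated box (an assumption about how 'best match' is defined);
    0.0 if nothing overlaps at all."""
    best = max(
        sub_threshold_predictions,
        key=lambda p: iou(annotated_box, p["box"]),
        default=None,
    )
    if best is None or iou(annotated_box, best["box"]) == 0.0:
        return 0.0
    return best["score"]

# Unmatched annotated boxes, displayed in reverse order of their best
# match (here interpreted as lowest best-match score first).
annotated_boxes = [(10, 10, 50, 50), (100, 100, 160, 160)]
sub_threshold = [{"box": (12, 12, 48, 48), "score": 0.35}]
gallery = sorted(annotated_boxes, key=lambda b: best_match_score(b, sub_threshold))
```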