Understanding models

Use your predictions to open the black box.

Once you have trained a model version, beyond looking at the evaluation metrics to estimate how good your model is overall, the next challenge is to understand its strengths and weaknesses: on which kinds of images it performs well, and on which kinds it needs to improve.

To do so, the best way is to click on the model version you want to study, and then on the Gallery tab. You then have access to the four tabs detailed below.

The four tabs display:

  • For classification and tagging views, a gallery of images.

  • For detection views, a gallery of boxes (crops of images).

In both cases, you can only see images that were already in your view when you launched the training of the model version. This means that if you have added images after training your model version, these images will not be displayed on any of the four tabs.

To switch between training and validation, use the toggle at the top right corner of the page.

All

What is displayed?

On the All tab are displayed all the predictions of your model version, regardless of the annotations.

Sorting

To choose the images you want to display, you must use the filter to select the concept for which you want to see the predictions.

The images or boxes displayed are sorted by the prediction score for the concept you chose, highest scores first. You therefore see the model version's most certain predictions first, i.e. those on which the neural network has no difficulty making a prediction. These are also generally the easiest images to analyze (very little ambiguity).
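
As a rough illustration of that ordering, sorting the predictions for a selected concept by descending score might look like the following minimal sketch (the data structure and concept name are purely hypothetical, not the platform's API):

```python
# Hypothetical sketch: sort predictions for one concept, highest score first,
# which is the ordering used on the All tab.
predictions = [
    {"image": "img_001.jpg", "concept": "antenna", "score": 0.98},
    {"image": "img_002.jpg", "concept": "antenna", "score": 0.41},
    {"image": "img_003.jpg", "concept": "antenna", "score": 0.87},
]

selected_concept = "antenna"
ordered = sorted(
    (p for p in predictions if p["concept"] == selected_concept),
    key=lambda p: p["score"],
    reverse=True,  # most certain predictions displayed first
)
```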

Fast forward to threshold

Beyond the images for which the model has no difficulty making a prediction, it is essential to look at the images whose prediction score is around the threshold: these are the images or boxes on which the model struggles the most. To jump to those images, use the shortcut button at the top right of the page.

How is the threshold determined? It is calculated automatically to minimize errors on your validation set, using the model version's predictions and the annotations as ground truth.
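
For intuition only, here is a minimal sketch of how such a threshold could be computed, assuming boolean annotations and a simple error count; it is an illustration, not the platform's actual implementation:

```python
# Hypothetical sketch: choose the threshold that minimizes errors
# (false positives + false negatives) on a validation set.
def best_threshold(scores, labels):
    """scores: predicted scores; labels: ground-truth booleans (annotations)."""
    candidates = sorted(set(scores))
    def errors(t):
        false_pos = sum(1 for s, y in zip(scores, labels) if s >= t and not y)
        false_neg = sum(1 for s, y in zip(scores, labels) if s < t and y)
        return false_pos + false_neg
    return min(candidates, key=errors)

# Example with a tiny validation set
scores = [0.95, 0.80, 0.62, 0.40, 0.15]
labels = [True, True, False, True, False]
print(best_threshold(scores, labels))  # -> 0.4
```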

Matched annotation

What is displayed?

On the Matched annotation tab are displayed all the images or boxes of your model version for which annotation and prediction match. For classification and tagging, only positive annotations (and of course correct predictions) are displayed.
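
For classification and tagging, this tab and the two following ones (Missing and Extra annotation) roughly correspond to the three ways a prediction and an annotation can agree or disagree at the threshold. Here is a hedged sketch of that bucketing, assuming boolean annotations; it is an illustration only, not the platform's logic:

```python
# Hypothetical sketch: decide which tab a classification / tagging example
# would fall into, given its prediction score, its annotation, and the threshold.
def bucket(score, annotated, threshold):
    predicted = score >= threshold
    if predicted and annotated:
        return "Matched annotation"   # prediction and annotation agree
    if predicted and not annotated:
        return "Missing annotation"   # predicted, but not annotated
    if annotated:
        return "Extra annotation"     # annotated, but not predicted
    return None                       # neither predicted nor annotated

print(bucket(0.92, True, 0.5))   # Matched annotation
print(bucket(0.92, False, 0.5))  # Missing annotation
print(bucket(0.18, True, 0.5))   # Extra annotation
```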

Filtering & Sorting

When changing the filter, the images displayed are changed accordingly. You always visualize the images or boxes corresponding to the selected annotated (and thus predicted) concept.

As in the All tab, images or boxes are sorted by prediction score, with the simplest images or boxes displayed first.

Missing annotation

What is displayed?

On the Missing annotation tab are displayed all the images or boxes predicted by your model version that do not match any annotation. The layout is similar to that of the previous tabs.

Filtering & Sorting

When changing the filter, the images displayed are changed accordingly. You always visualize the images or boxes corresponding to the selected predicted concept.

As in the two previous tabs, images or boxes are sorted by prediction score, with the simplest images or boxes displayed first.

Extra annotation

What is displayed?

On the Extra annotation tab are displayed all the annotated images or boxes that did not match any prediction from your model version. The layout is similar to that of the previous tabs.

Filtering & Sorting

When changing the filter, the images displayed are changed accordingly. You always visualize the images or boxes corresponding to the selected annotated concept.

The images or boxes are also sorted in this tab:

  • For classification and tagging views, images are sorted according to the predicted score for the selected concept.

  • For detection views, there is no prediction score, because the annotated box precisely did not match any predicted box. However, we still use the predictions from the model version: we look at the predicted boxes whose score is below the threshold, and display the annotated boxes in reverse order of their best match with those below-threshold boxes (see the sketch after this list).
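
Here is a rough illustration of that detection-view ordering; the box format, the IoU matching, and the function names are assumptions, not a description of the platform's internals:

```python
# Hypothetical sketch: rank "extra annotation" boxes by their best overlap
# (IoU) with predicted boxes whose score is below the threshold.
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def order_extra_annotations(annotated_boxes, predictions, threshold):
    """Rank annotated boxes by their best match among below-threshold predictions."""
    low_score_boxes = [p["box"] for p in predictions if p["score"] < threshold]
    def best_match(box):
        return max((iou(box, p) for p in low_score_boxes), default=0.0)
    # The display direction follows the "reverse order" rule described above;
    # here we simply sort by the best-match value (an assumption).
    return sorted(annotated_boxes, key=best_match, reverse=True)
```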
