Training models and model versions

KEY TERMS

Models in machine learning correspond to specific solutions: specific ways to address a problem. Within a view, a model therefore corresponds to a specific set of concepts. Adding or deleting a concept changes the model you are building, because it means you are modeling the problem differently, or creating a different solution to the same problem. In contrast, adding new images to the project only adds data and does not alter the model.

Model Versions are instances of a solution to your problem. Within a view, a model version therefore corresponds to a specific trained neural network, with a specific architecture, trained on a specific set of training images.
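
To make the distinction concrete, here is a minimal sketch assuming a simple data model; the class and field names are illustrative and not part of any Deepomatic API:

    from dataclasses import dataclass

    # Hypothetical data model illustrating the key terms above;
    # these names are not part of the Deepomatic platform.

    @dataclass(frozen=True)
    class Model:
        """A way of modeling the problem, defined by its set of concepts."""
        name: str
        concepts: frozenset  # adding or removing a concept means a different model

    @dataclass
    class ModelVersion:
        """A trained instance of a model."""
        model: Model
        architecture: str          # the specific network architecture
        training_image_ids: tuple  # the specific set of training images

    # Adding images only changes the data a future version is trained on;
    # changing the concept set yields a different Model altogether.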

Models Library

From a given view, to train model versions and see the training history, click on the Library tab in the Models section of the navigation bar. You will see a list of all the models that have been created, the different versions trained for each of them, their status, a first indicator of their performance, and a few useful links.

Train a model version

To train a new model version, click on Train a new model version, give your model version a name, and choose your parameters in the training options panel. Clicking Create launches the training: a progress bar shows the different steps required and the status of their advancement.

To better understand and decide on the options, see the following sections:

  • Training options
  • Image resizer options
  • Dataset options
  • Data Augmentation

Which annotations are used during training? What about images without concepts?

The training algorithm uses the training set's annotations for the view in question.

  • For a classification view, all of the images and annotations from the training set are used. In a classification view, an image cannot be "without concept".

  • For a tagging view, images can be annotated as "without concept", which means that they do not validate any concept in the view. During training, all images and annotations are used, including those "without concept".

  • For a detection view, images can be marked as "without concept", indicating that the image contains no concepts. At training time, only images with at least one bounding box are used; images "without concept" are not.
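
As a minimal sketch of these rules (not Deepomatic code; the data layout is assumed), a function selecting which images each view type uses at training time could look like this:

    def images_used_for_training(view_type, images):
        """Each image is a dict with an 'annotations' list; an empty list
        means the image was annotated as "without concept"."""
        if view_type == "classification":
            # Every image carries a concept; "without concept" cannot occur.
            return images
        if view_type == "tagging":
            # "Without concept" images are kept: they teach the model that
            # no concept in the view applies to them.
            return images
        if view_type == "detection":
            # Only images with at least one bounding box are used.
            return [image for image in images if image["annotations"]]
        raise ValueError("unknown view type: %s" % view_type)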

Training progress - Metrics during training

During training, several metrics are displayed:

  • Learning rate shows how the value of the learning rate evolves during the training.

  • Total loss (batch) is computed on each training batch and corresponds only to the loss of that specific batch. The point of this metric is to confirm that the loss decreases over time, i.e. that the model is training correctly.

  • Total loss (val set) is the loss computed over the entire validation set, at the end of each epoch.

  • Accuracy (val set) is the accuracy computed over the entire validation set. In the particular case of tagging, the accuracy is computed at a threshold of 0.5, so this metric should be taken with a grain of salt for that task.

  • mAP (val set) is the mean Average Precision computed over the entire validation set. This metric is not used for classification tasks.
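
To illustrate the 0.5 threshold caveat for tagging, here is a hedged sketch (names and data are illustrative only) of how a fixed threshold turns per-concept scores into binary predictions before accuracy is computed:

    def tagging_accuracy(scores, truths, concepts, threshold=0.5):
        correct = total = 0
        for image_scores, true_tags in zip(scores, truths):
            for concept in concepts:
                predicted = image_scores.get(concept, 0.0) >= threshold
                actual = concept in true_tags
                correct += predicted == actual
                total += 1
        return correct / total

    print(tagging_accuracy(
        scores=[{"cat": 0.9, "dog": 0.4}, {"cat": 0.2, "dog": 0.7}],
        truths=[{"cat"}, {"dog"}],
        concepts=["cat", "dog"],
    ))  # -> 1.0

A model whose scores hover near 0.5 can see this accuracy swing widely with small score changes, which is why the metric should be read with caution for tagging.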

The validation metrics are computed at the end of each epoch, that is, after a number of iterations equal to the size of the training set divided by the batch size. These measures make it possible to spot a difference in loss behavior between the training set and the validation set, and hence to detect over-fitting of the model.
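
As a quick worked example with hypothetical numbers:

    import math

    # With 10,000 training images and a batch size of 32, validation
    # metrics are computed every ceil(10000 / 32) iterations.
    iterations_per_epoch = math.ceil(10_000 / 32)
    print(iterations_per_epoch)  # 313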

For classification, the model is saved at the iteration with the highest accuracy on the validation set; for tagging, mean Average Precision is used instead. For more details on these metrics, see Metrics explained: Classification and Tagging and Metrics explained: Detection.
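
The checkpoint-selection logic described above can be sketched as follows (a generic training loop under assumed names; the platform does this internally):

    def train_keeping_best(epochs, validate, save, task):
        """validate(epoch) returns a dict of validation metrics;
        save(epoch) persists the current weights."""
        metric = "accuracy" if task == "classification" else "mAP"
        best = float("-inf")
        for epoch in range(epochs):
            score = validate(epoch)[metric]
            if score > best:
                best = score
                save(epoch)  # keep only the best-scoring iteration
        return best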

Once the training is over, you can click on the model version to get more information about the training and evaluate the performance of your model (see Evaluating performances).
