Training options

The three key training parameters are the neural network architecture, the number of epochs (historically expressed as a number of iterations), and the initial learning rate.

Architecture

To select a neural network architecture, first read Neural networks explained to understand how neural networks work, then consult Choosing the right architecture to decide on the best option for your problem.

Iterations

The number of iterations is the number of passes through the neural network, where one pass consists of the forward propagation of a batch of images through the network followed by the backpropagation of the error through its layers.

The number of images used in each pass is defined by the batch size, which is listed for each architecture on the Available architectures page.

Epochs

One epoch corresponds to the number of iterations necessary to pass every image in the training set through the network once.

Training can no longer be initiated with the number of iterations as a hyperparameter; it is now launched with a specified number of epochs.

A good value for the number of epochs is between 6 and 15.

Iteration vs Epoch

The relationship between the number of iterations and the number of epochs is the following:

Number of iterations per epoch = number of training images / batch size
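
As an illustration, the arithmetic below computes the per-epoch and total iteration counts for a hypothetical dataset; the image count, batch size, and epoch count are made-up values, not platform defaults.

```python
import math

# Hypothetical values for illustration only.
num_training_images = 10_000  # size of the training set
batch_size = 32               # taken from the chosen architecture's spec
num_epochs = 10               # within the recommended 6-15 range

# One epoch = enough iterations to see every training image once.
iterations_per_epoch = math.ceil(num_training_images / batch_size)
total_iterations = num_epochs * iterations_per_epoch

print(iterations_per_epoch)  # 313
print(total_iterations)      # 3130
```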

Optimizer and Learning Rate

The optimizer is the algorithm responsible for training the model: the rule followed to update the model's parameters in order to improve its performance.

The literature offers several such algorithms, most of which use the gradient of the loss as the rule for updating the parameters.
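
For intuition, the simplest such rule is plain gradient descent, where each parameter moves a small step against its gradient. The sketch below is a generic illustration of that update rule, not the platform's internal implementation.

```python
# Minimal sketch of a gradient-based update rule (plain SGD).
# `params` and `grads` are parallel lists of floats; `learning_rate`
# scales how far each parameter moves against its gradient.
def sgd_step(params, grads, learning_rate):
    return [p - learning_rate * g for p, g in zip(params, grads)]

# Example: one update step on two parameters.
new_params = sgd_step([0.5, -1.2], [0.1, -0.4], learning_rate=0.01)
print(new_params)  # [0.499, -1.196]
```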

On the platform, each architecture comes with a default optimizer and learning rate, chosen as the result of a benchmark campaign.

For Classification and Tagging tasks, you can choose between the following optimizers:

  • Momentum (SGD)
  • Nadam
  • Adam
  • Rectified Adam
  • YOGI
  • RMS Prop
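
For readers who want to relate these names to common tooling, the snippet below shows how the same optimizer families are instantiated in Keras and TensorFlow Addons. This is purely illustrative, not the platform's internal code, and the learning rates shown are the libraries' defaults rather than Deepomatic's benchmarked values.

```python
import tensorflow as tf
import tensorflow_addons as tfa

# Library equivalents of the optimizers listed above (illustrative only).
optimizers = {
    "Momentum (SGD)": tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
    "Nadam": tf.keras.optimizers.Nadam(learning_rate=0.001),
    "Adam": tf.keras.optimizers.Adam(learning_rate=0.001),
    "Rectified Adam": tfa.optimizers.RectifiedAdam(learning_rate=0.001),
    "YOGI": tfa.optimizers.Yogi(learning_rate=0.01),
    "RMS Prop": tf.keras.optimizers.RMSprop(learning_rate=0.001),
}
```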

For Classification and Tagging, changing the optimizer automatically updates the learning rate to a recommended value, which you can still override if you want to experiment.

For Object Detection tasks, changing the optimizer does not modify the learning rate. We recommend keeping each architecture's default optimizer and its default learning rate.
