Deepomatic Platform

Training models and model versions

In machine learning, models correspond to specific solutions: specific ways to address a problem. Within a view, a model therefore corresponds to a specific set of concepts. Adding or deleting a concept changes the model you are building, because it means you are modeling the problem differently, or creating a different solution to the same problem. By contrast, adding new images to the project only adds data and does not alter the model.
Model versions are instances of a solution to your problem. Within a view, a model version therefore corresponds to a specific trained neural network, with a specific architecture, trained on a specific set of images.
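The distinction can be pictured as a simple data model. This is only an illustrative sketch: the class and field names below are assumptions, not the platform's actual schema.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Model:
    """Defined by its set of concepts: adding or deleting a concept
    yields a different Model."""
    name: str
    concepts: frozenset[str]

@dataclass
class ModelVersion:
    """A specific trained instance of a Model: one architecture,
    one set of training images."""
    model: Model
    architecture: str
    training_image_ids: list[int] = field(default_factory=list)
```

Adding images only changes a `ModelVersion`'s training data; changing `concepts` produces a new `Model` altogether.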

Models Library

From a given view, click on the Library tab in the Models section of the navigation bar to train model versions and see the training history. You will see a list of all the models that have been created, the versions you have trained for each of them, their status, a first indicator of their performance, and a few useful links.

Train a model version

To train a new model version, click on Train a new model version, give your model version a name, and set your parameters in the training options panel. Clicking Create launches the training.
The number of iterations is the number of passes through the neural network, one pass corresponding to the forward propagation of a batch of images through the network and the back-propagation of the error through its layers. The number of images used in each pass is defined by the batch size, which can be found on the Available architectures page.
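The relationship between iterations, batch size, and dataset size can be sketched with some simple arithmetic. This is generic ML bookkeeping, not a platform-specific formula:

```python
def epochs_seen(iterations: int, batch_size: int, train_set_size: int) -> float:
    """Number of full passes ("epochs") over the training set:
    total images processed divided by the size of the training set."""
    images_processed = iterations * batch_size
    return images_processed / train_set_size

# e.g. 10,000 iterations with a batch size of 16 over 20,000 training images:
print(epochs_seen(10_000, 16, 20_000))  # → 8.0
```

In other words, with these example numbers each training image is seen about 8 times over the course of the training.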
The training algorithm will use the annotations of the training set for the corresponding view.
One image can have:
  • Between 1 and n regions and exactly 1 tag per region for a classification task. An image can have several crops, each of which is treated as a training datapoint, and a classification decision is made on each one (when in a children view).
  • Between 1 and n regions and between 0 and n concepts per region for a tagging task. Same as for classification, but a region can have several concepts, or none at all if the annotation is "Without concept" (hard negative). These "without concept" images are used at training time.
  • Between 0 and n regions and exactly 1 concept per region for a detection task. As with the tagging task, an image can serve as a hard negative by having no region at all. For detection tasks, however, these "without concept" images are not used at training time.
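The per-task rules above can be summarised in a small validity check. This is a hypothetical sketch: the task names and the region structure are illustrative assumptions, not the platform's annotation format.

```python
def is_valid_annotation(task: str, regions: list[list[str]]) -> bool:
    """regions is a list of regions; each region is a list of concept names.
    Encodes the constraints described above for each task type."""
    if task == "classification":
        # 1..n regions, exactly 1 tag per region
        return len(regions) >= 1 and all(len(r) == 1 for r in regions)
    if task == "tagging":
        # 1..n regions, 0..n concepts per region ("Without concept" allowed)
        return len(regions) >= 1
    if task == "detection":
        # 0..n regions, exactly 1 concept per region;
        # no region at all = hard negative (skipped at training time)
        return all(len(r) == 1 for r in regions)
    raise ValueError(f"unknown task: {task}")

print(is_valid_annotation("classification", [["cat"]]))    # True
print(is_valid_annotation("tagging", [[]]))                # True (hard negative)
print(is_valid_annotation("detection", []))                # True (hard negative)
print(is_valid_annotation("detection", [["cat", "dog"]]))  # False
```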
You get a progress bar showing the required steps and the status of each.
You also get a graph with several curves built in real time as the training progresses:
  • Total loss (batch) is the loss computed on each individual training batch
  • Learning rate corresponds to the evolution of the value of the learning rate during the training
  • Total loss (train set) is the loss computed over the entire training set
  • Total loss (val set) is the loss computed over the entire validation set
  • Accuracy (train set) is the accuracy computed over the entire training set
  • Accuracy (val set) is the accuracy computed over the entire validation set
The last four metrics are calculated at regular intervals that depend on the total number of iterations: every 1,000 iterations, and at most 10 times in total when the training has more than 10,000 iterations.
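One way to read that schedule is the following sketch. The exact interval logic is our assumption from the description above, not platform code:

```python
def evaluation_interval(total_iterations: int) -> int:
    """Interval (in iterations) between evaluations of the train/val
    metrics: every 1,000 iterations, capped at 10 evaluations overall."""
    if total_iterations <= 10_000:
        return 1_000
    # spread at most 10 evaluations evenly over the whole training
    return total_iterations // 10

print(evaluation_interval(5_000))   # → 1000 (5 evaluations)
print(evaluation_interval(50_000))  # → 5000 (10 evaluations)
```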
These metrics make it possible to spot overfitting: a widening gap between the loss and/or accuracy on the training set and on the validation set indicates that the model is memorising the training data rather than generalising.
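A minimal heuristic for that kind of check, assuming you export the curve values; the threshold and logic are illustrative, not something the platform applies:

```python
def looks_overfit(train_losses: list[float], val_losses: list[float],
                  gap_threshold: float = 0.5) -> bool:
    """Flag potential overfitting: the latest validation loss sits well
    above the training loss while the training loss is still improving."""
    gap = val_losses[-1] - train_losses[-1]
    train_still_improving = train_losses[-1] < train_losses[0]
    return gap > gap_threshold and train_still_improving

# Training loss keeps dropping while validation loss stalls:
print(looks_overfit([1.2, 0.6, 0.2], [1.1, 0.9, 0.9]))  # → True
# Both curves decrease together: no warning.
print(looks_overfit([1.2, 0.6, 0.2], [1.1, 0.7, 0.4]))  # → False
```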
For more information on the advanced options panel, see our Guidebook on how to build your custom AI. For information on all the available models, click on the link below.
Once the training is launched, you can click on the model version to see detailed training information and evaluate the performance of your model.