Deepomatic Platform

Training models and model versions

Models in Machine Learning correspond to specific solutions, specific ways to address a problem. Within a view, a model therefore corresponds to a specific set of concepts. Adding or deleting a concept changes the model you are building, because it means you are modeling the problem differently, i.e. creating a different solution to the same problem. In contrast, adding new images to the project only adds data; it does not alter the model.
Model Versions are instances of a solution to your problem. Within a view, a model version therefore corresponds to a specific trained neural network, with a specific architecture, trained on a specific set of training images.

Models Library

From a given view, to train model versions and review training history, click the Library tab in the Models section of the navigation bar. You will see a list of all the models that have been created, the versions trained for each of them, their status, a first indicator of their performance, and a few useful links.

Train a model version

To train a new model version, click on Train a new model version, give a name to your model version, and decide on your parameters in the training options panel. By clicking on Create, you launch the training.
To better understand and decide on the options, go to the following sections:
Which annotations are used during a training? How about images without concepts?
The training algorithm will use the training set's annotations for the appropriate view.
  • For a classification view, all of the images and annotations from the training set are used. In a classification view, an image cannot be "without concept".
  • For a tagging view, images can be annotated as "without concept", meaning that they validate none of the concepts in the view. During training, all images and annotations are used, including those "without concept".
  • For a detection view, images can be marked as "without concept", indicating that the image contains none of the view's concepts. At training time, only images with at least one bounding box are used; images "without concept" are ignored.
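The three rules above can be sketched as a small filter. This is an illustrative example only, not the platform's actual API: the view-type names and the image dictionaries are assumptions made for the sketch.

```python
# Sketch of which annotated images each view type trains on.
# The data structures here are illustrative, not Deepomatic's API.

def training_images(view_type, images):
    """Return the subset of images used at training time.

    `images` is a list of dicts like
    {"annotations": [...], "without_concept": bool}.
    """
    if view_type == "classification":
        # Every image carries a concept; nothing is excluded.
        return images
    if view_type == "tagging":
        # "Without concept" images are kept: they teach the model
        # that none of the tags applies.
        return images
    if view_type == "detection":
        # Only images with at least one bounding box are used;
        # "without concept" images have no boxes and are skipped.
        return [img for img in images if img["annotations"]]
    raise ValueError(f"unknown view type: {view_type}")


images = [
    {"annotations": [], "without_concept": True},
    {"annotations": [{"concept": "cat", "box": (0, 0, 10, 10)}],
     "without_concept": False},
]
print(len(training_images("tagging", images)))    # 2: both images are used
print(len(training_images("detection", images)))  # 1: only the boxed image
```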

Training progress - Metrics during training

You get a progress bar with the different steps required and a status on their advancement.
Example of metrics during training for a Tagging model
During training, several metrics are displayed:
  • Learning rate shows the evolution of the learning rate value over the course of the training.
  • Total loss (batch) is calculated on each training batch and corresponds only to the loss of that specific batch. Its main interest is to confirm that it decreases over time, i.e. that the model is training correctly.
  • Total loss (val set) is the loss computed over the entire validation set, at the end of each epoch.
  • Accuracy (val set) is the accuracy computed over the entire validation set. In the particular case of Tagging, the accuracy is computed for a threshold of 0.5, so this metric should be taken with a grain of salt for that task.
  • mAP (val set) is the Mean Average Precision computed over the entire validation set. This metric is not used for the Classification task.
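To make the fixed-threshold caveat for Tagging concrete, here is a minimal sketch of accuracy at a 0.5 threshold. The function name, score layout, and numbers are invented for illustration; they are not part of the platform.

```python
# Illustrative accuracy at a fixed 0.5 threshold, as described above
# for Tagging. A tag is predicted when its score reaches the threshold.

def tagging_accuracy(scores, labels, threshold=0.5):
    """Fraction of (image, concept) predictions matching the ground truth.

    scores[i][j]: model confidence that image i carries concept j.
    labels[i][j]: 1 if the concept truly applies to image i, else 0.
    """
    correct = total = 0
    for img_scores, img_labels in zip(scores, labels):
        for s, y in zip(img_scores, img_labels):
            correct += int((s >= threshold) == bool(y))
            total += 1
    return correct / total


scores = [[0.9, 0.2], [0.4, 0.8]]  # two images, two concepts each
labels = [[1, 0], [1, 1]]
print(tagging_accuracy(scores, labels))  # 0.75: 3 of 4 predictions correct
```

Note that the 0.4 score on a true concept counts as an error at threshold 0.5 but would be correct at threshold 0.3, which is why a single-threshold accuracy can be misleading for tagging.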
At the end of each epoch, i.e. after a number of iterations equal to the size of the training set divided by the batch size, the metrics on the validation set are computed. Comparing the loss behavior on the training set with the loss behavior on the validation set makes it possible to detect over-fitting of the model.
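As a quick worked example of the epoch arithmetic above (the numbers are made up, and rounding up the last partial batch is an assumption of this sketch):

```python
import math

# One epoch = one full pass over the training set, so validation
# metrics are computed every train_size / batch_size iterations
# (rounded up here to account for a final partial batch).
train_size = 10_000   # illustrative training set size
batch_size = 32       # illustrative batch size

iterations_per_epoch = math.ceil(train_size / batch_size)
print(iterations_per_epoch)  # 313
```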
For classification, the model is saved at the iteration with the highest accuracy on the validation set. For tagging, Mean Average Precision is used instead. For more details on these metrics, see the next page.
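The checkpoint-selection rule above amounts to keeping the epoch with the best validation metric (accuracy for classification, mAP for tagging). A minimal sketch, with an invented metric history:

```python
# Keep the checkpoint whose validation metric is highest.
# The epoch-to-metric history below is illustrative only.

def best_epoch(metric_history):
    """Return the (epoch, value) pair with the highest metric value."""
    return max(metric_history.items(), key=lambda kv: kv[1])


val_accuracy = {1: 0.71, 2: 0.78, 3: 0.76, 4: 0.80, 5: 0.79}
print(best_epoch(val_accuracy))  # (4, 0.8): epoch 4 is saved
```

Note that the best epoch is not necessarily the last one: here the metric dips after epoch 4, so saving the final checkpoint would lose accuracy.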
Once the training is over, you can click on the model version to get more details about the training and evaluate the performance of your model.