Evaluating performance
To get an evaluation of a model version's performance, you need to create a validation set before training the model version.
Validation split
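The platform creates and manages this split for you when you configure a validation set; conceptually, it is a classic held-out split: part of your annotated data is set aside before training so that metrics are computed on images the model has never seen. As a minimal sketch of the idea only (plain Python with scikit-learn, not the Deepomatic API; the file names and labels are hypothetical):

```python
# Conceptual illustration only: the platform handles the split for you.
# This sketch shows the underlying idea of holding out labelled images
# *before* training, so evaluation uses data the model has never seen.
from sklearn.model_selection import train_test_split

# Hypothetical annotated dataset: image paths and their labels.
image_paths = [f"img_{i:03d}.jpg" for i in range(10)]
labels = ["defect" if i % 2 == 0 else "ok" for i in range(10)]

# Hold out 20% of the data, stratified so each class keeps the same
# proportion in both splits; fix random_state for reproducibility.
train_paths, val_paths, train_labels, val_labels = train_test_split(
    image_paths, labels, test_size=0.2, stratify=labels, random_state=42
)

print(len(train_paths), len(val_paths))  # 8 train images, 2 validation images
```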
Once you have trained a model, you will likely want to know how well it performs at the task you are trying to automate.
You can start with a qualitative evaluation here:
Qualitative evaluation
You can then access the performance report to get more details on your models:
Performance report
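The report is built on standard quantities such as precision (how many of the model's detections are correct) and recall (how many of the real objects the model finds). As a minimal sketch of how these two numbers relate (the counts below are hypothetical, and this is not how the platform computes them internally):

```python
# Conceptual illustration only, not the Deepomatic API: precision and
# recall are the standard quantities behind a detection performance
# report. The counts below are hypothetical.
true_positives = 90   # detections that match a ground-truth object
false_positives = 10  # detections with no matching ground truth
false_negatives = 30  # ground-truth objects the model missed

precision = true_positives / (true_positives + false_positives)  # 0.90
recall = true_positives / (true_positives + false_negatives)     # 0.75

print(f"precision={precision:.2f}, recall={recall:.2f}")
```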