# Evaluating performances

To evaluate a model version's performance, you need to create a validation set **before** training the model version.

{% content-ref url="evaluating-performances/validation-split" %}
[validation-split](https://docs.deepomatic.com/platform-documentation/deepomatic-drive/configuring-visual-automation-applications/evaluating-performances/validation-split)
{% endcontent-ref %}
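
If it helps to picture what a validation split is, here is a minimal, platform-agnostic sketch in Python: a fixed fraction of the labelled images is held out before training so that performance is measured on data the model has never seen. The file names and the 20% ratio are illustrative assumptions, not values prescribed by the platform.

```python
import random

# Illustrative only: labelled image identifiers, not a real Deepomatic dataset.
labelled_images = [f"image_{i:04d}.jpg" for i in range(1000)]

random.seed(42)              # fixed seed so the split is reproducible
random.shuffle(labelled_images)

# Hold out 20% for validation *before* any training happens,
# so evaluation uses images the model never saw during training.
split_index = int(len(labelled_images) * 0.8)
train_set = labelled_images[:split_index]
validation_set = labelled_images[split_index:]

print(f"training images:   {len(train_set)}")
print(f"validation images: {len(validation_set)}")
```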

Once you have trained a model version, you will likely want to know how well it performs on the task you are trying to automate.

You can start with a qualitative evaluation here:

{% content-ref url="evaluating-performances/qualitative-evaluation" %}
[qualitative-evaluation](https://docs.deepomatic.com/platform-documentation/deepomatic-drive/configuring-visual-automation-applications/evaluating-performances/qualitative-evaluation)
{% endcontent-ref %}

You can then access the performance report for more detailed, quantitative metrics on your models:

{% content-ref url="evaluating-performances/performance-report" %}
[performance-report](https://docs.deepomatic.com/platform-documentation/deepomatic-drive/configuring-visual-automation-applications/evaluating-performances/performance-report)
{% endcontent-ref %}
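
As a rough idea of the kind of figures a performance report aggregates, the sketch below computes precision and recall from hypothetical prediction counts for a single label. The counts are made up for illustration; the actual report provides its own metrics and breakdowns per model version.

```python
# Hypothetical counts for one label, purely for illustration.
true_positives = 90    # predictions that match a ground-truth annotation
false_positives = 10   # predictions with no matching ground truth
false_negatives = 15   # ground-truth objects the model missed

precision = true_positives / (true_positives + false_positives)
recall = true_positives / (true_positives + false_negatives)

print(f"precision: {precision:.2f}")  # 0.90
print(f"recall:    {recall:.2f}")     # 0.86
```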
