Before going further into the metrics used to evaluate the quality of a model, we need to introduce the notion of a validation set. By default, when you do not specify any information, all the images that you add to a project are added to the training set. See JSON Upload to add an image directly to the validation set.
To avoid any bias, we need to create two separate and independent sets of images: you don't want to evaluate the performance of a model on images that were used during the training phase. We therefore distinguish between the training and validation sets. To access the validation set, click Validation in the navigation bar.
If there isn't any bias in your set of images, you can perform a random validation split directly from the Home Page of your project. See the Project Home Page section of the Managing Projects page.
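To make the idea of a random validation split concrete, here is a minimal, platform-agnostic sketch of how such a split works. The function name, image identifiers, and the 20% validation ratio are illustrative assumptions, not part of the product's API:

```python
import random

def random_split(image_ids, validation_ratio=0.2, seed=42):
    """Randomly assign images to training and validation sets.

    Shuffling with a fixed seed makes the split reproducible.
    Returns a (training, validation) pair of disjoint lists.
    """
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)
    n_val = int(len(ids) * validation_ratio)
    return ids[n_val:], ids[:n_val]

# Illustrative usage with hypothetical image identifiers:
train, val = random_split([f"img_{i}" for i in range(100)])
print(len(train), len(val))  # 80 20
```

Because each image ends up in exactly one of the two sets, the model is never evaluated on an image it was trained on.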