Qualitative Evaluation

On the platform

Click on a model in the Models page, then open the Inference tab to reach a page where you can quickly test your model on a few images.
You can either drag & drop images or enter several image URLs. Your images will go through the neural network you have trained, and you will get the predictions for each of them.

Using Deepomatic CLI

Each model version is deployed as a web API after it has been trained. To run some inferences and evaluate the performance of your trained model version, you can use Deepomatic CLI.

Installation & Credentials

See the link below to install the Deepomatic CLI.
In order to test your trained model versions as Cloud APIs, you will need two pieces of information:
  • DEEPOMATIC_APP_ID
  • DEEPOMATIC_API_KEY
Both credentials can be retrieved either from your account page on the Deepomatic platform or from your Deepomatic Platform administrator. You then need to export them as environment variables before using the script:
Linux/macOS
export DEEPOMATIC_APP_ID=xxxxxxxxxxxx
export DEEPOMATIC_API_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Windows
set DEEPOMATIC_APP_ID=xxxxxxxxxxxx
set DEEPOMATIC_API_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
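If you script around these variables, a minimal Python sketch (an illustrative helper, not part of the Deepomatic tooling) can fail fast when either is missing; the variable names are the ones from the section above:

```python
import os


def load_credentials(env=os.environ):
    """Return (app_id, api_key) read from the environment.

    DEEPOMATIC_APP_ID and DEEPOMATIC_API_KEY are the variable names
    documented above; exit with a clear message if one is missing.
    """
    try:
        return env["DEEPOMATIC_APP_ID"], env["DEEPOMATIC_API_KEY"]
    except KeyError as missing:
        raise SystemExit(f"{missing.args[0]} must be exported before using the CLI")
```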

Retrieve your Model Version ID

To retrieve the MODEL_VERSION_ID of the model version that you want to test, go to the Library page and click on the options button of the desired model version. Then click on Credentials to get the model version ID.
The MODEL_VERSION_ID is specified with the -r or --recognition_id argument.

Sample commands

Running an inference with the Deepomatic API
export DEEPOMATIC_APP_ID=xxxxxxxxxxxx
export DEEPOMATIC_API_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
deepo infer -i img.jpg -o pred.json -r 12345

Run model actions

There are three main model actions:
  • infer: Compute predictions only.
  • draw: Display the prediction results, whether tags or bounding boxes.
  • blur: Blur the bounding boxes.
A fourth action, noop, runs the same input/output pipeline without performing inference; it appears alongside draw and blur in the options table below.
They follow the same recipe:
  1. Retrieve one or several inputs.
  2. Compute predictions using the trained neural network.
  3. Output the result in different formats: image, video, JSON, stream, etc.
Generic command
deepo infer -i myinput -o myoutput1 myoutput2 ...
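To make the command shape concrete, the following Python sketch assembles the argument list such a call would use. The helper `build_deepo_cmd` is hypothetical, purely for illustration; only the `deepo` command, the `-i`/`-o` options, and the `-r` recognition-ID option come from this document:

```python
def build_deepo_cmd(action, input_path, outputs, recognition_id=None):
    """Assemble an argv list for `deepo <action> -i <input> -o <outputs...>`.

    `action` is one of infer, draw or blur; `recognition_id` maps to -r.
    Illustrative helper only, not part of the Deepomatic CLI itself.
    """
    cmd = ["deepo", action, "-i", input_path, "-o", *outputs]
    if recognition_id is not None:
        cmd += ["-r", str(recognition_id)]
    return cmd
```

Once the CLI is installed, the resulting list can be passed to `subprocess.run`.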

Input

Input types

The Deepomatic CLI supports different types of input:
  • Image: Supported formats include bmp, jpeg, jpg, jpe, png, tif and tiff.
  • Video: Supported formats include avi, mp4, webm and mjpg.
  • Studio JSON: Deepomatic Studio JSON format, used to specify several images or videos stored locally.
Sample input JSON format
{
  "images": [
    {
      "location": "/path/to/img.jpg"
    },
    {
      "location": "/path/to/video.mp4"
    }
  ]
}
  • Directory: Analyse all images and videos found in the directory.
  • Digit: Retrieve the stream from the corresponding device. For instance, 0 for the installed webcam.
  • Network stream: Supported network streams include rtsp, http and https.
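A Studio JSON input file can also be generated programmatically. This short Python sketch mirrors the sample format above; the `write_studio_json` helper and the file name are illustrative assumptions, not part of the CLI:

```python
import json


def write_studio_json(locations, path="inputs.json"):
    """Write a Studio-style input JSON listing local images/videos.

    Mirrors the sample above: {"images": [{"location": ...}, ...]}.
    """
    payload = {"images": [{"location": loc} for loc in locations]}
    with open(path, "w") as f:
        json.dump(payload, f, indent=2)
    return payload
```

The resulting file can then be passed to the CLI with `-i inputs.json`.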

Specify input

Inputs are specified using the -i (input) option. Below is an example for each type of input.
Sample input commands
deepo infer -i /path/to/my_img.bmp ... # Image
deepo infer -i /path/to/my_vid.mp4 ... # Video
deepo infer -i /path/to/my_studio.json ... # Studio JSON
deepo infer -i /path/to/my_dir ... # Directory
deepo infer -i 0 ... # Device number
deepo infer -i rtsp://ip:port/channel ... # RTSP stream

Output

Output types

The Deepomatic CLI supports different types of output:
  • Image: Supported formats include bmp, jpeg, jpg, jpe, png, tif and tiff.
  • Video: Supported formats include avi and mp4.
  • Run JSON: Deepomatic Run JSON format for raw predictions.
  • Studio JSON: Deepomatic Studio JSON format for Studio-like prediction scores. This is specified using the -s or --studio_format option.
  • Integer wildcard JSON: A standard Run/Studio JSON, except that the name contains the frame number. For instance, -o frame%03d.json will output frame001.json, frame002.json, ...
  • String wildcard JSON: Same as the integer wildcard, except that the frame name is used instead. For instance, -o pred_%s.json will output pred_img1_123.json, pred_img2_123.json, ...
  • Standard output: On rare occasions you might want to output the model results directly to the process standard output using the stdout option. For instance, this allows you to stream directly to VLC.
  • Display output: Opens a window and displays the result. Quit with q.
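The wildcard naming convention is plain printf-style formatting. This Python sketch (an illustration only; the CLI performs this substitution internally) shows how the frame number or frame name slots into the output file name:

```python
def expand_output_name(pattern, frame_number=None, frame_name=None):
    """Expand a wildcard output pattern as described above.

    Integer wildcards (e.g. %03d) take the frame number;
    string wildcards (%s) take the frame name.
    """
    if "%s" in pattern:
        return pattern % frame_name
    return pattern % frame_number
```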

Specify output

Outputs are specified using the -o (output) option. Below is an example for each type of output.
Please note that, to avoid duplicate computations, you can specify several outputs at the same time, for instance to blur an image and store the predictions.
Sample output commands
deepo draw -i img.jpg -o img_drawn.jpg ... # Image
deepo draw -i vid.mp4 -o img_drawn_%04d.jpg ... # Wildcard images
deepo draw -i vid.mp4 -o vid_drawn.mp4 ... # Video
deepo draw -i img.jpg -o pred.json ... # Run JSON
deepo draw -i img.jpg -o pred.json -s ... # Studio JSON
deepo draw -i vid.mp4 -o pred_%s.json ... # String wildcard JSON
deepo draw -i vid.mp4 -o pred_%04d.json ... # Integer wildcard JSON
deepo draw -i vid.mp4 -o stdout ... # Standard output
deepo draw -i vid.mp4 -o window ... # Display output
deepo draw -i vid.mp4 -o vid_drawn.mp4 pred_%04d.json ... # Multiple outputs

Options

Commands have additional options that you can set with a flag. Each option has a short form (e.g. -f) and a long form (e.g. --flag); note that the short form uses a single - while the long form uses two. Some options also take an additional argument. The option table is below. When indicated, all means that all three commands infer, draw and blur are concerned.
Short  Long              Commands           Description
-i     --input           all                Input consumed.
       --input_fps       all                Input FPS used for video extraction.
       --skip_frame      all                Number of frames to skip in-between two frames.
-R     --recursive       all                Recursive directory search.
-o     --output          all                Outputs produced.
       --output_fps      all                Output FPS used for video reconstruction.
-s     --studio_format   infer draw blur    Convert from Run to Studio format.
-F     --fullscreen      draw blur noop     Fullscreen if window output.
       --from_file       draw blur          Use predictions from a precomputed JSON.
-r     --recognition_id  infer draw blur    Model version ID.
-t     --threshold       infer draw blur    Threshold for predictions.
-S     --draw_score      draw               Overlay the prediction score.
       --no_draw_scores  draw               Do not overlay the prediction score.
-L     --draw_labels     draw               Overlay the prediction label.
       --no_draw_labels  draw               Do not overlay the prediction label.
-M     --blur_method     blur               Blur method: pixel, gaussian or black.
-B     --blur_strength   blur               Blur strength.
       --verbose         all                Increase output verbosity.
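As an illustration of what the -t / --threshold option does conceptually, this Python sketch filters predictions by score. The dict structure here is a simplified assumption for the example, not the exact Run JSON schema:

```python
def filter_predictions(predictions, threshold):
    """Keep only predictions whose score reaches the threshold.

    `predictions` is assumed to be a list of dicts with a "score" key,
    a simplified stand-in for the entries of a Run JSON output.
    """
    return [p for p in predictions if p["score"] >= threshold]
```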