Platform commands enable you to interact directly with the Deepomatic web platform from your terminal. Here is the list of actions that you can do:
add-images: upload images and their metadata directly from your local machine
model: use the Deepomatic cloud API to run inferences on your trained models (infer, draw or blur based on the type of output that you require)
app: interact with visual automation applications (create or delete applications)
engage-app: interact with Field Services applications (create or delete applications)
app-version: interact with application versions (create or delete application versions)
engage-app-version: interact with Field Services application versions (create or delete Field Services application versions)
service: add services to your visual automation applications
All these actions use the Deepomatic Studio credentials (DEEPOMATIC_API_KEY). Make sure you have followed the previous section to set up your command line environment.
First, you will need to retrieve the org (organization name) and project_name from the Deepomatic platform. This is the destination project for the upload. Simply go into the project and retrieve the URL, which will contain the project name: https://studio.deepomatic.com/<org>/project-views/<project_name>/views.
To upload all images directly to the specified project, you can specify one of the following (see the example after this list):
A single image file to be uploaded.
A directory, in which case all images directly inside the directory will be uploaded.
All images in the directory and its subdirectories, using the --recursive option.
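As a sketch, and assuming the add-images action is exposed through the deepo entry point under the platform group, the three upload modes could look as follows. The organization and project flags are placeholders (anything not shown in the option table below is an assumption), so check deepo platform add-images --help for the actual argument names.

```bash
# Hypothetical flag names for the organization and destination project.
# Upload a single image file
deepo platform add-images --org <org> --project <project_name> -i image.jpg

# Upload all images directly inside a directory
deepo platform add-images --org <org> --project <project_name> -i ./images/

# Upload all images in the directory and its subdirectories
deepo platform add-images --org <org> --project <project_name> -i ./images/ --recursive
```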
Sometimes you will already have information that you'd like to upload along with the image. That could be information for pre-tagging the image, pre-existing bounding boxes, or metadata such as the image provenance. If you have a large quantity of images stored locally, it is also better to use the txt format.
In order to pass it along with the image at upload time, you will need to use the text dataset format; more information about the format can be found on the Deepomatic CLI import text file page (see also the sketch after this list). Be careful to:
use a .txt extension
use the right key in the data field for each image: as the image is stored locally, use the file key.
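Purely as an illustration of the two points above, such a file could look like the sketch below; the per-line layout is an assumption, and the authoritative schema (including the fields for pre-tags and bounding boxes) is the import text file documentation mentioned earlier.

```bash
# Illustrative only: the per-line JSON layout is an assumption; refer to the
# import text file documentation for the exact schema.
cat > images.txt <<'EOF'
{"data": {"file": "images/img_001.jpg"}}
{"data": {"file": "images/img_002.jpg"}}
EOF
```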
Each model version is deployed as a web API after it has been trained on the platform. To run some inferences and evaluate the performance of your trained model version, you can use the Deepomatic CLI.
There are three different model actions that you can use:
infer: Compute predictions only.
draw: Display the prediction result, whether tags or bounding boxes.
blur: Blur the inside of the bounding boxes.
They follow the same recipe:
Retrieve one or several inputs.
Compute predictions using the trained neural network.
Output the result in different formats: image, video, JSON, stream, etc.
The Deepomatic CLI supports different types of input:
Image: Supported formats include bmp, jpeg, jpg, jpe, png, tif and tiff.
Video: Supported formats include avi, mp4, webm and mjpg.
Studio JSON: Deepomatic Studio JSON format, used to specify several images or videos stored locally.
Directory: Analyse all images and videos found in the directory.
Digit: Retrieve the stream from the corresponding device. For instance, 0 for the installed webcam.
Network stream: Supported network streams include rtsp, http and https.
Inputs are specified using the -i (for input) option. Below is an example with each type of input.
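As a sketch, assuming the model actions are invoked as deepo platform model infer/draw/blur and using the -i, -o and -r options from the table further down, one invocation per input type could look like this (paths, device numbers and the recognition ID 12345 are placeholders):

```bash
# Image file
deepo platform model infer -i image.jpg -o predictions.json -r 12345

# Video file
deepo platform model infer -i video.mp4 -o predictions.json -r 12345

# Studio JSON listing several local images or videos
deepo platform model infer -i dataset.json -o predictions.json -r 12345

# Directory: analyse all images and videos found inside
deepo platform model infer -i ./media/ -o predictions.json -r 12345

# Digit: device stream, e.g. 0 for the installed webcam
deepo platform model infer -i 0 -o predictions.json -r 12345

# Network stream
deepo platform model infer -i rtsp://user:password@camera-ip/stream -o predictions.json -r 12345
```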
The Deepomatic CLI supports different types of output:
Image: Supported formats include bmp, jpeg, jpg, jpe, png, tif and tiff.
Video: Supported formats include avi and mp4.
Run JSON: Deepomatic Run JSON format for raw predictions.
Studio JSON: Deepomatic Studio JSON format for Studio-like prediction scores. This is specified using the -s or --studio_format option.
Integer wildcard JSON: A standard Run/Studio JSON, except that the name contains the frame number. For instance -o frame%03d.json will output frame001.json, frame002.json, ...
String wildcard JSON: Same as the integer wildcard, except this time the frame name is used. For instance -o pred_%s.json will output pred_img1_123.json, pred_img2_123.json, ...
Standard output: On rare occasions you might want to output the model results directly to the process standard output using the stdout option. For instance this allows you to stream directly to vlc.
Display output: Opens a window and displays the result. Quit with q.
Outputs are specified using the -o (for output) option. Below is an example with each type of output.
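The sketch below shows one invocation per output type, under the same assumptions as the input examples above; the window keyword for the display output is also an assumption, so check the CLI help if it does not match your version.

```bash
# Image output (draw the predictions on the frame)
deepo platform model draw -i image.jpg -o drawn.jpg -r 12345

# Video output
deepo platform model draw -i video.mp4 -o drawn.mp4 -r 12345

# Run JSON, and Studio JSON via -s / --studio_format
deepo platform model infer -i image.jpg -o predictions.json -r 12345
deepo platform model infer -i image.jpg -o predictions.json -s -r 12345

# Integer wildcard JSON: one file per frame (frame001.json, frame002.json, ...)
deepo platform model infer -i video.mp4 -o frame%03d.json -r 12345

# Standard output, e.g. to stream a blurred video directly to vlc
deepo platform model blur -i video.mp4 -o stdout -r 12345 | vlc -

# Display output: open a window with the result (quit with q)
deepo platform model draw -i 0 -o window -r 12345
```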
Please note that in order to avoid duplicate computations, it is possible to specify several outputs at the same time, for instance to blur an image and store the predictions.
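For instance, the following sketch blurs an image and stores the raw predictions in a single run; whether several outputs are listed after one -o or passed with repeated flags may depend on the CLI version, so verify with the command help.

```bash
# Assumed syntax: several outputs listed after a single -o flag.
deepo platform model blur -i image.jpg -o blurred.jpg predictions.json -r 12345
```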
Commands have additional options that you can use with a flag. There is a short flag -f and a long flag --flag. Note that one uses a single dash - while the other uses two --. Also, some options require an additional argument. Find the option table below. When indicated, all means that all three commands infer, draw and blur are concerned.
| Short | Long | Commands | Description |
| --- | --- | --- | --- |
| -i | --input | all | Input consumed. |
| | --input_fps | all | Input FPS used for video extraction. |
| | --skip_frame | all | Number of frames to skip in-between two frames. |
| -R | --recursive | all | Recursive directory search. |
| -o | --output | all | Outputs produced. |
| | --output_fps | all | Output FPS used for video reconstruction. |
| -s | --studio_format | infer, draw, blur | Convert from Run to Studio format. |
| -F | --fullscreen | draw, blur, noop | Fullscreen if window output. |
| | --from_file | draw, blur | Use predictions from a precomputed JSON. |
| -r | --recognition_id | infer, draw, blur | Model version ID. |
| -t | --threshold | infer, draw, blur | Threshold for predictions. |
| -S | --draw_scores | draw | Overlay the prediction score. |
| | --no_draw_scores | draw | Do not overlay the prediction score. |
| -L | --draw_labels | draw | Overlay the prediction label. |
| | --no_draw_labels | draw | Do not overlay the prediction label. |
| -M | --blur_method | blur | Blur method: pixel, gaussian or black. |
| -B | --blur_strength | blur | Blur strength. |
| | --verbose | all | Increase output verbosity. |
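As an illustration of combining short and long flags from the table (with the same assumed command prefix as in the earlier examples):

```bash
# Draw predictions above a 0.5 threshold, with labels but without scores,
# extracting 5 frames per second from the input video.
deepo platform model draw -i video.mp4 -o drawn.mp4 -r 12345 \
    -t 0.5 --draw_labels --no_draw_scores --input_fps 5
```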
To create an application using the Deepomatic CLI, you need to provide a name and an app spec like the following.
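As a minimal sketch, assuming the app action exposes a create subcommand and that the name and the spec file are passed as arguments (the flag names below are placeholders, so check deepo platform app create --help):

```bash
# Hypothetical flag names for illustration only.
deepo platform app create --name my-app --app-specs app_specs.json
```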
An engage application corresponds to an application for Field Services use cases. On top of a traditional application, we will deploy an API that interacts with your application.
To create an engage application using the Deepomatic CLI, you only need to provide a name.
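A minimal sketch, again with a placeholder flag name for the application name:

```bash
# Hypothetical flag name; only a name is required for an engage application.
deepo platform engage-app create --name my-field-services-app
```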
To create an application version using the Deepomatic CLI, you need to specify the application for which you want to create a version, a name and the list of model version ids that should be used within your application version.
The update command updates the app_version of a site in the API.
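A sketch of application version creation with placeholder flag names (check the command help for the real arguments):

```bash
# Hypothetical flag names for illustration.
deepo platform app-version create --app-id <app_id> --name v1 \
    --recognition-version-ids 123 456
```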
To create an engage application version using the Deepomatic CLI, you need to specify the application for which you want to create a version, a workflow.yaml file and optionally a custom_nodes.py file. You also need to specify the list of model version ids (also called recognition ids).
The model version ids must be listed in the same order as the models are listed in the workflow file.
The update command updates the app_version of a site in the API.
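A sketch with placeholder flag names; the important point, as noted above, is that the recognition ids mirror the order of the models in workflow.yaml:

```bash
# Hypothetical flag names for illustration; ids must follow the workflow order.
deepo platform engage-app-version create --app-id <engage_app_id> \
    --workflow workflow.yaml --custom-nodes custom_nodes.py \
    --recognition-version-ids 123 456
```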
To add services to your application after you have created it, you need to use the service commands.
Here is the list of the services you can add to your application:
workflow-server: this is one of the key components of the Deepomatic software infrastructure. It is in charge of orchestrating all workflow operations.
worker-nn: this is also one of the key components of the Deepomatic software infrastructure. It is in charge of handling all neural network inferences.
customer-api: this is an optional component that you need to add when you want to create a web API on top of your workflow.
camera-server: this is an optional component that you need to add when you want to connect cameras to your workflow.
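A sketch of adding one of these services to an existing application, with placeholder flag names (check deepo platform service --help for the real arguments):

```bash
# Hypothetical flag names for illustration.
deepo platform service create --app-id <app_id> --name worker-nn
```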