Platform commands
Platform commands enable you to interact directly with the Deepomatic web platform from your terminal. Here is the list of actions you can perform:
add-images: upload images and their metadata directly from your local machine
model: use the Deepomatic cloud API to run inferences on your trained models (infer, draw or blur, depending on the type of output you need)
app: interact with visual automation applications (create or delete applications)
engage-app: interact with field services applications (create or delete applications)
app-version: interact with application versions (create or delete application versions)
engage-app-version: interact with field services application versions (create or delete field services application versions)
service: add services to your visual automation applications
All these actions use the Deepomatic Studio credential DEEPOMATIC_API_KEY. Make sure you have followed the previous section to set up your command-line environment.
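For example, on a Unix-like shell you can export the key before running any platform command (the key value below is a placeholder):
# Set the Deepomatic Studio API key for the current shell session (placeholder value)
export DEEPOMATIC_API_KEY=abcdef0123456789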
Add images
Organization and project
First, you will need to retrieve the org (organization name) and project_name from the Deepomatic platform. This is the destination project for the upload. Simply go into the project and look at the URL, which contains both values: https://studio.deepomatic.com/<org>/project-views/<project_name>/views.
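For example, with a hypothetical project whose view URL is https://studio.deepomatic.com/acme/project-views/my-project/views, the organization is acme and the project name is my-project:
# Hypothetical values taken from the URL above
deepo platform add-images -o acme -p my-project -i myimage.jpg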
Raw local images
To upload images directly to the specified project, you can specify one of the following as the input:
A single image file to be uploaded.
A directory, in which case all images directly inside the directory will be uploaded.
A directory with the --recursive option, in which case all images in the directory and its subdirectories will be uploaded.
# Upload a single image
deepo platform add-images -o org -p myproject -i myimage.jpg
# Upload all images in directory
deepo platform add-images -o org -p myproject -i mydir
# Upload all images in directory and subdirectories
deepo platform add-images -o org -p myproject -i mydir --recursive
Images and metadata
Sometimes you will already have information that you'd like to upload along with the images. That could be pre-tagging information, pre-existing bounding boxes, or metadata such as image provenance.
If you have a large quantity of images stored locally, it is also better to use the txt format.
In order to pass this information along with the images at upload time, you need to use the text dataset format; more information about the format can be found in the Deepomatic CLI import text file documentation. Be careful to:
use a .txt extension
use the right key in the data field for each image. As the image is stored locally, use the file key.
# Upload images stored locally using a txt file
deepo platform add-images -o org -p myproject -i path/to_upload.txt --txt
Model commands
Each model version is deployed as a web API after it has been trained on the platform. To run inferences and evaluate the performance of your trained model version, you can use the Deepomatic CLI.
Sample commands
deepo platform model infer -i img.jpg -o pred.json -r 12345
Run model actions
There are three different model actions that you can use:
infer: Compute predictions only.
draw: Display the prediction result, whether tags or bounding boxes.
blur: Blur the inside of the bounding boxes.
They follow the same recipe:
Retrieve one or several inputs.
Compute predictions using the trained neural network.
Output the result in different formats: image, video, JSON, stream, etc.
deepo platform model infer -i myinput -o myoutput1 myoutput2 ...
Input
Input types
The Deepomatic CLI supports different types of input:
Image: Supported formats include bmp, jpeg, jpg, jpe, png, tif and tiff.
Video: Supported formats include avi, mp4, webm and mjpg.
Studio JSON: Deepomatic Studio JSON format, used to specify several images or videos stored locally.
{
    "images": [
        {
            "location": "/path/to/img.jpg"
        },
        {
            "location": "/path/to/video.mp4"
        }
    ]
}
Directory: Analyse all images and videos found in the directory.
Digit: Retrieve the stream from the corresponding device. For instance, 0 for the installed webcam.
Network stream: Supported network streams include rtsp, http and https.
Specify input
Inputs are specified using the -i (for input) option. Below is an example for each type of input.
deepo platform model infer -i /path/to/my_img.bmp ... # Image
deepo platform model infer -i /path/to/my_vid.mp4 ... # Video
deepo platform model infer -i /path/to/my_studio.json ... # Studio JSON
deepo platform model infer -i /path/to/my_dir ... # Directory
deepo platform model infer -i 0 ... # Device number
deepo platform model infer -i rtsp://ip:port/channel ... # RTSP stream
Output
Output types
The Deepomatic CLI supports different types of output:
Image: Supported formats include bmp, jpeg, jpg, jpe, png, tif and tiff.
Video: Supported formats include avi and mp4.
Run JSON: Deepomatic Run JSON format for raw predictions.
Studio JSON: Deepomatic Studio JSON format for Studio-like prediction scores. This is specified using the -s or --studio_format option.
Integer wildcard JSON: A standard Run/Studio JSON, except that the name contains the frame number. For instance -o frame%03d.json will output frame001.json, frame002.json, ...
String wildcard JSON: Same as the integer wildcard, except this time the frame name is used. For instance -o pred_%s.json will output pred_img1_123.json, pred_img2_123.json, ...
Standard output: On rare occasions you might want to output the model results directly to the process standard output using the stdout option. For instance, this allows you to stream directly to VLC (see the example after this list).
Display output: Opens a window and displays the result. Quit with q.
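As a hedged sketch of the VLC use case mentioned above, the drawn stream can be piped from standard output to VLC; this assumes a VLC build that accepts a stream on standard input and may require extra VLC options depending on the stream format:
# Pipe the drawn webcam stream to VLC (recognition ID is a placeholder)
deepo platform model draw -i 0 -o stdout -r 12345 | vlc -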
Specify output
Outputs are specified using the -o (for output) option. Below is an example for each type of output.
Please note that, in order to avoid duplicate computations, you can specify several outputs at the same time, for instance to blur an image and store the predictions.
deepo platform model draw -i img.jpg -o img_drawn.jpg ... # Image
deepo platform model draw -i vid.mp4 -o img_drawn_%04d.jpg ... # Wildcard images
deepo platform model draw -i vid.mp4 -o vid_drawn.mp4 ... # Video
deepo platform model draw -i img.jpg -o pred.json ... # Run JSON
deepo platform model draw -i img.jpg -o pred.json -s ... # Studio JSON
deepo platform model draw -i vid.mp4 -o pred_%s.json ... # String wildcard JSON
deepo platform model draw -i vid.mp4 -o pred_%04d.json ... # Integer wildcard JSON
deepo platform model draw -i vid.mp4 -o stdout ... # Standard output
deepo platform model draw -i vid.mp4 -o window ... # Display output
deepo platform model draw -i vid.mp4 -o vid_drawn.mp4 pred_%04d.json ... # Multiple outputs
Options
Commands have additional options that you can set with flags. Each option has a short flag -f and a long flag --flag; note that the short form uses a single dash - while the long form uses two --. Some options also need an additional argument. The option table is below. When indicated, all means that all three commands infer, draw and blur are concerned.
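For example, the two commands below are equivalent; the first uses short flags and the second the corresponding long flags (the recognition ID and threshold values are placeholders):
# Short flags
deepo platform model infer -i img.jpg -o pred.json -r 12345 -t 0.5
# Equivalent long flags
deepo platform model infer --input img.jpg --output pred.json --recognition_id 12345 --threshold 0.5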
Short  Long            Commands           Description
i      input           all                Input consumed.
       input_fps       all                Input FPS used for video extraction.
       skip_frame      all                Number of frames to skip in-between two frames.
R      recursive       all                Recursive directory search.
o      output          all                Outputs produced.
       output_fps      all                Output FPS used for video reconstruction.
s      studio_format   infer, draw, blur  Convert from Run to Studio format.
F      fullscreen      draw, blur, noop   Fullscreen if window output.
       from_file       draw, blur         Use prediction from precomputed JSON.
r      recognition_id  infer, draw, blur  Model version ID.
t      threshold       infer, draw, blur  Threshold for predictions.
S      draw_score      draw               Overlay prediction score.
       no_draw_scores  draw               Do not overlay prediction score.
L      draw_labels     draw               Overlay the prediction label.
       no_draw_labels  draw               Do not overlay the prediction label.
M      blur_method     blur               Blur method: pixel, gaussian or black.
B      blur_strength   blur               Blur strength.
       verbose         all                Increase output verbosity.
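As a hedged illustration combining several options from the table (the recognition ID, FPS and blur strength values are placeholders):
# Extract the video at 5 FPS, then blur detections using the gaussian method
deepo platform model blur -i vid.mp4 -o vid_blurred.mp4 -r 12345 --input_fps 5 -M gaussian -B 10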
App commands
To create an application using the Deepomatic CLI, you need to provide a name and an app spec like the following:
[
    {
        "recognition_spec_id": 123,
        "queue_name": "spec_123.forward"
    }
]
deepo platform app create -n my-first-app -s my-app-specs
deepo platform app delete -i app_id
Engage App commands
An engage application corresponds to an application for Field Services use cases. On top of a traditional application, we will deploy an API that interacts with your application.
To create an engage application using the Deepomatic CLI, you only need to provide a name.
deepo platform engage-app create -n my-first-workflow-app
deepo platform engage-app delete -i app_id
Application version commands
To create an application version using the Deepomatic CLI, you need to specify the application for which you want to create a version, a name and the list of model version ids that should be used within your application version.
deepo platform app-version create -a app_id -n v1 -r 63555 63556
deepo platform app-version delete -i app_version_id
The update command updates the app_version of a site in the API.
deepo site update -i site_id -v app-version-id
Engage Application version commands
To create an engage application version using the Deepomatic CLI, you need to specify the application for which you want to create a version, a workflow.yaml file and optionally a custom_nodes.py file. You also need to specify the list of model version ids (also called recognition ids).
deepo platform engage-app-version create -a app_id -w workflow.yaml [-c custom_nodes.py] -r 63555 63556
deepo platform engage-app-version delete -i engage_app_version_id
The update command updates the app_version of a site in the API.
deepo site update -i site_id -v app-version-id
Service commands
To add services to your application after you have created it, you need to use the service commands.
deepo platform service create -a app_id -n service_name
Here is the list of the services you can add to your application:
workflow-server: this is one of the key components of the Deepomatic software infrastructure. It is in charge of orchestrating all workflow operations.
worker-nn: this is also one of the key components of the Deepomatic software infrastructure. It is in charge of handling all neural network inferences.
customer-api: this is an optional component that you need to add when you want to create a web API on top of your workflow.
camera-server: this is an optional component that you need to add when you want to connect cameras to your workflow.
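For instance, here is a hedged sketch of adding the two core services plus a web API to a hypothetical application; it assumes the -n flag takes one of the service names listed above, and app_id is a placeholder:
# Core services of the Deepomatic software infrastructure (placeholder app_id)
deepo platform service create -a app_id -n workflow-server
deepo platform service create -a app_id -n worker-nn
# Optional: expose a web API on top of your workflow
deepo platform service create -a app_id -n customer-api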