Deepomatic Platform

Assembling workflows

A workflow carries the business logic that enables several analysis steps to be assembled together. These steps are first and foremost deep learning models for advanced image recognition tasks, but they are also logical steps for creating the information that best meets the business need.
A step corresponds to an elementary brick of a workflow, and is defined by a type, inputs and outputs.
Entries and Outcomes are specific steps within a workflow. Entries let you define the global inputs of your workflow, which you can then use as inputs for the steps you define. Outcomes are optional and let you define specific outputs of your global workflow.

Why do you need Deepomatic workflows?

In most cases, a suitable solution to a problem cannot be achieved with a single neural network. A single network rarely yields the best performance, and it is generally good practice to break the overall problem down into smaller steps. Deepomatic workflows give you this capacity.
You can create complex solutions without having to worry about deployment or runtime.

How to build your workflow?

A workflow corresponds to a directed graph (without cycles). It is defined with a YAML file where you need to list all the data processing steps required in your solution.
The workflow.yaml file defines:
  • The entries and outcomes of the workflow
  • The structure of the steps
  • The configuration of each step
The order in which you write the steps doesn't really matter. The workflow server takes care of reconstructing a graph from the inputs of each step.
Naming Rule: each entry, outcome and step must have a unique name. Names are case sensitive. Underscores and spaces are allowed.
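Putting the metadata, entries, steps and outcomes together, a minimal workflow file might look like the sketch below. Note that the top-level key names and the model id used here are illustrative assumptions, not a definitive schema; the detailed syntax of each block is described in the sections that follow.

```yaml
# Illustrative end-to-end sketch of a workflow.yaml file.
# Top-level keys and the model id are assumptions for illustration.
version: "1.2"
workflow_name: My first workflow

entries:
  - name: image input
    data_type: Image

steps:
  - name: my first step
    type: Inference
    inputs:
      - image input
    args:
      model_id: 12345

outcomes:
  - name: hello world
    output_of: my first step
    concept:
      name: speech to the world
      type: Text
```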

Data structure

During the execution of a workflow, the data resulting from the different analysis steps are stored and accessible at the end of the execution, to help the data scientist adjust their workflow and achieve the desired behavior.
It is useful to understand this data structure in order to better understand the construction of the different steps constituting the workflow.
  • Two main objects store all the data: one corresponding to the outcomes, and the other, called FlowContainer, which stores all the data related to the execution.
  • The FlowContainer stores a list of (entry, regions) couples, where regions is a list of bounding boxes (potentially the default one) associated with concepts.
  • Concepts have a type (Boolean, Text or Number) and a value.
During the execution of your workflow, the objective is to add concepts to the existing regions according to the results of analyses or logical rules, or to create new regions (for instance, in the specific case of a detection neural network). In the end, you get the outcomes and the FlowContainer. The outcomes allow you to update the checkpoints when you have them, while the FlowContainer helps you develop your workflow by giving you access to much more granular, low-level data.
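The data structure described above can be sketched in Python as follows. The class and field names (FlowContainer, Region, Concept) are assumptions based on this page, not the actual Deepomatic SDK; the sketch only illustrates how entries, regions and concepts relate to each other.

```python
# Hypothetical sketch of the workflow execution data model described above.
# Class and field names are assumptions for illustration, not the real SDK.
from dataclasses import dataclass, field
from typing import List, Tuple, Union

@dataclass
class Concept:
    name: str
    type: str                                # one of "Boolean", "Text", "Number"
    value: Union[bool, str, float]

@dataclass
class Region:
    # bounding box as (xmin, ymin, xmax, ymax); the default region covers the whole image
    bbox: Tuple[float, float, float, float]
    concepts: List[Concept] = field(default_factory=list)

@dataclass
class FlowContainer:
    # one (entry, regions) couple per workflow entry
    data: List[Tuple[str, List[Region]]] = field(default_factory=list)

# A classification step adds a concept to an existing region;
# a detection step would instead append new regions.
container = FlowContainer()
default_region = Region(bbox=(0.0, 0.0, 1.0, 1.0))
default_region.concepts.append(Concept("context", "Text", "maintenance"))
container.data.append(("image input", [default_region]))
```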

Workflow metadata and structure

Every workflow file starts with mandatory metadata: the workflow configuration version and the name of the workflow.
Workflow structure
version: "1.2"
workflow_name: My first workflow
### List of your entries
### List of your outcomes
### List of your steps


The entries defined can be later used as an input for other steps. An entry is composed of two mandatory fields: a name and a data type.
Workflow entries
- name: image input
  data_type: Image
- name: context
  data_type: Text
The data type must be one of the following three: Image, Text or Number.


Outcomes are optional and are especially useful when you want to build an augmented technician API, as they allow you to create the checkpoints that the technician must complete throughout their operation. They are composed of the following fields: a name, an output_of step name, a concept, and optionally a regions field.
Workflow outcomes
- name: hello world
  output_of: hello world step
  concept:
    name: speech to the world
    type: Text
The outcomes must correspond to the list of checkpoints that you want to enforce for the technician in the field; this is also the information that you will be able to display in the technician application.
In addition to this information, it is natively possible to add a visualisation of any detected objects (bounding boxes), to provide more context to the technician in the field.
Workflow outcomes with regions
- name: hello world
  output_of: hello world step
  concept:
    name: speech to the world
    type: Text
  regions:
    - small_object_detector
    - big_object_detector
In the above example, all bounding boxes from the small_object_detector and big_object_detector steps are passed in the outcome (see below for the inference steps). The good practice is to list here all inference steps corresponding to an object detection task that are useful to explain the final prediction of the checkpoint.


Steps allow you to build your analysis graph. There are several steps available by default, but you can also write custom steps in Python if the operation you need to perform is not implemented.
The business logic is detailed via those steps which are listed one after the other (the order does not matter). A step is composed of the following fields:
  • name
  • type: see below for the Deepomatic step library or for implementing custom steps
  • inputs: inputs are the names of the steps from which the output is retrieved
  • args: they depend on the type of the step
Workflow steps
- name: my first step
  type: Inference
  inputs:
    - image_input
  args:
    model_id: 12345
    concepts:
      - persons
Here is the list of all the steps that are available by default in the Deepomatic library, together with the syntax to use them.

Custom steps

It is also possible to write custom steps in Python to implement the steps that are missing to build your specific workflow. To do so, you need to write the code of those custom steps in a separate Python file.
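As a sketch of what such a file could contain, the example below implements a small filtering step as a plain Python class. The actual Deepomatic custom-step base class, hook names and region format may differ; everything here (class name, region dictionaries, the score field) is an assumption for illustration.

```python
# Illustrative custom step written as a plain Python callable.
# The real Deepomatic custom-step interface may differ; names are assumptions.
class ThresholdFilter:
    """Keeps only the regions whose detection score exceeds a threshold."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold

    def __call__(self, regions: list) -> list:
        # each region is assumed to be a dict carrying a "score" key
        return [r for r in regions if r.get("score", 0.0) >= self.threshold]

# Usage: drop low-confidence detections before they reach an outcome
step = ThresholdFilter(threshold=0.7)
filtered = step([{"label": "person", "score": 0.9},
                 {"label": "person", "score": 0.3}])
```

A step written this way stays easy to unit-test in isolation, since it is just a callable operating on plain data.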