In most cases, a suitable solution to a problem cannot be achieved with a single neural network, nor is a single network generally the way to reach the best performance: it is good practice to break the overall problem down into smaller steps. Deepomatic workflows give you this capacity.
You can create complex solutions without having to worry about deployment or runtime.
A workflow corresponds to a directed acyclic graph. It is defined in a YAML file where you list all the data processing steps required in your solution.
The workflow.yaml file defines:
The entries and outcomes of the workflow
The structure of the steps
The configuration of each step
The order in which you write the steps does not matter: the workflow server reconstructs the graph from the inputs declared by each step.
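This reconstruction can be pictured as a topological sort over the inputs each step declares. Below is a minimal sketch of the idea using the Python standard library; the step names are hypothetical and this is not the actual workflow server implementation:

```python
from graphlib import TopologicalSorter

# Hypothetical steps listed in arbitrary order; each declares its inputs.
steps = [
    {"name": "count_persons", "inputs": ["detect_persons"]},
    {"name": "detect_persons", "inputs": ["image_input"]},
    {"name": "image_input", "inputs": []},
]

# Build the dependency graph: step name -> set of steps it depends on.
graph = {s["name"]: set(s["inputs"]) for s in steps}

# A valid execution order is recovered regardless of how the steps were written.
order = list(TopologicalSorter(graph).static_order())
print(order)  # ['image_input', 'detect_persons', 'count_persons']
```

Cycles would raise an error here, which matches the requirement that a workflow be a graph without cycles.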
During the execution of a workflow, the data resulting from the different analysis steps is stored and remains accessible at the end of the execution, helping the data scientist adjust the workflow and achieve the desired behavior.
Understanding this data structure makes it easier to understand how the different steps constituting the workflow are built.
Two main objects store all the data: one corresponding to the outcomes, and another called FlowContainer which stores all the data related to the execution.
The FlowContainer stores a list of couples (entry, regions), with regions corresponding to a list of bounding boxes (potentially the default one), each associated with concepts.
Concepts have a type (Boolean, Text, Number) and a value.
During the execution of your workflow, the objective is to add concepts to the existing regions according to the results of analyses or logical rules, or to create new regions (in the specific case of a detection neural network, for instance). At the end, you get the outcomes and the FlowContainer. The outcomes allow you to update the checkpoints when you have them, while the FlowContainer helps you develop your workflow by giving you access to much more granular, low-level data.
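The data model described above can be sketched as plain Python data classes. Note that the class and field names below are illustrative assumptions, not the actual Deepomatic API:

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Concept:
    # A concept has a type (Boolean, Text, Number) and a value.
    name: str
    type: str  # one of "Boolean", "Text", "Number"
    value: Union[bool, str, float]

@dataclass
class BoundingBox:
    # Normalized coordinates; the "default" region covers the whole image.
    xmin: float = 0.0
    ymin: float = 0.0
    xmax: float = 1.0
    ymax: float = 1.0

@dataclass
class Region:
    box: BoundingBox
    concepts: List[Concept] = field(default_factory=list)

@dataclass
class FlowContainer:
    # Couples of (entry, regions) accumulated during execution.
    couples: List[tuple] = field(default_factory=list)

# A detection step creates a new region; a later step adds a concept to it.
region = Region(box=BoundingBox(0.1, 0.2, 0.5, 0.6))
region.concepts.append(Concept("is_person", "Boolean", True))
container = FlowContainer(couples=[("image input", [region])])
print(len(container.couples[0][1]))  # 1
```

This mirrors the flow described in the text: steps either enrich existing regions with concepts or create new regions.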
Every workflow file starts with mandatory metadata: the workflow configuration version and the name of the workflow.
Workflow structure:

```yaml
version: "1.2"
workflow_name: My first workflow
workflow:
  entries:
    ### List of your entries
  outcomes:
    ### List of your outcomes
  steps:
    ### List of your steps
```
The entries defined can later be used as an input for other steps. An entry is composed of two mandatory fields: a name and a data type.
Workflow entries:

```yaml
entries:
  - name: image input
    data_type: Image
  - name: context
    data_type: Text
```
The data type must be one of the following three: Image, Text or Number.
Outcomes are optional and are especially useful when you want to build an augmented technician API, as they allow you to create the checkpoints that the technician must complete throughout their operation. They are composed of the following fields: a name, an output_of step name, a concept, and optionally a regions field.
Workflow outcomes:

```yaml
outcomes:
  - name: hello world
    output_of: hello world step
    concept:
      name: speech to the world
      type: Text
```
The outcomes must correspond to the list of checkpoints that you want to enforce for the technician in the field; this is also the information that you will be able to display in the technician application.
In addition to this information, you can natively add the visualisation of any detected objects (bounding boxes) to provide more information to the technician in the field.
Workflow outcomes with regions:

```yaml
outcomes:
  - name: hello world
    output_of: hello world step
    concept:
      name: speech to the world
      type: Text
    regions:
      - small_object_detector
      - big_object_detector
```
In the above example, all bounding boxes from the small_object_detector and big_object_detector steps are passed in the outcome (see below for the inference steps). Good practice is to list here all inference steps corresponding to an object detection task that are useful to explain the final prediction of the checkpoint.
Steps allow you to build your analysis graph. Several steps are available by default, but you can also write custom steps in Python if the operation you need to perform is not implemented.
The business logic is detailed via those steps which are listed one after the other (the order does not matter). A step is composed of the following fields:
type: see below for the Deepomatic step library or for implementing custom steps
inputs: the names of the steps whose outputs this step consumes
args: they depend on the type of the step
Workflow steps:

```yaml
steps:
  - name: my first step
    type: Inference
    inputs:
      - image_input
    args:
      model_id: 12345
      concepts:
        - persons
```
Here is the list of all the steps available by default in the Deepomatic library, together with the syntax used to invoke them.
It is also possible to write custom steps in Python to implement the steps that are missing to build your specific workflow. To do so, you need to write the code of those custom steps in a separate Python file.
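As an illustration of what such a file might contain, here is a minimal sketch of a custom step. The class shape, method name, and returned structure are assumptions for illustration only; the actual Deepomatic custom-step interface may differ:

```python
# Hypothetical custom step: turns a Text input into a greeting concept.
# The base class and signatures of the real Deepomatic SDK are not shown
# here; only the overall shape of a custom step is illustrated.
class GreetingStep:
    """Produces a Text concept from a Text input."""

    def __init__(self, prefix: str = "Hello"):
        # args would normally come from the step's `args` block in the YAML.
        self.prefix = prefix

    def run(self, text: str) -> dict:
        # Return a concept-like dict (name, type, value), matching the
        # data model used by workflow outcomes.
        return {
            "name": "speech to the world",
            "type": "Text",
            "value": f"{self.prefix}, {text}!",
        }

step = GreetingStep()
result = step.run("world")
print(result["value"])  # Hello, world!
```

The returned dict deliberately mirrors the concept fields (type and value) seen in the outcomes examples above, so the custom step's output can flow into an outcome.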