Assembling workflows

KEY TERMS

A workflow carries the business logic that assembles several analysis steps together. These steps are first and foremost deep learning models for advanced image recognition tasks, but they can also be logical steps that shape the information to best meet the business need.

A step is an elementary building block of a workflow, defined by a type, inputs and outputs.

Entries and Outcomes are specific steps within a workflow. Entries let you define the global inputs of your workflow, which you can then use as inputs for the steps you define. Outcomes are optional and let you define specific outputs of your global workflow.

Why do you need Deepomatic workflows?

In most cases, a suitable solution to a problem cannot be achieved with a single neural network. A single network rarely yields the best performance, and it is generally good practice to break the overall problem down into smaller steps. Deepomatic workflows give you this capability.

You can create complex solutions without having to worry about deployment or runtime.

How to build your workflow?

A workflow is a directed acyclic graph. It is defined in a YAML file where you list all the data processing steps required in your solution.

The workflow.yaml file defines:

  • The entries and outcomes of the workflow

  • The structure of the steps

  • The configuration of each step

The order in which you write the steps doesn't really matter. The workflow server takes care of reconstructing a graph from the inputs of each step.
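For intuition, this graph reconstruction can be thought of as a topological sort over step dependencies. The following Python sketch is purely illustrative; it is not the Deepomatic implementation, and the `execution_order` helper is a name of our own choosing:

```python
# Illustrative sketch only: order steps so that every step runs after the
# steps (or entries) it depends on, as the workflow server does at runtime.
def execution_order(steps, entries):
    # steps: {step_name: [input names]}; entries are always available.
    resolved = set(entries)
    order = []
    remaining = dict(steps)
    while remaining:
        ready = [name for name, inputs in remaining.items()
                 if all(i in resolved for i in inputs)]
        if not ready:
            raise ValueError("cycle or unknown input detected")
        for name in sorted(ready):  # sorted for deterministic output
            order.append(name)
            resolved.add(name)
            del remaining[name]
    return order

steps = {
    "fruits_classification": ["items_router"],
    "items_router": ["detector"],
    "detector": ["image input"],
}
print(execution_order(steps, ["image input"]))
# -> ['detector', 'items_router', 'fruits_classification']
```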

Naming Rule: each entry, outcome and step must have a unique name. Names are case sensitive. Underscores and spaces are allowed.
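The naming rule lends itself to a mechanical check. Here is an illustrative Python sketch; the `check_unique_names` helper is hypothetical, not part of the platform:

```python
# Illustrative sketch: verify that entries, outcomes and steps all have
# distinct names, as the naming rule requires.
def check_unique_names(entries, outcomes, steps):
    # Names are case sensitive, so "Step A" and "step a" do not collide.
    names = [e["name"] for e in entries] \
          + [o["name"] for o in outcomes] \
          + [s["name"] for s in steps]
    duplicates = {n for n in names if names.count(n) > 1}
    if duplicates:
        raise ValueError(f"duplicate names: {sorted(duplicates)}")

check_unique_names(
    entries=[{"name": "image input"}],
    outcomes=[{"name": "hello world"}],
    steps=[{"name": "my first step"}],
)  # passes: all names are distinct
```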

Workflow metadata and structure

Every workflow file starts with mandatory metadata: the workflow configuration version and the name of the workflow.

Workflow structure
version: "1.2"
workflow_name: my_first_workflow
workflow:
  entries:
    # List of your entries
  outcomes:
    # List of your outcomes
  steps:
    # List of your steps

Entries

The entries you define can later be used as inputs for other steps. An entry is composed of two mandatory fields: a name and a data type.

Workflow entries
entries:
  - name: image input
    data_type: Image
  - name: context
    data_type: Text

The data type must be one of the following three: Image, Text or Number.
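A sketch of how such entry declarations could be validated before use (illustrative only; `validate_entry` is a hypothetical helper, not a platform API):

```python
# Illustrative sketch: check that an entry has both mandatory fields
# and one of the three supported data types.
ALLOWED_TYPES = {"Image", "Text", "Number"}

def validate_entry(entry):
    if "name" not in entry or "data_type" not in entry:
        raise ValueError("entry needs 'name' and 'data_type'")
    if entry["data_type"] not in ALLOWED_TYPES:
        raise ValueError(f"unsupported data_type: {entry['data_type']}")

validate_entry({"name": "image input", "data_type": "Image"})  # ok
```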

Outcomes

Outcomes are optional and are especially useful when you want to build an augmented technician API. They are composed of the following fields: a name, an output_of step name, a concept, and optionally a regions field.

Workflow outcomes
outcomes:
  - name: hello world
    output_of: hello world step
    concept:
      - name: speech to the world
        data_type: Text

The outcomes must correspond to the list of checkpoints that you want to enforce for the technician in the field; this is also the information that you will be able to display in the technician application.

In addition to this information, you can natively add a visualisation of any detected objects (bounding boxes) to provide more information to the technician in the field.

Workflow outcomes with regions
outcomes:
  - name: hello world
    output_of: hello world step
    concept:
      - name: speech to the world
        data_type: Text
    regions:
      - small_object_detector
      - big_object_detector

In the above example, all bounding boxes from the small_object_detector and big_object_detector steps are passed in the outcome (see below for the inference steps). The good practice is to list here all inference steps corresponding to an object detection task that are useful to explain the final prediction of the checkpoint.
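Conceptually, the outcome simply aggregates the regions produced by the listed steps. An illustrative Python sketch (`collect_regions` is a hypothetical helper, not a platform API, and the region structure is assumed):

```python
# Illustrative sketch: gather the bounding boxes produced by the steps
# listed under `regions`, as the outcome does with detection steps.
def collect_regions(step_outputs, region_steps):
    # step_outputs: {step_name: [region dicts with a "bbox" key]}
    boxes = []
    for step in region_steps:
        boxes.extend(step_outputs.get(step, []))
    return boxes

outputs = {
    "small_object_detector": [{"bbox": (0, 0, 10, 10), "label": "screw"}],
    "big_object_detector": [{"bbox": (5, 5, 80, 90), "label": "cabinet"}],
}
regions = collect_regions(outputs, ["small_object_detector", "big_object_detector"])
print(len(regions))  # -> 2
```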

Steps

Steps allow you to build your analysis graph. Several step types are available by default, but you can also write custom steps in Python if the operation you need to perform is not implemented.

The business logic is detailed via those steps which are listed one after the other (the order does not matter). A step is composed of the following fields: a name, a type, some inputs, and args. The args depend on the type of the step.

Workflow steps
steps:
  - name: my first step
    type: Inference
    inputs:
      - image input
    args:
      model_id: 12345
      concepts:
        - persons

Below are listed all steps that are available by default in the Deepomatic library.

Inference Step

An inference step lets you use any model trained on the Deepomatic platform. When writing a workflow, you only refer to a model via its id, and not to a model version (see here for the difference between a model and a model version).

Inference step
steps:
  - name: my first inference step
    type: Inference
    inputs:
      - image input
    args:
      model_id: 12345
      concepts:
        - my objects

The input should be an image, as this is the only input type currently supported by Deepomatic models.

The concepts field names the outputs of the step; you need these names to use the step's output in other steps.

You must make sure that the model ids that you specify in your workflow have all been trained in the same organisation on the Deepomatic platform. When publishing the application, you need to use credentials from the same organisation.

PredictionRouter Step

The PredictionRouter step divides the list of input regions according to the name of your model's concepts in the previous inference step. This step therefore necessarily follows an inference step.

The order in which you write the steps in your workflow file does not matter. The execution graph is reconstructed at runtime based on the inputs and outputs of each step.

PredictionRouter step
steps:
  - name: items_router
    type: PredictionsRouter
    inputs:
      - detector
    args:
      routing_name: items
      routing_values:
        - PLATS_PRINCIPAUX
        - BOUTEILLES
        - FRUITS
        - PAIN
      top_prediction: False

The routing_name corresponds to the concept name of the inference step preceding the current step. The routing_values must be coherent with the concept names that your model has been trained on. A PredictionRouter step is then only compatible with a specific set of models.

The last argument, top_prediction, is optional and lets you further filter the regions generated by the preceding inference step, by keeping only the top prediction when relevant. By default, top_prediction is False.
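The filtering behaviour can be pictured as follows (illustrative sketch only; `apply_top_prediction` is our own name, and the region structure is an assumption, not the platform API):

```python
# Illustrative sketch: with top_prediction enabled, each region keeps only
# its highest-scoring prediction for the routing concept.
def apply_top_prediction(regions, top_prediction):
    if not top_prediction:
        return regions
    filtered = []
    for region in regions:
        best = max(region["predictions"], key=lambda p: p["score"])
        filtered.append({**region, "predictions": [best]})
    return filtered

regions = [{"bbox": (0, 0, 5, 5),
            "predictions": [{"name": "FRUITS", "score": 0.7},
                            {"name": "PAIN", "score": 0.2}]}]
out = apply_top_prediction(regions, top_prediction=True)
print(out[0]["predictions"])  # -> [{'name': 'FRUITS', 'score': 0.7}]
```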

To use the output of a PredictionRouter step as input for another step, here is the syntax that you must use:

PredictionRouter step - Use output as input
steps:
  - name: detector
    type: Inference
    inputs:
      - image input
    args:
      model_id: 23456
      concepts:
        - items
  - name: items_router
    type: PredictionsRouter
    inputs:
      - detector
    args:
      routing_name: items
      routing_values:
        - PLATS_PRINCIPAUX
        - BOUTEILLES
        - FRUITS
        - PAIN
      top_prediction: False
  - name: fruits_classification
    type: Inference
    inputs:
      - items_router: 2
    args:
      model_id: 54321
      concepts:
        - fruits type

In the above example, regions are created with the detector step. The items_router step then creates branches for the different categories of items detected. Finally, for one of those categories, FRUITS, the bounding boxes are sent to another model to determine the type of fruit.
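The routing behaviour can be sketched in Python (illustrative only; `route_regions` and the region structure are assumptions, not the platform API):

```python
# Illustrative sketch: regions are grouped by their predicted concept name,
# producing one branch per routing value.
def route_regions(regions, routing_values):
    branches = {value: [] for value in routing_values}
    for region in regions:
        label = region["prediction"]  # top concept name for this region
        if label in branches:
            branches[label].append(region)
    return branches

detections = [
    {"bbox": (0, 0, 10, 10), "prediction": "FRUITS"},
    {"bbox": (20, 0, 30, 10), "prediction": "PAIN"},
    {"bbox": (0, 20, 10, 30), "prediction": "FRUITS"},
]
branches = route_regions(detections,
                         ["PLATS_PRINCIPAUX", "BOUTEILLES", "FRUITS", "PAIN"])
print(len(branches["FRUITS"]))  # -> 2
```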

RegionCounter step

The RegionCounter step gives you the capacity to count regions and it therefore only accepts a list of regions as input.

RegionCounter step
steps:
  - name: furniture_counter
    type: RegionCounter
    inputs:
      - furniture_detector
    args:
      entry: furniture_image
      concept_name: furniture_count
      count_only_concepts: [furniture]

The count is stored in a new region attached to the specified entry.

It is possible to add the optional argument count_only_concepts to filter the regions.
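The counting behaviour can be pictured with this illustrative sketch (`count_regions` and the region structure are our own assumptions, not the platform API):

```python
# Illustrative sketch: count input regions, optionally keeping only those
# whose concept is listed in count_only_concepts.
def count_regions(regions, count_only_concepts=None):
    if count_only_concepts is None:
        return len(regions)
    return sum(1 for r in regions if r["concept"] in count_only_concepts)

regions = [{"concept": "furniture"},
           {"concept": "furniture"},
           {"concept": "person"}]
print(count_regions(regions))                                     # -> 3
print(count_regions(regions, count_only_concepts=["furniture"]))  # -> 2
```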

Custom steps

Finally, it is possible to write custom steps in Python to implement any steps that are missing to build your specific workflow. To do so, you need to write the code of those custom steps in a separate Python file, custom_nodes.py.

For each custom step, you need to create a new class that inherits from the CustomNode class, and implement the __init__ and process methods. You will then be able to add steps with this type in your workflow.yaml file (the name of your class is the type of the step).

Custom step
from deepomatic.workflows.nodes import CustomNode

class MyCustomStep(CustomNode):
    def __init__(self, config, node_name, input_nodes, *args):
        super(MyCustomStep, self).__init__(config, node_name, input_nodes)

    def process(self, context, *regions):
        # Implement the logic of your custom step
        pass

Let's look at the following custom step and see how you can add arguments and inputs in the logic you implement.

steps:
  - name: my custom step
    type: MyCustomStep
    inputs:
      - image input
      - category
    args:
      my_first_arg: context_technician
      my_second_arg: intervention_type

You might indeed need to add arguments to your custom step, so that you can use the same custom step and adapt it to a specific usage (the model_id is for instance an argument of the Inference step). To do so, you need to specify those arguments in the signature of the __init__ method. You will also probably want to save those values as attributes to use them in the process method.

def __init__(self, config, node_name, input_nodes, my_first_arg, my_second_arg):
    super(MyCustomStep, self).__init__(config, node_name, input_nodes)
    self._context_technician = my_first_arg
    self._category = my_second_arg

In the same way, your step will use inputs that might be entries of your workflow or outputs from other steps. You can use those inputs in the process method to implement the logic that you need.

def process(self, context, *regions, image_input, category):

Finally, the process method should return either:

  • the list of modified or created regions

  • an empty list if nothing has been created or modified

  • None if you want to stop the execution of the following steps in the corresponding workflow branch
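These three return conventions can be pictured with a small illustrative sketch of how an engine might interpret them (`handle_step_result` is a hypothetical name, not part of the platform):

```python
# Illustrative sketch: interpret the value returned by a custom step's
# process method, following the three conventions above.
def handle_step_result(result):
    if result is None:
        return "stop-branch"          # halt the following steps on this branch
    if result == []:
        return "continue-unchanged"   # nothing created or modified
    return "continue-with-regions"    # propagate the new/modified regions

print(handle_step_result(None))                      # -> stop-branch
print(handle_step_result([]))                        # -> continue-unchanged
print(handle_step_result([{"bbox": (0, 0, 1, 1)}]))  # -> continue-with-regions
```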