Downloading Your Model

Sometimes, you might want to have access to the raw model itself and run it on your end. If the need arises, you can download the model and its associated files, such as:

  • Raw model: either a TensorFlow, Caffe, or Darknet file containing the model weights.

  • Preprocess: all the preprocessing operations to be applied to each image.

  • Postprocess: all the postprocessing operations such as Non-Maximum Suppression.

  • Outputs: the correspondence between the output tensors and the different labels.
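As a concrete illustration of one postprocessing step, here is a minimal sketch of standard greedy Non-Maximum Suppression on axis-aligned boxes. This is the generic textbook algorithm, not necessarily the exact implementation described in the postprocess file:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


def nms(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: keep the highest-scoring box, drop overlapping ones, repeat."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```

For example, `nms([(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 30, 30)], [0.9, 0.8, 0.7])` keeps the first and third boxes, since the second one heavily overlaps the first.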


First, you need to set up your Python environment. We use Pipenv for this purpose, but the instructions are easily adaptable to your Python management tool of choice.

Install Python packages and start a new shell
pipenv --python 3.6 # setup pipenv with python 3.6
pipenv install requests
pipenv install tensorflow
pipenv install deepomatic-api

Then you have to set up your Deepomatic Studio credentials as environment variables to be passed to the Deepomatic Python client.

Setup Deepomatic Studio credentials and start Python
export DEEPOMATIC_APP_ID=xxxxxxxxxxxx
export DEEPOMATIC_API_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
pipenv run python

Your Python shell is now all set!
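Before going further, you can verify from Python that the credentials are actually visible to the process; `check_credentials` below is just a hypothetical convenience helper, not part of the Deepomatic client:

```python
import os


def check_credentials():
    """Return the names of any Deepomatic credential variables missing from the environment."""
    required = ('DEEPOMATIC_APP_ID', 'DEEPOMATIC_API_KEY')
    return [name for name in required if not os.getenv(name)]


missing = check_credentials()
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
else:
    print("Credentials found, you're all set!")
```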

Associated Files

In the next section, we will proceed in several steps to retrieve these different files:

  1. Initialize the Deepomatic Client with our credentials

  2. Retrieve the recognition version and download the postprocess file

  3. Retrieve the network and download the preprocess file

  4. Retrieve the recognition specification and download the outputs file

First, you need to specify the recognition version you want to retrieve.

Specify the recognition version
export DEEPOMATIC_VERSION_ID=xxxxxxxx

We're now ready to download the different files.

Retrieve associated files
import os
import json

from deepomatic.api.client import Client


def pretty_save_json_to_file(json_data, json_path):
    """Helper function to save JSON to a file in a readable fashion."""
    try:
        with open(json_path, 'w') as json_file:
            json.dump(json_data, json_file, indent=4, sort_keys=True)
    except (TypeError, OSError):
        print(f"Could not save file {json_path} in json format.")


# Initialize client
app_id = os.getenv('DEEPOMATIC_APP_ID')
api_key = os.getenv('DEEPOMATIC_API_KEY')
client = Client(app_id, api_key)

# Retrieve the recognition version and save the postprocess file
version_id = os.getenv('DEEPOMATIC_VERSION_ID')
version = client.RecognitionVersion.retrieve(version_id)
version_data = version.data()
postprocess_file = 'postprocess.json'
pretty_save_json_to_file(version_data['post_processings'], postprocess_file)
print(f"Recognition Version number {version_id} saved to {postprocess_file}")

# Retrieve the network and save the preprocess file
network_id = version_data['network_id']
network = client.Network.retrieve(network_id)
network_data = network.data()
preprocess_file = 'preprocess.json'
pretty_save_json_to_file(network_data['preprocessing'], preprocess_file)
print(f"Network number {network_id} saved to {preprocess_file}")

# Retrieve the recognition specification and save the outputs file
spec_id = version_data['spec_id']
spec = client.RecognitionSpec.retrieve(spec_id)
spec_data = spec.data()
outputs_file = 'outputs.json'
pretty_save_json_to_file(spec_data['outputs'], outputs_file)
print(f"Recognition Specification {spec_id} saved to {outputs_file}")
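To inspect the files you just saved, a small companion loader can be handy; this is a trivial sketch mirroring `pretty_save_json_to_file`, not part of the Deepomatic client:

```python
import json


def load_json_file(json_path):
    """Read back a JSON file saved with pretty_save_json_to_file."""
    with open(json_path) as json_file:
        return json.load(json_file)


# Example (assuming the files produced above exist in the current directory):
# outputs = load_json_file('outputs.json')
# print(f"{len(outputs)} outputs found")
```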

Download Raw Model

Now that we've retrieved all the additional files, we will download the raw model with all the architecture weights. Note that this may take some time, as the file can weigh up to several gigabytes.

Download and Extract Raw Model
import zipfile

import requests

# Download the raw model
model_url = f"https://api.deepomatic.com/v0.7/networks/{network_id}/download"
print("Starting network download...")
r = requests.get(model_url, headers={'X-APP-ID': app_id, 'X-API-KEY': api_key})
model_file = 'network.zip'
with open(model_file, 'wb') as f:
    f.write(r.content)
print(f"Raw model saved to {model_file}")

# Extract the network file into the 'network' directory
with zipfile.ZipFile(model_file, 'r') as zip_ref:
    zip_ref.extractall('network')

The raw model is located in the new network directory.
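Since the archive can weigh several gigabytes, you may prefer streaming the response to disk in chunks rather than holding it all in memory. Here is a sketch: `write_chunks` is a hypothetical helper, and the 1 MiB chunk size is an arbitrary choice.

```python
def write_chunks(chunks, path):
    """Write an iterable of byte chunks to a file, returning the total bytes written."""
    total = 0
    with open(path, 'wb') as f:
        for chunk in chunks:
            if chunk:  # skip empty keep-alive chunks
                f.write(chunk)
                total += len(chunk)
    return total


# Streaming variant of the download above (same URL and headers as before):
# r = requests.get(model_url,
#                  headers={'X-APP-ID': app_id, 'X-API-KEY': api_key},
#                  stream=True)
# write_chunks(r.iter_content(chunk_size=1 << 20), 'network.zip')
```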

Exploring The Model

As an illustration, we provide below the code to load a TensorFlow model and display the node names of its computation graph.

import tensorflow as tf
from tensorflow.python.platform import gfile
from tensorflow.core.protobuf import saved_model_pb2
from tensorflow.python.util import compat

# Open the TensorFlow model and list all nodes
model_filename = 'network/saved_model.pb'
nodes_file = 'nodes.json'
with tf.Session() as sess:
    with gfile.FastGFile(model_filename, 'rb') as f:
        # Convert the pb file to a TensorFlow graph
        data = compat.as_bytes(f.read())
        sm = saved_model_pb2.SavedModel()
        sm.ParseFromString(data)
        g_in = tf.import_graph_def(sm.meta_graphs[0].graph_def)
        # Save all node names in the graph
        nodes = [n.name for n in tf.get_default_graph().as_graph_def().node]
        pretty_save_json_to_file(nodes, nodes_file)
        print(f"Model nodes saved to {nodes_file}")

For The Lazy

In case you don't want to go through all the steps detailed above, you will find below all the scripts and generated files.

Going Further

We do not provide any additional support on how to deploy and exploit the models on your own infrastructure outside of the Deepomatic Run framework.

Indeed, industrialising the deployment of neural networks is a complex task, and it has been the main driver behind the development of the whole Deepomatic Software Suite.