Sometimes, you might want to have access to the raw model itself and run it on your end. If the need arises, you can download the model and its associated files:
- Raw model: either a TensorFlow, Caffe, or Darknet file containing the model weights.
- Preprocess: all the preprocessing operations to be applied to each image.
- Postprocess: all the postprocessing operations, such as Non-Maximum Suppression.
- Outputs: the correspondence between the output tensors and the different labels.
First, you need to set up your Python environment. We use Pipenv for this purpose, but the instructions are easily adaptable to your Python environment manager of choice.
Install python packages and start a new shell

```bash
pipenv --python 3.6   # setup pipenv with python 3.6
pipenv install requests
pipenv install tensorflow
pipenv install deepomatic-api
```
Then you have to set up your Deepomatic Studio credentials as environment variables so they can be passed to the Deepomatic Python client.
Setup Deepomatic Studio credentials and start Python

```bash
export DEEPOMATIC_APP_ID=xxxxxxxxxxxx
export DEEPOMATIC_API_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
pipenv run python
```
Your Python shell is now all set!
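Before moving on, you can optionally check from the Python shell that the credentials were correctly exported. This quick sanity check is not part of the walkthrough itself and only relies on the standard library:

```python
import os

# Optional sanity check: the credentials must be exported in the shell
# that launched `pipenv run python` for the client to pick them up.
for var in ('DEEPOMATIC_APP_ID', 'DEEPOMATIC_API_KEY'):
    if not os.getenv(var):
        raise RuntimeError(f"Environment variable {var} is not set")
print("Deepomatic credentials found in the environment.")
```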
In the next section, we will proceed in several steps to retrieve these different files:
1. Initialize the Deepomatic client with our credentials
2. Retrieve the recognition version and download the postprocess file
3. Retrieve the network and download the preprocess file
4. Retrieve the recognition specification and download the outputs file
First, you need to specify the recognition version that you want to retrieve.
Specify the recognition version

```bash
export DEEPOMATIC_VERSION_ID=xxxxx
```
We're now ready to download the different files.
Retrieve associated files

```python
import os
import json

from deepomatic.api.client import Client


def pretty_save_json_to_file(json_data, json_path):
    """Helper function to save json to file in a readable fashion."""
    try:
        with open(json_path, 'w') as json_file:
            json.dump(json_data, json_file, indent=4, sort_keys=True)
    except Exception:
        print(f"Could not save file {json_path} in json format.")
        raise


# Initialize client
app_id = os.getenv('DEEPOMATIC_APP_ID')
api_key = os.getenv('DEEPOMATIC_API_KEY')
client = Client(app_id, api_key)

# Retrieve the recognition version
version_id = os.getenv('DEEPOMATIC_VERSION_ID')
version = client.RecognitionVersion.retrieve(version_id)
version_data = version.data()
postprocess_file = 'postprocess.json'
pretty_save_json_to_file(version_data['post_processings'], postprocess_file)
print(f"Recognition Version number {version_id} saved to {postprocess_file}")

# Retrieve the network
network_id = version_data['network_id']
network = client.Network.retrieve(network_id)
network_data = network.data()
preprocess_file = 'preprocess.json'
pretty_save_json_to_file(network_data['preprocessing'], preprocess_file)
print(f"Network number {network_id} saved to {preprocess_file}")

# Retrieve the recognition specification
spec_id = version_data['spec_id']
spec = client.RecognitionSpec.retrieve(spec_id)
spec_data = spec.data()
outputs_file = 'outputs.json'
pretty_save_json_to_file(spec_data['outputs'], outputs_file)
print(f"Recognition Specification {spec_id} saved to {outputs_file}")
```
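If you want to verify what was written to disk, you can reload one of the generated files from the same Python session. The snippet below is only a convenience check and makes no assumption about the exact schema of the Deepomatic JSON payloads:

```python
# Optional: reload a generated file to confirm it is valid JSON.
with open(postprocess_file) as f:
    postprocess = json.load(f)

# Print the top-level structure without assuming a particular schema.
if isinstance(postprocess, dict):
    print(f"{postprocess_file} keys: {list(postprocess.keys())}")
else:
    print(f"{postprocess_file} holds a {type(postprocess).__name__} of length {len(postprocess)}")
```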
Now that we've retrieved all the additional files, we can download the raw model with all the architecture weights. Note that this may take some time, as the file can weigh up to several gigabytes.
Download and Extract Raw Model

```python
import zipfile

import requests

# Download the raw model
model_url = f"https://api.deepomatic.com/v0.7/networks/{network_id}/download"
print("Starting network download...")
r = requests.get(model_url, headers={'X-APP-ID': app_id, 'X-API-KEY': api_key})
model_file = 'network.zip'
with open(model_file, 'wb') as f:
    f.write(r.content)
print(f"Raw model saved to {model_file}")

# Extract network file
zip_ref = zipfile.ZipFile(model_file, 'r')
zip_ref.extractall('network')
zip_ref.close()
```
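Note that `r.content` holds the whole archive in memory before it is written to disk. For multi-gigabyte networks you may prefer a streamed download; the sketch below is an optional alternative that reuses the same endpoint, credentials, and variables as the block above and relies on the standard `requests` streaming API:

```python
# Alternative: stream the archive to disk in 1 MiB chunks instead of
# loading it entirely into memory.
with requests.get(model_url,
                  headers={'X-APP-ID': app_id, 'X-API-KEY': api_key},
                  stream=True) as r:
    r.raise_for_status()
    with open(model_file, 'wb') as f:
        for chunk in r.iter_content(chunk_size=1024 * 1024):
            f.write(chunk)
print(f"Raw model saved to {model_file}")
```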
The raw model is located in the new `network` directory.
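The exact file names inside the archive depend on the framework of your model, so the simplest way to see what was extracted is to list the directory. A minimal sketch:

```python
import pathlib

# List everything extracted from network.zip; file names vary with the
# framework (TensorFlow, Caffe, Darknet) of the downloaded model.
for path in sorted(pathlib.Path('network').rglob('*')):
    print(path)
```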
As an illustration, we provide below the code to load a TensorFlow model and display the node names of the computation graph.
```python
import tensorflow as tf
from tensorflow.python.platform import gfile
from tensorflow.core.protobuf import saved_model_pb2
from tensorflow.python.util import compat

# Open tensorflow model and list all nodes
model_filename = 'network/saved_model.pb'
nodes_file = 'nodes.json'
with tf.Session() as sess:
    with gfile.FastGFile(model_filename, 'rb') as f:
        # Convert pb file to tensorflow graph
        data = compat.as_bytes(f.read())
        sm = saved_model_pb2.SavedModel()
        sm.ParseFromString(data)
        g_in = tf.import_graph_def(sm.meta_graphs[0].graph_def)

        # Print all nodes in graph
        nodes = [n.name for n in tf.get_default_graph().as_graph_def().node]
        pretty_save_json_to_file(nodes, nodes_file)
        print(f"Models Nodes saved to {nodes_file}")
```
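The snippet above uses the TensorFlow 1.x API (`tf.Session`, `gfile.FastGFile`, `tf.get_default_graph`). If `pipenv install tensorflow` resolved to TensorFlow 2.x in your environment, an equivalent listing can be obtained through the `tf.compat.v1` module; the sketch below assumes the same `network/saved_model.pb` layout:

```python
import tensorflow as tf
from tensorflow.core.protobuf import saved_model_pb2

# TensorFlow 2.x variant of the node listing above, via the v1 compat API.
model_filename = 'network/saved_model.pb'
graph = tf.Graph()
with graph.as_default():
    with tf.io.gfile.GFile(model_filename, 'rb') as f:
        sm = saved_model_pb2.SavedModel()
        sm.ParseFromString(f.read())
        tf.compat.v1.import_graph_def(sm.meta_graphs[0].graph_def)

nodes = [n.name for n in graph.as_graph_def().node]
print(f"{len(nodes)} nodes in the graph, first few: {nodes[:5]}")
```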
In case you don't want to go through all the steps detailed above, you will find below all the scripts and generated files.
We do not provide any additional support on how to deploy and run the models on your own infrastructure outside of the Deepomatic Run framework.
Indeed, industrialising the deployment of neural networks is a complex task, and it is the main driver behind the development of the whole Deepomatic Software Suite.