Using work order batch processing
Rather than integrating each endpoint of the Field Services API individually, you can send batches of work orders so that they are analysed quickly and without the constraints of a REST API integration.
To use batch processing, the archive must respect the format defined below.
- The archive should implement the following tree structure, with one folder per work order. All images and the files containing metadata and parameters for each work order should then be stored within each of these folders.
.
├── WO_123456/
│ ├── metadata.json
│ ├── parameters.json
│ ├── task_group_name_A-0.jpg
│ ├── task_group_name_A-1.jpg
│ ├── task_group_name_B-2.jpg
│ ├── ...
│ └── task_group_name_X-8.jpg
├── WO_789123/
│ ├── metadata.json
│ ├── parameters.json
│ ├── task_group_name_A-2.jpg
│ ├── task_group_name_B-0.jpg
│ ├── task_group_name_B-1.jpg
│ ├── ...
│ └── task_group_name_X-67.jpg
│
├── ...
│
└── WO_456789/
├── metadata.json
├── parameters.json
├── task_group_name_A.jpg
├── task_group_name_B-0.jpg
├── task_group_name_B-1.jpg
├── ...
└── task_group_name_X.jpg
- The name of each directory must correspond to a unique id that identifies the work order.
- Each image should be named according to the task group to which it relates (the task group corresponds to the type of the image; see the page of your application version for the list of admissible task groups).
- Optionally, you can add a suffix "-X", where X is a number, at the end of the photo name. Photos with this suffix are processed in the order it defines; photos without a suffix are processed after those with one.
- The metadata.json and parameters.json files are JSON files containing dictionaries with all the information specified on your application page on the Deepomatic platform. If no metadata or parameters are required according to the information available on the application page, there is no need to include the corresponding empty file or files.
However, the analysis of the work order will not be possible if the parameters file does not contain all the required information as specified on the Deepomatic platform.
To upload and start processing a batch, you must use a dedicated API.
A first way to use this API is to run the Python script below with the following arguments:
- endpoint: the deployment site URL, of the form https://api.{$SITE_ID}.customer.deepomatic.com - your $SITE_ID can be found on the platform.
- filename: the local path of the archive, in the format specified above.
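For illustration only, the endpoint argument can be derived from your $SITE_ID as follows; `endpoint_for` is a hypothetical helper written for this example, and `my-site-id` is a placeholder, not a real site id.

```python
def endpoint_for(site_id: str) -> str:
    # Build the deployment endpoint URL from the site id found on the platform
    return f"https://api.{site_id}.customer.deepomatic.com"
```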
Script execution
python3 upload.py endpoint filename
upload.py
import os
import sys

import requests

CHUNK_SIZE = 262144 * 2


def upload(endpoint: str, filepath: str):
    # Setup base headers with auth token
    headers = {
        'Authorization': f'Token {os.getenv("CUSTOMER_API_KEY")}',
        'content-type': 'application/json'
    }
    # Use endpoint to get a resumable url
    try:
        response = requests.post(
            f"{endpoint}/v0.2/batches",
            headers=headers
        )
        # Check for bad answers
        response.raise_for_status()
        resumable_url = response.json()["upload_url"]
    except Exception as err:
        print(err)
        sys.exit(1)
    print(f"Using resumable_url: {resumable_url}")
    # Setup chunk tracking variables
    index = 0
    offset = 0
    content_size = os.stat(filepath).st_size
    # No more auth on the resumable_url.
    # Setting content-type
    headers = {
        'content-type': 'application/octet-stream'
    }
    with open(filepath, "rb") as archive:
        while True:
            chunk = archive.read(CHUNK_SIZE)
            if not chunk:
                break
            offset = index + len(chunk)
            headers['Content-Range'] = 'bytes %s-%s/%s' % (index, offset - 1, content_size)
            index = offset
            try:
                response = requests.put(resumable_url, data=chunk, headers=headers)
                print("response: %s, Content-Range: %s" % (response, headers['Content-Range']))
                print(response.text)
                response.raise_for_status()
            except Exception as err:
                print(err)


if __name__ == "__main__":
    if "--help" in sys.argv or len(sys.argv) != 3:
        print(
            "Usage:"
            " upload.py <endpoint> <filepath>"
        )
        sys.exit(1)
    upload(sys.argv[1], sys.argv[2])
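The Content-Range bookkeeping in the upload loop above can be checked in isolation. The helper below reproduces, chunk by chunk, the headers the script emits for a file of a given size; `content_ranges` is a name invented for this sketch, not part of the API.

```python
def content_ranges(content_size: int, chunk_size: int = 262144 * 2):
    """Compute the Content-Range header for each chunk, as upload.py does."""
    index = 0
    ranges = []
    while index < content_size:
        # Each header covers bytes [index, offset - 1] of the whole file
        offset = min(index + chunk_size, content_size)
        ranges.append('bytes %s-%s/%s' % (index, offset - 1, content_size))
        index = offset
    return ranges
```

Note that the end of the range is inclusive, which is why the script subtracts 1 from the offset, and the total file size is repeated in every header so the server can tell when the last chunk has arrived.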
To upload a batch and launch an analysis, use the following command and specify:
- either the batch id, if you have already created the batch via the GUI, or the upload url otherwise
- the archive path
Batch processing
deepo site work_order batch upload -i batch_id -u upload_url -f archive_path
On the Deepomatic platform, you can access the list of batches created for a given site and track their processing status.
To access this interface, go to the Deployments section and click on the options button for the corresponding site.

If this option is not activated, contact your Customer Success Manager.
Here is an example of how the processing status of a batch changes:

Finally, once batch processing is complete, you can download a file listing the processing errors that occurred for a given batch, which helps you understand the nature of the errors and which work orders were affected.

It is also possible on this interface to manually create a batch, retrieve its id, and then use the Deepomatic CLI to upload the archive.