On-premises Server

Hardware requirements

Deepomatic can run using the following minimum requirements:

  • A processor supporting the x86-64 instruction set.

  • 4GB of free RAM.

  • 10GB of storage.

However, the recommended requirements are:

  • A quad core processor supporting the x86-64 instruction set.

  • 8GB of free RAM.

  • 40GB of storage.
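To quickly check a candidate machine against these numbers, a small shell sketch like the one below can help. The thresholds are the recommended values from the list above; the probing commands assume a standard Linux userland (nproc, /proc/meminfo, GNU df):

```shell
#!/bin/sh
# Compare a measured value against a required minimum and report the result.
check() {  # usage: check <label> <actual> <required>
    if [ "$2" -ge "$3" ]; then
        echo "OK   $1: $2 (>= $3)"
    else
        echo "WARN $1: $2 (< $3)"
    fi
}

# Probe the host (assumes a standard Linux userland).
cores=$(nproc)
ram_gb=$(awk '/MemAvailable/ { printf "%d", $2 / 1024 / 1024 }' /proc/meminfo)
disk_gb=$(df -BG --output=avail / | tail -n 1 | tr -dc '0-9')

# Recommended values from the list above.
check "CPU cores" "$cores" 4
check "Free RAM (GB)" "$ram_gb" 8
check "Free disk (GB)" "$disk_gb" 40
```

A WARN line does not mean Deepomatic cannot run (see the minimum requirements above), only that the host is below the recommended configuration.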

Deep learning models require a lot of computation, and we recommend using a dedicated hardware accelerator to keep inference tractable. The supported accelerators are listed below:

  • NVIDIA GPU accelerators from the Tesla, Quadro, RTX, or GTX families, with CUDA compute capability 3.0 or higher.

  • Intel Movidius accelerators (see, for example, the modules manufactured by AAEON).

  • Intel GPU: Iris Pro Graphics and Intel HD Graphics, on Intel Core processors from the 6th to 10th generation or Intel Xeon (excluding the E5 family).

  • Intel CPU: Intel Core processors from the 6th to 10th generation or Intel Xeon (excluding the E5 family).

Software requirements

Operating system

Deepomatic relies on Docker to distribute its software, so the operating system does not matter much as long as you manage to install the drivers for your deep learning accelerator. That said, we recommend a Linux distribution, more specifically Ubuntu 18.04.

Core requirements

Deepomatic relies on Docker to distribute its software. You will thus need to install Docker and Docker Compose on a compatible OS (we recommend Ubuntu 18.04).

You will also need to install additional software depending on the deep learning accelerator you have chosen.

Nvidia GPU

You need to install Nvidia drivers 410.48+ and Nvidia Docker.

On Ubuntu 18.04+, you can install the drivers with:

sudo apt update
sudo apt install --no-install-recommends nvidia-driver-418

Intel Movidius

You will need to install the HDDL driver on your host to take advantage of the Movidius chips. See the OpenVino manual.

Intel GPU

You will need to install the NEO OpenCL driver on your host to take advantage of the GPU chips. See the OpenVino manual.

Intel CPU

You do not need to install additional software.

Set up files

In a dedicated directory, create a file named main.yml with the following content:

version: "2.4"
volumes:
  deepomatic-resources:
services:
  resource-server:
    restart: always
    image: deepomatic/run-resource-server:0.5.0
    environment:
      - DEEPOMATIC_API_URL=https://api.deepomatic.com
      - DEEPOMATIC_API_KEY=${DEEPOMATIC_API_KEY} # Replace ${DEEPOMATIC_API_KEY} with yours
      - DEEPOMATIC_SITE_ID=${DEEPOMATIC_SITE_ID} # Replace ${DEEPOMATIC_SITE_ID} with yours
      - DOWNLOAD_ON_STARTUP=1
      - INIT_SYSTEM=circus
    volumes:
      - deepomatic-resources:/var/lib/deepomatic
  rabbitmq:
    restart: always
    image: rabbitmq:3.7
    expose:
      - 5672
    ports:
      - 5672:5672
    environment:
      - RABBITMQ_DEFAULT_USER=user
      - RABBITMQ_DEFAULT_PASS=password
      - RABBITMQ_DEFAULT_VHOST=deepomatic
  redis:
    restart: always
    image: "redis:5"
    expose:
      - 6379
  operate-showcase-ui:
    restart: always
    depends_on:
      - redis
    image: deepomatic/operate-showcase-ui:2.4.0
    devices:
      - /dev/video0:/dev/video0
    ports:
      - 8080:8080
    environment:
      - STUDIO_URL=https://studio.deepomatic.com
      - STUDIO_TOKEN=${DEEPOMATIC_API_KEY} # Replace ${DEEPOMATIC_API_KEY} with yours
      - ORGANIZATION=${ORGANIZATION} # Replace ${ORGANIZATION} with yours
      - LOCATION=${LOCATION} # Replace ${LOCATION} with the site name
      - DEEPOMATIC_CONFIG_DIR=/etc/deepomatic/cameras
      #- OPENCV_FFMPEG_CAPTURE_OPTIONS=rtsp_transport;udp
    volumes:
      - ./cameras:/etc/deepomatic/cameras
      - ./videos:/videos/
      - ./metadata.json:/etc/deepomatic/metadata.json
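Instead of editing the ${...} placeholders directly in the file, you can keep them as-is and let docker-compose substitute them from the environment. docker-compose also automatically reads a file named .env located in the directory you run it from, so one convenient option is to store the values there (the values below are placeholders, not real credentials):

```
DEEPOMATIC_API_KEY=0123456789abcdef0123456789abcdef
DEEPOMATIC_SITE_ID=01234567-89ab-cdef-0123-456789abcdef
ORGANIZATION=my-organization
LOCATION=my-site
```

With a .env file in place, the export commands shown at the end of this page become unnecessary.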

Let's now create a file for the inference service. Choose the section below that matches your hardware.

Nvidia GPU

Make sure you have installed Nvidia drivers 410.48+ and Nvidia Docker.

On Ubuntu, you can install the drivers with:

sudo apt update
sudo apt install --no-install-recommends nvidia-driver-418

Create a file named inference.yml with the following content:

version: "2.4"
services:
  neural-worker:
    restart: always
    image: deepomatic/run-neural-worker:0.5.0-native
    runtime: nvidia
    environment:
      - GPU_IDS=0
      - INIT_SYSTEM=circus
      - AUTOSTART_WORKER=false
      - AMQP_URL=amqp://user:password@rabbitmq:5672/deepomatic
      - DEEPOMATIC_STORAGE_DIR=/var/lib/deepomatic/services/worker-nn
      - WORKFLOWS_PATH=/var/lib/deepomatic/services/worker-nn/resources/workflows.json
    volumes:
      - deepomatic-resources:/var/lib/deepomatic
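A note on runtime: nvidia above: it requires the NVIDIA runtime to be registered with the Docker daemon. Installing Nvidia Docker (the nvidia-docker2 package) normally takes care of this; if docker-compose reports an unknown runtime, check that /etc/docker/daemon.json contains an entry along these lines, then restart Docker (sudo systemctl restart docker):

```
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```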
Intel Movidius

Make sure you have installed the HDDL driver on your host to take advantage of the Movidius chips. See the OpenVino manual. If it is installed correctly, /dev/ion should be present on the host.

Make sure the HDDL service is running. This is typically done by running the following in another terminal:

export HDDL_INSTALL_DIR=/opt/intel/openvino/inference_engine/external/hddl
export LD_LIBRARY_PATH=${HDDL_INSTALL_DIR}/lib:${LD_LIBRARY_PATH}
${HDDL_INSTALL_DIR}/bin/hddldaemon
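If you prefer not to keep a terminal open for the daemon, one option is to wrap it in a systemd unit so it starts on boot and restarts on failure. The sketch below is an assumption-laden example, not part of the OpenVino documentation; it hardcodes the install path used above, so adjust it to your installation:

```
# /etc/systemd/system/hddldaemon.service
[Unit]
Description=Intel Movidius HDDL daemon
After=network.target

[Service]
Environment=HDDL_INSTALL_DIR=/opt/intel/openvino/inference_engine/external/hddl
Environment=LD_LIBRARY_PATH=/opt/intel/openvino/inference_engine/external/hddl/lib
ExecStart=/opt/intel/openvino/inference_engine/external/hddl/bin/hddldaemon
Restart=always

[Install]
WantedBy=multi-user.target
```

Enable it with sudo systemctl enable --now hddldaemon.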

Create a file named inference.yml with the following content:

version: "2.4"
services:
  neural-worker:
    privileged: true
    restart: always
    image: deepomatic/run-neural-worker:0.5.0-openvino
    devices:
      - /dev/ion:/dev/ion
    environment:
      - OPENVINO_PLUGIN=HDDL
      - INIT_SYSTEM=circus
      - AUTOSTART_WORKER=false
      - AMQP_URL=amqp://user:password@rabbitmq:5672/deepomatic
      - DEEPOMATIC_STORAGE_DIR=/var/lib/deepomatic/services/worker-nn
      - WORKFLOWS_PATH=/var/lib/deepomatic/services/worker-nn/resources/workflows.json
    volumes:
      - /var/tmp:/var/tmp
      - deepomatic-resources:/var/lib/deepomatic
Intel GPU

Make sure you have installed the NEO OpenCL driver on your host to take advantage of the GPU chips. See the OpenVino manual. If it is installed correctly, /dev/dri should be present on the host.

Create a file named inference.yml with the following content:

version: "2.4"
services:
  neural-worker:
    restart: always
    image: deepomatic/run-neural-worker:0.5.0-openvino
    devices:
      - /dev/dri:/dev/dri
    environment:
      - OPENVINO_PLUGIN=GPU
      - INIT_SYSTEM=circus
      - AUTOSTART_WORKER=false
      - AMQP_URL=amqp://user:password@rabbitmq:5672/deepomatic
      - DEEPOMATIC_STORAGE_DIR=/var/lib/deepomatic/services/worker-nn
      - WORKFLOWS_PATH=/var/lib/deepomatic/services/worker-nn/resources/workflows.json
    volumes:
      - deepomatic-resources:/var/lib/deepomatic
Intel CPU

Create a file named inference.yml with the following content:

version: "2.4"
services:
  neural-worker:
    restart: always
    image: deepomatic/run-neural-worker:0.5.0-openvino
    environment:
      - OPENVINO_PLUGIN=CPU
      - INIT_SYSTEM=circus
      - AUTOSTART_WORKER=false
      - AMQP_URL=amqp://user:password@rabbitmq:5672/deepomatic
      - DEEPOMATIC_STORAGE_DIR=/var/lib/deepomatic/services/worker-nn
      - WORKFLOWS_PATH=/var/lib/deepomatic/services/worker-nn/resources/workflows.json
    volumes:
      - deepomatic-resources:/var/lib/deepomatic

Once ready, launch all the services with the following commands. Do not forget to replace the values of DEEPOMATIC_API_KEY and DEEPOMATIC_SITE_ID with the credentials obtained for the corresponding site in the Deployment section of the Deepomatic platform.

export DEEPOMATIC_API_KEY=0123456789abcdef0123456789abcdef # Put your API key here
export DEEPOMATIC_SITE_ID=01234567-89ab-cdef-0123-456789abcdef # Put the site ID here
docker-compose -f main.yml -f inference.yml up -d
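Since a mistyped credential only shows up later as failing API calls, it can be worth sanity-checking the formats before launching. The patterns below are assumptions inferred from the placeholder values above (32 lowercase hex characters for the API key, a UUID for the site ID), not a documented contract; adjust them if your credentials differ:

```shell
#!/bin/sh
# Hypothetical sanity checks: the expected formats are inferred from the
# placeholder values above, not from a documented contract.
is_api_key() { printf '%s' "$1" | grep -Eq '^[0-9a-f]{32}$'; }
is_site_id() { printf '%s' "$1" | grep -Eq '^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$'; }

if is_api_key "${DEEPOMATIC_API_KEY}" && is_site_id "${DEEPOMATIC_SITE_ID}"; then
    echo "Credentials look well-formed."
else
    echo "Warning: DEEPOMATIC_API_KEY or DEEPOMATIC_SITE_ID looks malformed." >&2
fi
```

If a check fails, re-copy the values from the Deployment section of the platform. Once the stack is up, docker-compose -f main.yml -f inference.yml ps shows the state of each service.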