Available architectures

Here is the complete list of the neural network architectures available in Studio. Links to the research papers are provided when available.

Pre-training

All our architectures are pre-trained on real-world, open datasets. Training them from scratch would take days or weeks before they produce correct predictions. Thanks to this pre-training, all you do is fine-tune the model to your specific use case. The pre-training dataset of each architecture is listed in the "Trained on" column of the tables below.
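
In practice, fine-tuning means reusing the pre-trained weights and only adapting the model to your own labels. The sketch below is purely illustrative, it uses torchvision as an assumption and is not the platform's training code: it loads a ResNet-50 pre-trained on ImageNet, freezes the backbone, and trains a new classification head on a dummy 224x224 batch.

```python
# Illustrative fine-tuning sketch (torchvision is an assumption; the Deepomatic
# platform handles this for you when you train a model version).
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

num_classes = 5  # hypothetical number of classes in your use case

# Load a ResNet-50 backbone pre-trained on ImageNet.
model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)

# Freeze the pre-trained backbone so only the new head is updated.
for param in model.parameters():
    param.requires_grad = False

# Replace the ImageNet head (1000 classes) with a head for your classes.
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch at the backbone's 224x224 input size.
images = torch.randn(32, 3, 224, 224)           # training batch size 32, as in the table below
labels = torch.randint(0, num_classes, (32,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```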

Classification and tagging backbone architectures

| Backbone | Inference time on Nvidia T4 (ms) (Sync) | Inference time on Nvidia T4 (ms) (Async) | Input size | Training batch size | Trained on | Research paper |
| --- | --- | --- | --- | --- | --- | --- |
| EfficientNet B0 | 25.0 | 15.9 | 224x224 | 32 | ImageNet | arXiv |
| EfficientNet B1 | 27.6 | 17.5 | 240x240 | 32 | ImageNet | arXiv |
| EfficientNet B2 | 28.5 | 18.2 | 260x260 | 32 | ImageNet | arXiv |
| EfficientNet B3 | 30.6 | 19.6 | 300x300 | 32 | ImageNet | arXiv |
| EfficientNet B4 | 35.2 | 23.2 | 380x380 | 16 | ImageNet | arXiv |
| EfficientNet B5 | 47.8 | 35.6 | 456x456 | 8 | ImageNet | arXiv |
| EfficientNet B6 | 66.8 | 55.6 | 528x528 | 4 | ImageNet | arXiv |
| Inception-ResNet v2 | 38.8 | 28.2 | 299x299 | 32 | ImageNet | arXiv |
| ResNet-50 | 28.6 | 17.8 | 224x224 | 32 | ImageNet | arXiv |
| ResNet-101 | 33.8 | 21.2 | 224x224 | 32 | ImageNet | arXiv |
| ResNet-152 | 37.0 | 24.6 | 224x224 | 32 | ImageNet | arXiv |
| Inception v1 | 22.8 | 15.0 | 224x224 | 32 | ImageNet | arXiv |
| Inception v2 | 25.4 | 15.9 | 224x224 | 32 | ImageNet | arXiv |
| Inception v3 | 28.4 | 20.2 | 299x299 | 32 | ImageNet | arXiv |
| Inception v4 | 34.8 | 25.5 | 299x299 | 32 | ImageNet | arXiv |
| VGG 16 | 78.3 | 67.0 | 224x224 | 32 | ImageNet | arXiv |
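
The Input size column is the resolution each backbone expects: images are resized to it before being fed to the network. As a rough illustration only (this is not the platform's actual preprocessing, and the normalization used here is an assumption), preparing a 224x224 input could look like this:

```python
# Illustrative preprocessing sketch: resize an image to a backbone's input size.
# The exact resizing and normalization are handled by the platform; the values
# below are assumptions.
import numpy as np
from PIL import Image

INPUT_SIZE = (224, 224)  # e.g. ResNet-50 or EfficientNet B0 from the table above

image = Image.open("example.jpg").convert("RGB")
image = image.resize(INPUT_SIZE, Image.BILINEAR)

# Scale pixel values to [0, 1]; real pipelines may also subtract a dataset mean.
batch = np.expand_dims(np.asarray(image, dtype=np.float32) / 255.0, axis=0)
print(batch.shape)  # (1, 224, 224, 3)
```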

Detection meta-architectures

| Architecture | Backbone | Inference time on Nvidia T4 (ms) (Sync) | Inference time on Nvidia T4 (ms) (Async) | Input size | Training batch size | Trained on | Research paper |
| --- | --- | --- | --- | --- | --- | --- | --- |
| EfficientDet | Eff. Net B0 | 82.6 | 49.2 | 512x512 | 16 | COCO 2018 | arXiv |
| EfficientDet | Eff. Net B1 | 117 | 69.3 | 640x640 | 8 | COCO 2018 | arXiv |
| EfficientDet | Eff. Net B2 | 158 | 96.3 | 768x768 | 4 | COCO 2018 | arXiv |
| EfficientDet | Eff. Net B3 | 221 | 132 | 896x896 | 2 | COCO 2018 | arXiv |
| EfficientDet | Eff. Net B4 | 316 | 173 | 1024x1024 | 1 | COCO 2018 | arXiv |
| Yolo v3 | Darknet 53 | 56.7 | 36.0 | 416x416 | 64 | ImageNet 2012 | arXiv |
| Yolo v8 | Nano | 44.3 | 28.6 | 640x640 | 16 | COCO 2018 | arXiv |
| Yolo v8 | Small | 4.74 | 31.6 | 640x640 | 16 | COCO 2018 | arXiv |
| Yolo v8 | Medium | 57.4 | 45.0 | 640x640 | 16 | COCO 2018 | arXiv |
| Yolo v8 | Large | 75.5 | 64.3 | 640x640 | 8 | COCO 2018 | arXiv |
| Yolo v8 | Extra | 99.3 | 87.7 | 640x640 | 8 | COCO 2018 | arXiv |
| Faster-RCNN | ResNet-50 | N/A | N/A | 1024x1024* | 1 | COCO 2018 | arXiv |
| Faster-RCNN | ResNet-101 | 174 | 169 | 1024x1024* | 1 | COCO 2018 | arXiv |
| SSD | Inception v2 | 34.8 | 17.0 | 300x300 | 24 | COCO 2018 | arXiv |
| SSD | MobileNet v1 | 32.5 | 16.4 | 300x300 | 24 | COCO 2018 | arXiv |
| SSD | MobileNet v2 | 32.4 | 18.1 | 300x300 | 24 | COCO 2018 | arXiv |
| SSDLite | MobileNet v2 | 28.7 | 16.2 | 300x300 | 24 | COCO 2018 | arXiv |

(*) Faster-RCNN does not require a fixed image input size: it can accept images ranging from 600 to 1024 pixels.
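
One common way this range is handled, shown in the hypothetical sketch below, is a keep-aspect-ratio resizer that scales the short side towards 600 pixels while capping the long side at 1024 pixels; the platform's exact resizing behaviour may differ.

```python
# Illustrative keep-aspect-ratio resizing sketch for the 600-1024 pixel range above.
# Many Faster-RCNN implementations resize inputs this way; this is an assumption,
# not a description of the platform's internals.
def faster_rcnn_scale(width: int, height: int,
                      min_side: int = 600, max_side: int = 1024) -> float:
    """Return the scale factor that brings the short side to min_side,
    without letting the long side exceed max_side."""
    short, long = min(width, height), max(width, height)
    scale = min_side / short
    if long * scale > max_side:
        scale = max_side / long
    return scale

# Example: a 1920x1080 image is scaled by ~0.533, giving roughly 1024x576.
print(faster_rcnn_scale(1920, 1080))
```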
