Model Conversion

This guide outlines steps to convert models so that they can be imported and used in Lumeo.

Lumeo can parse your model's outputs into objects or class attributes only if the model's output layers match the format Lumeo expects.

It also lists the expected model output layers for each Model Format and Architecture. You can verify whether your model's output layers match these using Netron.
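
If you prefer a programmatic check, the short sketch below lists a model's output layer names with the onnx Python package. It assumes the onnx package is installed and "model.onnx" is a placeholder path; it is only a convenience alternative to opening the file in Netron.

    # List the output layer names of an ONNX model (quick alternative to Netron).
    # Assumes: pip3 install onnx; "model.onnx" is a placeholder for your weights file.
    import onnx

    model = onnx.load("model.onnx")
    for output in model.graph.output:
        print(output.name)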

Follow the instructions in the table below to process or convert your model, then head over to AI Models for the steps required to add it to the Lumeo platform.

📘

Custom Model Output Parsing

If your model's output layers don't match the ones built into Lumeo and no conversion guide works for your case, you can write a Custom Model Parser.

Supported Formats and Architectures

| Model Architecture | Model Format | Expected Output Layers | Instructions |
|---|---|---|---|
| DetectNet | :white-check-mark: ONNX, Caffe, UFF, ETLT | | Natively supported. |
| FasterRCNN | :white-check-mark: ONNX, Caffe, UFF, ETLT | | Natively supported. |
| MobileNet | :white-check-mark: ONNX, Caffe, UFF, ETLT | | Natively supported. |
| MRCNN | :white-check-mark: ONNX, Caffe, UFF, ETLT | 1: generate_detections, 2: mask_head/mask_fcn_logits/BiasAdd | Natively supported. |
| Resnet | :white-check-mark: ONNX, Caffe, UFF, ETLT | 1: conv2d_bbox, 2: conv2d_cov/Sigmoid | Natively supported. |
| SSD | :white-check-mark: ONNX, Caffe, UFF, ETLT; :leftwards-arrow-with-hook: Tensorflow | 1: num_detections, 2: detection_scores, 3: detection_classes, 4: detection_boxes | ONNX, Caffe, UFF, ETLT: natively supported. Tensorflow: convert to ONNX using the guide below. |
| SSD Azure CustomVision | :white-check-mark: ONNX (Azure Customvision.ai General (compact) S1 Model) | 1: num_detections, 2: detected_scores, 3: detected_classes, 4: detected_boxes | Natively supported. Check the complete procedure in the Azure CustomVision guide. |
| YOLOv2, YOLOv2 Tiny, YOLOv3, YOLOv3 Tiny, YOLOv4, YOLOv4 Tiny | :white-check-mark: Darknet / YOLO Native | 1: num_detections, 2: detection_boxes, 3: detection_scores, 4: detection_classes | Natively supported. Check the YOLO Native Models (Darknet) guide below. |
| YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLOR, YOLOX, DAMO-YOLO, PP-YOLOE, YOLO-NAS | :white-check-mark: ONNX Lumeo YOLO; :no-entry-sign: ONNX; :no-entry-sign: PyTorch | 1: boxes, 2: scores, 3: classes | Convert the weights to ONNX Lumeo YOLO format using the guide below. |

Guides

Tensorflow to ONNX

1. Install the tf2onnx converter:

    pip3 install -U tf2onnx

2. To convert a Tensorflow SavedModel to ONNX, change the current working directory to the one that contains the saved_model.pb file, for example:

    cd /home/user/saved_model_tensorflow

3. Perform the conversion:

    python3 -m tf2onnx.convert --inputs-as-nchw sequential_1_input:0 --saved-model ./ --output ./converted_model.onnx

Alternatives:

    python3 -m tf2onnx.convert --inputs-as-nchw sequential_1_input:0 --tflite ./ --output ./converted_model.onnx
    python3 -m tf2onnx.convert --inputs-as-nchw sequential_1_input:0 --target tensorrt --tflite ./ --output ./converted_model.onnx

Note: Our inference engine expects the input data in NCHW format (N: batch size, C: channels, H: height, W: width), so you need to change Tensorflow's default input format (NHWC) to NCHW using the --inputs-as-nchw argument followed by the input layer name, which in the example above is sequential_1_input:0.
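
If you are not sure which input layer name to pass to --inputs-as-nchw, the sketch below is one way to list a SavedModel's serving inputs with TensorFlow. It assumes the model uses the default "serving_default" signature; the name tf2onnx expects may carry a :0 suffix, as in the sequential_1_input:0 example above.

    # Sketch: list a SavedModel's input names to find the value for --inputs-as-nchw.
    # Assumes TensorFlow is installed and the default "serving_default" signature is used.
    import tensorflow as tf

    loaded = tf.saved_model.load("./")                  # directory containing saved_model.pb
    signature = loaded.signatures["serving_default"]
    for name, spec in signature.structured_input_signature[1].items():
        print(name, spec.shape)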

A file named converted_model.onnx will be created in the same folder; this is the file you should upload as the weights file when updating your model in the Lumeo Console.
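
To confirm the conversion worked, a quick check like the sketch below (assuming onnxruntime is installed) loads the converted file and prints its input and output layers; the input shape should appear in NCHW order.

    # Sketch: verify converted_model.onnx loads and exposes an NCHW input.
    # Assumes: pip3 install onnxruntime
    import onnxruntime as ort

    session = ort.InferenceSession("converted_model.onnx", providers=["CPUExecutionProvider"])
    for inp in session.get_inputs():
        print("input:", inp.name, inp.shape)    # expect something like [N, 3, H, W]
    for out in session.get_outputs():
        print("output:", out.name, out.shape)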

YOLO Native Models (Darknet)

The original YOLO models (YOLOv2, YOLOv2 Tiny, YOLOv3, YOLOv3 Tiny, YOLOv4 & YOLOv4 Tiny) trained in the Darknet format can be imported into your Lumeo Application without performing any additional conversion.

The example below uses YOLO weights pretrained on the MS COCO dataset (80 classes). Here's the procedure for the YOLOv4 Tiny model:

  1. Download the model's cfg, weights, and class labels files (an optional sanity check of these files is sketched at the end of this guide).
wget https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4-tiny.cfg
wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.weights
wget https://raw.githubusercontent.com/AlexeyAB/darknet/master/data/coco.names
  2. Upload the model's files to an AI Model in the Lumeo Console.
    Click on Design -> AI Models -> Add Model and fill in the entries as shown in the following images:
    a) Type and Weights tab
    b) Parameters tab
    c) Inference parameters tab

After clicking the "Finish & Save model" button, you can start using your newly uploaded model in the AI Model Node.
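
As an optional sanity check of the files downloaded in step 1, the sketch below reads the network input size from the cfg and counts the class labels. Whether these are exactly the values the Parameters tab asks for is an assumption; treat this only as a quick consistency check.

    # Optional sanity check of the downloaded Darknet files (plain Python sketch).
    # MS COCO should yield 80 class labels; yolov4-tiny.cfg declares width/height in its [net] section.
    with open("coco.names") as f:
        labels = [line.strip() for line in f if line.strip()]
    print("classes:", len(labels))

    with open("yolov4-tiny.cfg") as f:
        for line in f:
            line = line.strip()
            if line.startswith("[") and line != "[net]":
                break                               # only inspect the [net] section
            if line.startswith(("width", "height")):
                print(line)                         # e.g. width=416 / height=416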

YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLOR, YOLOX, DAMO-YOLO, PP-YOLOE & YOLO-NAS conversion to ONNX Lumeo YOLO format

The procedure for converting these formats to ONNX Lumeo YOLO weights can be found in Lumeo's models-conversion repository.
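
After running the conversion, you can optionally confirm that the exported file exposes the output layers listed in the table above (boxes, scores, classes). The sketch below uses the onnx package; "converted_yolo.onnx" is a placeholder for whatever file the conversion produced.

    # Sketch: check that converted Lumeo YOLO weights expose the expected outputs.
    # "converted_yolo.onnx" is a placeholder file name.
    import onnx

    model = onnx.load("converted_yolo.onnx")
    names = [output.name for output in model.graph.output]
    print(names)
    missing = {"boxes", "scores", "classes"} - set(names)
    if missing:
        print("Warning: missing expected outputs:", missing)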