Model Conversion

This guide outlines steps to convert models so that they can be imported and used in Lumeo.

Lumeo can parse your model's outputs into objects or class attributes only if the model's output layers match the format Lumeo expects.

This guide lists the expected model output layers for each model format and architecture. You can verify whether your model's output layers match these using Netron.
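For example, Netron's pip-distributed viewer can open a model and show its graph so you can read the output layer names (the filename below is a placeholder; the Netron desktop app works equally well):

```shell
# Install the Netron model viewer (assumes pip3 is available)
pip3 install -U netron

# Serves an interactive graph view in your browser; scroll to the final
# nodes and compare their names against the Expected Output Layers column.
netron ./your_model.onnx
```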

Follow the instructions in the table below to process or convert your model, and then head over to AI Models for the steps required to add your model to the Lumeo platform.

📘

Custom Model Output Parsing

If your model's output layers don't match the ones built into Lumeo and no conversion guide works for your model, you can choose to write a Custom Model Parser.

Supported Formats and Architectures

| Model Architecture | Model Format | Expected Output Layers | Instructions |
| --- | --- | --- | --- |
| DetectNet | :white-check-mark: ONNX, Caffe, UFF, ETLT | | Natively supported. |
| FasterRCNN | :white-check-mark: ONNX, Caffe, UFF, ETLT | | Natively supported. |
| MobileNet | :white-check-mark: ONNX, Caffe, UFF, ETLT | | Natively supported. |
| MRCNN | :white-check-mark: ONNX, Caffe, UFF, ETLT | generate_detections, mask_head/mask_fcn_logits/BiasAdd | Natively supported. |
| Resnet | :white-check-mark: ONNX, Caffe, UFF, ETLT | conv2d_bbox, conv2d_cov/Sigmoid | Natively supported. |
| SSD | :white-check-mark: ONNX, Caffe, UFF, ETLT<br>:leftwards-arrow-with-hook: Tensorflow | num_detections, detection_scores, detection_classes, detection_boxes | ONNX, Caffe, UFF, ETLT: Natively supported.<br>Tensorflow: Convert to ONNX using the guide below. |
| SSD | :white-check-mark: ONNX (Azure Customvision.ai General (compact) S1 Model) | num_detections, detected_scores, detected_classes, detected_boxes | Natively supported. |
| Yolo, YoloV2, YoloV2-Tiny, YoloV3, YoloV3-Tiny, YoloV4, YoloV4-Tiny | :white-check-mark: Darknet / Yolo Native<br>:white-check-mark: ONNX | | Natively supported. |
| YoloV5, YoloR | :white-check-mark: Darknet / Yolo Native<br>:no-entry-sign: ONNX<br>:no-entry-sign: PyTorch | | Convert to Darknet weights using the guides below. |
| YoloV6, YoloV7 | | | Coming soon. |

Guides

Tensorflow to ONNX

  1. Install the converter:
pip3 install -U tf2onnx
  2. Change the current working directory to the one that contains the saved_model.pb file, for example:
cd /home/user/saved_model_tensorflow
  3. Perform the conversion:
python3 -m tf2onnx.convert --inputs-as-nchw sequential_1_input:0 --saved-model ./ --output ./converted_model.onnx

Alternatives (for TFLite models, and for targeting TensorRT):

python3 -m tf2onnx.convert --inputs-as-nchw sequential_1_input:0 --tflite ./ --output ./converted_model.onnx
python3 -m tf2onnx.convert --inputs-as-nchw sequential_1_input:0 --target tensorrt --tflite ./ --output ./converted_model.onnx

Note: Our inference engine expects the input data in NCHW format (N: batch size, C: channels, H: height, W: width), so you must change Tensorflow's default input format (NHWC) to NCHW using the --inputs-as-nchw argument followed by the input layer name, which in the example above is sequential_1_input:0.

A file named converted_model.onnx will be created in the same folder; that's the file to upload as the weights file for your model in the Lumeo Console.
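The axis reordering described in the note above can be illustrated with a quick sketch (illustration only; tf2onnx performs the real conversion via --inputs-as-nchw):

```shell
# Minimal sketch of the NHWC -> NCHW layout change Lumeo's engine expects
python3 - <<'EOF'
nhwc = (1, 224, 224, 3)     # batch, height, width, channels (Tensorflow default)
n, h, w, c = nhwc
nchw = (n, c, h, w)         # batch, channels, height, width
print(nchw)                 # prints (1, 3, 224, 224)
EOF
```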

YoloV5 Models

To use YoloV5 models in Lumeo, first convert them to the Darknet format using the steps below (reproduced from this GitHub repo).

  1. Download the YOLOv5 repo and install the requirements:
git clone https://github.com/ultralytics/yolov5.git
cd yolov5
pip3 install -r requirements.txt
  2. Grab the conversion script
    Copy the gen_wts_yoloV5.py file from here to the yolov5 folder.

  3. Download the model
    Download the .pt file from the YOLOv5 repo, or use your own custom-trained YoloV5 .pt file.

    NOTE: You can use your own custom model, but it is important to keep the YOLO model reference (yolov5_) in the resulting cfg and weights/wts filenames for the engine to be generated correctly.

  4. Convert the model
    Generate the cfg and wts files (example for YOLOv5s):

python3 gen_wts_yoloV5.py -w yolov5s.pt

NOTE: To change the inference size (default: 640), add one of:

-s SIZE
--size SIZE
-s HEIGHT WIDTH
--size HEIGHT WIDTH
  5. Upload the generated files to Lumeo
    Upload the generated cfg and wts files to a Custom Model in the Lumeo Console.
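As a concrete example, converting a custom-trained checkpoint at a 1280 inference size might look like this (yolov5s_custom.pt is a hypothetical filename; note it keeps the yolov5 reference, as required by the note above):

```shell
# Hypothetical custom checkpoint; the yolov5 prefix is kept in the name
# so the resulting cfg/wts filenames generate the engine correctly.
python3 gen_wts_yoloV5.py -w yolov5s_custom.pt -s 1280
```

The resulting cfg and wts files are then uploaded to a Custom Model in the Lumeo Console as usual.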

YoloR Models

To use YoloR models in Lumeo, first convert them to the Darknet format using the steps below (reproduced from this GitHub repo).

  1. Download the YOLOR repo and install the requirements:
git clone https://github.com/WongKinYiu/yolor.git
cd yolor
pip3 install -r requirements.txt
  2. Grab the conversion script
    Copy the gen_wts_yolor.py file from here to the yolor folder.

  3. Download the model
    Download the .pt file from the YOLOR repo, or use your own custom-trained YOLOR .pt file.

    NOTE: You can use your own custom model, but it is important to keep the YOLO model reference (yolor_) in the resulting cfg and weights/wts filenames for the engine to be generated correctly.

  4. Convert the model
    Generate the cfg and wts files (example for YOLOR-CSP):

python3 gen_wts_yolor.py -w yolor_csp.pt -c cfg/yolor_csp.cfg
  5. Upload the generated files to Lumeo
    Upload the generated cfg and wts files to a Custom Model in the Lumeo Console.
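As with YoloV5, a custom-trained checkpoint just needs the matching cfg passed via -c (yolor_custom.pt is a hypothetical filename that keeps the required yolor reference):

```shell
# Hypothetical custom checkpoint; the yolor prefix is kept in the name
# so the resulting cfg/wts filenames generate the engine correctly.
python3 gen_wts_yolor.py -w yolor_custom.pt -c cfg/yolor_csp.cfg
```

The resulting cfg and wts files are then uploaded to a Custom Model in the Lumeo Console as usual.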