Model Conversion
This guide outlines steps to convert models so that they can be imported and used in Lumeo.
Lumeo can parse your model's outputs into objects or class attributes only if the model's output layers match the format Lumeo expects.
This guide lists the expected model output layers for each Model Format and Architecture. You can verify whether your model's output layers match these using Netron.
Follow the instructions in the table below to process/convert your model, then head over to AI Models for the steps required to add your model to the Lumeo platform.
Custom Model Output Parsing
If your model's output layers don't match the ones built-in to Lumeo and there is no conversion guide that works, you can choose to write a Custom Model Parser.
Supported Formats and Architectures
Model Architecture | Model Format | Expected Output Layers | Instructions |
---|---|---|---|
DetectNet | ✅ ONNX, Caffe, UFF, ETLT | | Natively supported. |
FasterRCNN | ✅ ONNX, Caffe, UFF, ETLT | | Natively supported. |
MobileNet | ✅ ONNX, Caffe, UFF, ETLT | | Natively supported. |
MRCNN | ✅ ONNX, Caffe, UFF, ETLT | 1: generate_detections 2: mask_head/mask_fcn_logits/BiasAdd | Natively supported. |
Resnet | ✅ ONNX, Caffe, UFF, ETLT | 1: conv2d_bbox 2: conv2d_cov/Sigmoid | Natively supported. |
SSD | ✅ ONNX, Caffe, UFF, ETLT ↩️ Tensorflow | 1: num_detections 2: detection_scores 3: detection_classes 4: detection_boxes | ONNX, Caffe, UFF, ETLT: Natively supported. Tensorflow: Convert to ONNX using the Guide below. |
SSD Azure CustomVision | ✅ ONNX (Azure Customvision.ai General (compact) S1 Model) | 1: num_detections 2: detected_scores 3: detected_classes 4: detected_boxes | Natively supported. Check the complete procedure in the Azure CustomVision guide |
YOLOv2 YOLOv2 Tiny YOLOv3 YOLOv3 Tiny YOLOv4 YOLOv4 Tiny | ✅ Darknet / YOLO Native | 1: num_detections 2: detection_boxes 3: detection_scores 4: detection_classes | Natively supported. Check the YOLO Native Models (Darknet) guide |
YOLOv5 YOLOv6 YOLOv7 YOLOv8 YOLOR YOLOX DAMO-YOLO PP-YOLOE YOLO-NAS | ✅ ONNX Lumeo YOLO 🚫 ONNX 🚫 PyTorch | 1: boxes 2: scores 3: classes | Convert the weights to ONNX Lumeo YOLO format using this guide |
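If your weights are already in ONNX format, you can also check the output layer names programmatically instead of opening the file in Netron. Below is a minimal sketch using the `onnx` Python package (`pip3 install onnx`); `model.onnx` is a placeholder for your own weights file.

```python
import onnx

# Load the ONNX weights file and print its output layer names so they can
# be compared against the "Expected Output Layers" column in the table above.
model = onnx.load("model.onnx")
for output in model.graph.output:
    print(output.name)
```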
Guides
Tensorflow to ONNX
- GitHub repo: https://github.com/onnx/tensorflow-onnx
- Install tf2onnx:
pip3 install -U tf2onnx
- Convert a TensorFlow SavedModel to ONNX:
Change the current working directory to the one that contains the saved_model.pb file, for example:
cd /home/user/saved_model_tensorflow
Then perform the conversion:
python3 -m tf2onnx.convert --inputs-as-nchw sequential_1_input:0 --saved-model ./ --output ./converted_model.onnx
Alternatives (for TFLite models):
python3 -m tf2onnx.convert --inputs-as-nchw sequential_1_input:0 --tflite ./ --output ./converted_model.onnx
python3 -m tf2onnx.convert --inputs-as-nchw sequential_1_input:0 --target tensorrt --tflite ./ --output ./converted_model.onnx
Note: Our inference engine expects the input data in NCHW format (N: batch size, C: channels, H: height, W: width), so the default TensorFlow input format (NHWC) must be changed to NCHW using the --inputs-as-nchw argument followed by the input layer name, which in the example above is sequential_1_input:0.
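If you are not sure what your model's input layer name is, one option (besides Netron) is to load the SavedModel with TensorFlow and print its serving signature. This is a minimal sketch, assuming the SavedModel sits in the current directory as in the example above; tf2onnx usually expects the tensor name with a ":0" suffix appended.

```python
import tensorflow as tf

# Load the SavedModel from the current directory and print the input
# signature of its default serving function. The input name(s) shown here
# are what you pass to --inputs-as-nchw (typically with a ":0" suffix,
# e.g. "sequential_1_input:0").
loaded = tf.saved_model.load("./")
serving_fn = loaded.signatures["serving_default"]
print(serving_fn.structured_input_signature)
```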
A file named converted_model.onnx will be created in the same folder; this is the file you should upload as the weights file when updating your model in the Lumeo Console.
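To double-check the result before uploading, you can load converted_model.onnx with onnxruntime (`pip3 install onnxruntime`) and confirm that the input shape is NCHW and that the output layer names look as expected. A minimal sketch:

```python
import onnxruntime as ort

# Inspect the converted model's inputs and outputs.
session = ort.InferenceSession("converted_model.onnx")
for inp in session.get_inputs():
    # After --inputs-as-nchw this should be [N, C, H, W], e.g. [1, 3, 224, 224].
    print("input:", inp.name, inp.shape)
for out in session.get_outputs():
    print("output:", out.name, out.shape)
```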
YOLO Native Models (Darknet)
The original YOLO models (YOLOv2, YOLOv2 Tiny, YOLOv3, YOLOv3 Tiny, YOLOv4 & YOLOv4 Tiny) trained in the Darknet format can be imported into your Lumeo Application without performing any additional conversion.
YOLO weights pretrained on the MS COCO dataset (80 classes):
- YOLOv4x-Mish [cfg] [weights]
- YOLOv4-CSP [cfg] [weights]
- YOLOv4 [cfg] [weights]
- YOLOv4-Tiny [cfg] [weights]
- YOLOv3-SPP [cfg] [weights]
- YOLOv3 [cfg] [weights]
- YOLOv3-Tiny-PRN [cfg] [weights]
- YOLOv3-Tiny [cfg] [weights]
- YOLOv3-Lite [cfg] [weights]
- YOLOv3-Nano [cfg] [weights]
- YOLO-Fastest [cfg] [weights]
- YOLO-Fastest-XL [cfg] [weights]
- YOLOv2 [cfg] [weights]
- YOLOv2-Tiny [cfg] [weights]
Here's the procedure for the YOLOv4 Tiny model:
- Download the model's cfg, weights, and class labels files:
wget https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4-tiny.cfg
wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.weights
wget https://raw.githubusercontent.com/AlexeyAB/darknet/master/data/coco.names
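Before uploading the files in the next step, you can optionally sanity-check that the label file matches the cfg: the number of labels in coco.names should equal the classes= value declared in the [yolo] sections of the cfg. A minimal sketch, assuming the three files above were downloaded into the current directory:

```python
# Optional sanity check: compare the label count in coco.names with the
# "classes=" value(s) declared in the cfg's [yolo] sections.
with open("coco.names") as f:
    labels = [line.strip() for line in f if line.strip()]

with open("yolov4-tiny.cfg") as f:
    class_counts = [
        int(line.split("=", 1)[1])
        for line in f
        if line.replace(" ", "").startswith("classes=")
    ]

print(f"{len(labels)} labels; cfg declares classes={class_counts}")
assert all(c == len(labels) for c in class_counts), "cfg and label file disagree"
```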
- Upload the model's files to an AI Model in the Lumeo Console: click Design -> AI Models -> Add Model and fill in the entries as shown in the following images.
After clicking the "Finish & Save model" button, you can start using your newly uploaded model in the AI Model Node.
YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLOR, YOLOX, DAMO-YOLO, PP-YOLOE & YOLO-NAS conversion to ONNX Lumeo YOLO format
The procedure for converting these formats to ONNX Lumeo YOLO weights can be found in Lumeo's models-conversion repository.