AI models are used in Pipelines to run inference on video, and the inference output can be visualized on the stream or sent to your application using other Pipeline Nodes. Lumeo provides ready-to-use models and also lets you bring your own models for inferencing.
You can run an AI model (your own, or one from the Analytics Library) using the AI Model Node in a Pipeline. To do so, add the AI Model Node and select the model you'd like to use from the Node's properties.
For supported (i.e., non-Custom) model architectures and capabilities, the AI Model Node adds inference output (detected objects, labels, etc.) to the Pipeline metadata. This output can be displayed on the video stream using the Display Stream Info Node and accessed within the Function Node for processing.
For Custom model architectures and capabilities, Lumeo does not extract any metadata, but adds raw inference output tensors to the Pipeline metadata which you can extract and parse using the Function Node.
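As a rough illustration, the snippet below sketches how detection metadata might be summarized inside a Function Node. The callback signature and metadata schema come from Lumeo's Function Node documentation; the field names used here (`objects`, `label`, `probability`, `rect`) are illustrative placeholders, not the exact schema.

```python
# Hypothetical sketch: summarizing detection metadata inside a Function Node.
# Field names below are placeholders; consult Lumeo's metadata schema for the
# actual structure attached by the AI Model Node.

def summarize_detections(metadata):
    """Count detected objects per label from pipeline metadata."""
    counts = {}
    for obj in metadata.get("objects", []):
        label = obj.get("label", "unknown")
        counts[label] = counts.get(label, 0) + 1
    return counts

# Example metadata shaped like the node's inference output (placeholder fields):
example = {
    "objects": [
        {"label": "car", "probability": 0.91,
         "rect": {"left": 10, "top": 20, "width": 50, "height": 30}},
        {"label": "car", "probability": 0.77,
         "rect": {"left": 80, "top": 25, "width": 45, "height": 28}},
        {"label": "person", "probability": 0.85,
         "rect": {"left": 5, "top": 5, "width": 20, "height": 60}},
    ]
}

print(summarize_detections(example))  # {'car': 2, 'person': 1}
```

A summary like this could then be forwarded to your application or used to drive downstream Pipeline logic.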
The Ready-to-use Models section in the Analytics Library lists a set of curated models that you can use within your Pipelines right away. Lumeo manages these models and keeps your Solution up to date with new versions of these models whenever they are updated.
Lumeo allows you to easily upload and use your own / custom models within Pipelines. Lumeo keeps your Solution up to date with new versions of these models whenever you update them in the Console.
To upload your own model, head to Analytics Library -> Your AI Models section and click Upload a Model.
Lumeo supports the following capabilities, architectures, and formats of models:
Format refers to how models are represented & stored, often tied to frameworks used to generate them. Lumeo supports the following model formats:
- Darknet / Yolo Native
- ETLT (Nvidia TAO / Transfer Learning Toolkit)
Coming soon: TensorFlow and PyTorch native support.
The Model Conversion guide provides instructions on how to convert the most popular formats into a format that works with Lumeo. If you have a custom model you'd like to use with Lumeo, please contact us and we'll work with you to get it running.
Capability defines the nature of inputs, outputs and what the model does. Lumeo supports models with the following capabilities:
- Detection : Given an image, detects specific objects in the image & outputs bounding boxes + detection probabilities.
- Classification : Given an image, outputs a list of categories with probabilities.
- Custom : For models with "Custom" capabilities, Lumeo will run the model but does not attempt to parse the result to extract an object or a category. You will need to write a custom parser using the Function Node to process the result and extract any relevant information.
Architecture (or topology) refers to the structure of the model (number of layers, operations, interconnects, inputs, outputs, output formats, etc.). Lumeo supports the following architectures:
- YoloV2, YoloV2 Tiny, YoloV3, YoloV3 Tiny, YoloV4, YoloV4 Tiny
- Custom : For models with "Custom" architecture, Lumeo will run the model but does not attempt to parse the result to extract an object or a category. You will need to write a custom parser using the Function Node to process the result and extract any relevant information. See Guide here : Custom Model Parser
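For a model with Custom architecture or capability, a custom parser reads the raw output tensors that the AI Model Node attaches to the Pipeline metadata. The sketch below assumes, purely for illustration, a classifier whose output is a flat list of softmax scores; the actual tensor layout and class names depend entirely on your model.

```python
# Hypothetical sketch of a custom model parser for a "Custom" model.
# The tensor layout (flat softmax scores) and class names are assumptions
# for illustration; adapt both to your model's real output.

LABELS = ["background", "helmet", "no_helmet"]  # placeholder class names

def parse_classifier_output(tensor, labels=LABELS, threshold=0.5):
    """Return (label, score) for the top class, or None below threshold."""
    best_idx = max(range(len(tensor)), key=lambda i: tensor[i])
    score = tensor[best_idx]
    if score < threshold:
        return None
    return labels[best_idx], score

raw = [0.05, 0.85, 0.10]  # example raw softmax output from the model
print(parse_classifier_output(raw))  # ('helmet', 0.85)
```

In a real deployment, a function like this would run inside the Function Node and write the parsed result back into the Pipeline metadata so downstream nodes can use it.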
| Model Format | Weights File | Metadata File |
|---|---|---|
| Darknet / Yolo Native | | Yolo Config file. Must be renamed in this format: |
| ETLT (Nvidia Transfer Learning Toolkit / TAO Toolkit) | | None |
Importing your Model
Lumeo can parse your model's outputs into objects or class attributes only if the model's output layers match the format Lumeo expects. See the Model Conversion guide for the expected output layers for each architecture. You can verify whether your model's output layers match using Netron.
If your model's output layers don't match the ones built-in to Lumeo and there is no conversion guide that works, you can choose to write a Custom Model Parser.
This section contains format- or architecture-specific parameters needed to run the model.
This section contains additional post-processing parameters.