Pre-built AI models now download from a cloud cache for faster startup; pipeline branches without a source are ignored; license plate recognition is improved; NVIDIA-related crashes and memory leaks are mitigated; and a high CPU usage bug is fixed (redeploy required).
New
- Pre-built AI model engines for Marketplace models are now downloaded from a cloud cache when available. This skips the "Optimizing AI model" step when starting a deployment on a gateway for the first time, saving 15-20 minutes. Note: pre-built model engines are only available for common hardware configurations, so some hardware may still see longer start times.
- Pipeline nodes that don't have a path to the source node are now ignored, so entire branches of a pipeline can be disabled by disconnecting them. Previously, only Function nodes could be left unconnected.
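Conceptually, this is a reachability check over the pipeline graph: only nodes with a path from the source are deployed. A minimal sketch, assuming a hypothetical edge map (node and function names here are illustrative, not the gateway's actual implementation):

```python
from collections import deque

def reachable_nodes(edges, source):
    """Return the set of node IDs reachable from `source` via BFS.

    `edges` maps each node ID to the IDs of its downstream nodes.
    """
    seen = {source}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# A pipeline where the "annotate" branch is disconnected from the source:
edges = {
    "video_source": ["infer"],
    "infer": ["overlay"],
    "annotate": ["webrtc_out"],  # no path from video_source, so ignored
}
active = reachable_nodes(edges, "video_source")
# active == {"video_source", "infer", "overlay"}; the disconnected
# "annotate" -> "webrtc_out" branch is skipped at deploy time.
```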
- Added support for an improved license plate recognition model that is faster and more robust for side-angle views of license plates.
Fixes
- Deployment runner process crashes that started happening in 1.39.12 due to NVIDIA driver issues are now mitigated, with the side effect of adding some delay to starting and stopping deployments. This is a workaround until we upgrade DeepStream to 7.1 and NVIDIA patches the dGPU drivers.
- Runner processes are now limited to 500 deployment starts to mitigate a memory leak in the NVIDIA object tracker. When the limit is reached: for deployments with a file source, the runner stops accepting new deployments and a new runner is started; for deployments with a live stream, the runner process is terminated and its deployments are restarted.
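The rotation policy described above can be sketched as a simple decision function. This is a hypothetical illustration of the behavior as stated in this note, not the gateway's internal code; the constant, function, and label names are assumptions:

```python
MAX_STARTS = 500  # per-runner deployment start limit (from this release note)

def next_action(starts, source_type):
    """Decide how to handle a deployment start request.

    `starts` is the number of deployment starts the current runner has
    already performed; `source_type` is "file" or "live".
    """
    if starts < MAX_STARTS:
        # Under the limit: the current runner handles the deployment.
        return "start_in_current_runner"
    if source_type == "file":
        # File sources: runner stops accepting work; a fresh runner is spawned.
        return "route_to_new_runner"
    # Live streams: the runner is terminated and its deployments restarted.
    return "terminate_runner_and_restart"
```

For example, `next_action(499, "file")` still uses the current runner, while `next_action(500, "live")` triggers a terminate-and-restart.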
- Gateway version 1.39.12 introduced an issue with higher-than-normal CPU usage for existing deployments. This is resolved in 1.41.33, but requires that you re-deploy (not just restart) those pipelines (Deployment detail view -> Configuration tab -> Edit configuration -> Deploy).
Known Issues
- When multiple deployments are running, logs from one deployment (in the Deployment detail view -> Logs tab) may appear in another deployment's view. A fix is in progress and will be rolled out in a patch release shortly.