Save Clip

Save a clip and associated metadata when a specific condition is met

Overview

The node saves clips in MP4 format, along with the Pipeline metadata contained in those frames, as a Files API object.
Files created by this node are also listed in the Console under the Deployment detail page when the save location is Lumeo cloud.

Format of output

This node creates two files for every clip it saves:

  1. Media file (MP4 format), named <node_name>-YYYY-MM-DDThhmmss.usZ.mp4, e.g. clip1-2023-01-20T083909.952Z.mp4. The timestamp in the file name is the clip's creation timestamp.

  2. Metadata file (JSON format), named <node_name>-YYYY-MM-DDThhmmss.usZ.mp4.json, e.g. clip1-2023-01-20T083909.952Z.mp4.json. This file contains the metadata for all frames contained in the media file, in JSON format.
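Because both file names embed the same creation timestamp, it can be recovered programmatically. A minimal Python sketch based on the naming convention above (the helper name is ours, not a Lumeo API):

```python
import re
from datetime import datetime

# Clip file names end in "-YYYY-MM-DDThhmmss.usZ.mp4", optionally
# followed by ".json" for the metadata file (see the list above).
CLIP_TS = re.compile(r"-(\d{4}-\d{2}-\d{2}T\d{6}\.\d+Z)\.mp4(?:\.json)?$")

def clip_creation_time(filename: str) -> datetime:
    """Extract the creation timestamp from a clip or metadata file name."""
    match = CLIP_TS.search(filename)
    if not match:
        raise ValueError(f"unrecognized clip file name: {filename}")
    return datetime.strptime(match.group(1), "%Y-%m-%dT%H%M%S.%fZ")
```

For example, `clip_creation_time("clip1-2023-01-20T083909.952Z.mp4")` yields 2023-01-20 08:39:09.952 UTC, and the same call works for the matching .mp4.json metadata file.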

Inputs & Outputs

  • Inputs : 1, Media Format : Encoded Video
  • Outputs : 1, Media Format: Encoded Video
  • Output Metadata : None

Properties

  • max_duration : Maximum duration for the clip, in seconds. Once this duration is reached, a new clip is started.

  • prebuffer_interval : Duration, in seconds, of video to record before the trigger condition is met. Note: see the Pre-buffer Interval section below for how pre-buffering behaves depending on the trigger_mode property.

  • max_size : Maximum size of a clip, in bytes. Once this size is reached, a new clip is started.

  • location : Where to save clips & associated metadata. Options:

      local : Save to local disk, in the folder specified by the path property.

      lumeo_cloud : Upload to Lumeo's cloud service. For this location, Lumeo always creates a Files object for each clip.

      s3 : Upload to a custom S3 bucket.

  • path : Path to save the clips when location == local. If you change the default, ensure that the path is writable by the lumeod user. If location is set to lumeo_cloud, clips are temporarily stored at /var/lib/lumeo/clip_upload until they are uploaded.

  • max_edge_files : Maximum number of files to keep on local disk for this specific node, when location == local. Ignored when location == lumeo_cloud. If this property is not set, Lumeo continues to save until the disk fills up. If it is set, Lumeo saves to local disk in a round-robin manner (the first file is overwritten once max_edge_files files have been written).
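The round-robin eviction that max_edge_files enables can be illustrated with a short Python sketch (our illustration of the described policy, not Lumeo source code):

```python
from collections import deque
from typing import Optional

class RoundRobinStore:
    """Keep at most max_edge_files clip paths; once the limit is
    reached, the oldest entry is evicted (overwritten on disk)."""

    def __init__(self, max_edge_files: int):
        self.max_edge_files = max_edge_files
        self.files = deque()

    def add(self, path: str) -> Optional[str]:
        """Record a newly written clip; return the evicted path, if any."""
        evicted = None
        if len(self.files) >= self.max_edge_files:
            evicted = self.files.popleft()  # the 1st file written goes first
        self.files.append(path)
        return evicted
```

With max_edge_files = 2, the third clip written evicts the first, the fourth evicts the second, and so on.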
  • trigger : Start recording a clip once the trigger condition is met. The trigger expression must be a valid dot-notation expression that operates on Pipeline Metadata and evaluates to true or false. See details here.

    ex. nodes.annotate_line_counter1.lines.line1.total_objects_crossed_delta > 0

  • trigger_mode : Controls how long the clip records once triggered:

      Exact : Record for as long as the trigger condition is met, but only up to the defined maximum duration.

      Fixed Duration : Record for the defined maximum duration once the trigger condition is met.
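To illustrate the dot-notation semantics, here is a minimal Python sketch (not Lumeo's evaluator) that resolves a dotted path against nested metadata and tests the example trigger expression above; the metadata values are hypothetical:

```python
from functools import reduce

def resolve(metadata: dict, dotted_path: str):
    """Walk nested dicts following a dot-notation path."""
    return reduce(lambda obj, key: obj[key], dotted_path.split("."), metadata)

# Hypothetical Pipeline Metadata shaped to match the example trigger.
metadata = {
    "nodes": {
        "annotate_line_counter1": {
            "lines": {"line1": {"total_objects_crossed_delta": 2}}
        }
    }
}

value = resolve(
    metadata,
    "nodes.annotate_line_counter1.lines.line1.total_objects_crossed_delta",
)
triggered = value > 0  # the trigger condition from the example
```

Here `triggered` is true, so a clip would start recording on this frame.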
  • s3_endpoint : AWS S3-compatible endpoint to upload to, excluding the bucket name. Used when location == s3.

    ex. https://storage.googleapis.com, https://fra1.digitaloceanspaces.com, or https://s3.us-east-1.amazonaws.com

  • s3_region : AWS S3-compatible endpoint region. Used when location == s3.

    ex. us or fra1

  • s3_bucket : AWS S3-compatible bucket name. Used when location == s3.

    ex. lumeo-data

  • s3_key_prefix : Folder prefix within the AWS S3-compatible bucket that files will be created in. Omit the trailing /. Used when location == s3.

    ex. production/test

  • s3_access_key_id : S3 API key ID with write access to the specified bucket. Used when location == s3.

    ex. GOOGVMQE4RCI3Z2CYI4HSFHJ

  • s3_secret_access_key : S3 API secret key with write access to the specified bucket. Used when location == s3.
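Putting the S3 properties together, a clip's destination can be pictured as endpoint/bucket/prefix/filename under path-style addressing. A small Python sketch (the object layout is our assumption for illustration, not documented Lumeo behavior):

```python
def clip_object_url(s3_endpoint: str, s3_bucket: str,
                    s3_key_prefix: str, filename: str) -> str:
    """Compose a path-style object URL, assuming clips are stored
    as <s3_key_prefix>/<filename> inside the bucket."""
    key = f"{s3_key_prefix}/{filename}" if s3_key_prefix else filename
    return f"{s3_endpoint.rstrip('/')}/{s3_bucket}/{key}"

url = clip_object_url(
    "https://storage.googleapis.com",
    "lumeo-data",
    "production/test",
    "clip1-2023-01-20T083909.952Z.mp4",
)
```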

Pre-buffer Interval

The diagram below shows how the pre-buffer interval changes the portion of video that ends up in the recorded clip.
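The interaction between prebuffer_interval and trigger_mode can also be sketched in Python (illustrative only; `clip_window` and its arguments are our names, not a Lumeo API, and whether max_duration includes the pre-buffer is an assumption here):

```python
from datetime import datetime, timedelta

def clip_window(trigger_time, trigger_cleared_time, trigger_mode,
                prebuffer_interval, max_duration):
    """Sketch of the recorded time window for a single clip.

    The clip always starts prebuffer_interval seconds before the
    trigger fired; how it ends depends on trigger_mode.
    """
    start = trigger_time - timedelta(seconds=prebuffer_interval)
    if trigger_mode == "fixed_duration":
        # Fixed Duration: record for max_duration once triggered.
        end = trigger_time + timedelta(seconds=max_duration)
    else:
        # Exact: record while the trigger holds, capped at max_duration.
        end = min(trigger_cleared_time,
                  trigger_time + timedelta(seconds=max_duration))
    return start, end
```

With a 2-second pre-buffer, a trigger that holds for 5 seconds, and max_duration of 60 seconds, Exact mode yields a roughly 7-second clip while Fixed Duration yields one of about 62 seconds.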

Metadata

None. This node does not add any metadata properties.

Common Provider Configurations

Examples

| Property | AWS | Digital Ocean | Wasabi | GCP |
| --- | --- | --- | --- | --- |
| s3_endpoint | https://s3.us-east-1.amazonaws.com | https://nyc3.digitaloceanspaces.com | https://s3.wasabisys.com | https://storage.googleapis.com |
| s3_region | us-east-1 | nyc3 | us-east-1 | us-central1 |
| s3_bucket | my-aws-bucket | my-do-bucket | my-wasabi-bucket | my-google-bucket |
| s3_key_prefix | my-folder | my-folder | my-folder | my-folder |
| s3_access_key_id | AKIAIOSFODNN7EXAMPLE | DO00QWERTYUIOPASDFGHJKLZXCVBNM | WASABI00QWERTYUIOPASDFGHJKLZXCVBNM | GOOG00QWERTYUIOPASDFGHJKLZXCVBNM |
| s3_secret_access_key | xxxxxxxx | xxxxxxxx | xxxxxxxx | xxxxxxxx |

AWS Credentials

Create user
aws iam create-user --user-name <username>

Attach a policy granting write access. Save the policy document below as s3-put-policy.json, then attach it with the put-user-policy command that follows.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::<bucket-name>/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::<bucket-name>"
    }        
  ]
}

aws iam put-user-policy --user-name <username> --policy-name S3PutObjectPolicy --policy-document file://s3-put-policy.json

Create a long-lived access key for the user and save the returned credentials
aws iam create-access-key --user-name <username>