Save Snapshot

Save a snapshot and associated metadata when a specific condition is met

Overview

The node saves a snapshot in JPEG format, along with the Pipeline metadata associated with that frame, as a Files API object.

Files created by this node are also listed in the Console, on the Deployment detail page.

Format of output

This node creates two files for every snapshot it saves:

  1. Media file (JPEG format), named <node_name>-YYYY-MM-DDThhmmss.usZ.jpg, e.g. snapshot1-2023-01-20T083909.952Z.jpg. The timestamp in the file name is the creation timestamp. This file contains the snapshot media in JPEG format.

  2. Metadata file (JSON format), named <node_name>-YYYY-MM-DDThhmmss.usZ.jpg.json, e.g. snapshot1-2023-01-20T083909.952Z.jpg.json (the media file name with a .json suffix). This file contains the metadata for the frame contained in the media file, in JSON format.
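Because the media and metadata files for a snapshot share the same timestamped stem, file names can be parsed mechanically. A minimal illustrative helper (not part of any Lumeo SDK), assuming the name format described above:

```python
import re
from datetime import datetime, timezone

# Matches <node_name>-YYYY-MM-DDThhmmss.usZ.jpg (metadata files add .json)
SNAPSHOT_RE = re.compile(
    r"^(?P<node>.+)-(?P<ts>\d{4}-\d{2}-\d{2}T\d{6}\.\d{3}Z)\.jpg(\.json)?$"
)

def parse_snapshot_name(filename: str):
    """Return (node_name, creation_time) parsed from a snapshot file name."""
    m = SNAPSHOT_RE.match(filename)
    if m is None:
        raise ValueError(f"not a snapshot file name: {filename!r}")
    # hhmmss has no separators, so %H%M%S consumes fixed-width digit pairs
    ts = datetime.strptime(m["ts"], "%Y-%m-%dT%H%M%S.%fZ")
    return m["node"], ts.replace(tzinfo=timezone.utc)
```

For example, parse_snapshot_name("snapshot1-2023-01-20T083909.952Z.jpg") yields the node name snapshot1 and the UTC creation time 2023-01-20 08:39:09.952.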

Inputs & Outputs

  • Inputs: 1, Media Format: Raw Video
  • Outputs: 1, Media Format: Raw Video
  • Output Metadata: None

Properties

location : Where to save snapshots & associated metadata. Options:

  • local : Save to local disk, in the folder specified under the path property.

  • lumeo_cloud : Upload to Lumeo's cloud service. For this location, Lumeo will always create a Files object for each snapshot.

  • s3 : Upload to a custom S3 bucket.

path : Path to save the snapshots, if location == local. If you change the default, ensure that the path is writable by the lumeod user.

  If location is set to lumeo_cloud, snapshots are temporarily stored at /var/lib/lumeo/clip_upload until they are uploaded.

max_edge_files : Maximum number of files to keep on local disk, when location == local, for this specific node. Ignored when location == lumeo_cloud.

  If this property is not set, Lumeo will continue to save until the disk fills up. If it is set, Lumeo will save to local disk in a round-robin manner (overwriting the first file once max_edge_files files have been written).

trigger : Save a snapshot when this condition is met, subject to the trigger_mode property below.

  The trigger expression must be a valid dot-notation expression that operates on Pipeline Metadata and evaluates to True or False. See details here.

  e.g. nodes.annotate_line_counter1.lines.line1.total_objects_crossed_delta > 0

trigger_mode : Controls how often snapshots are taken when the trigger condition is met. Options:

  • Single : Take a snapshot once every time the trigger condition is met (goes from False to True).

  • Exact : Take a snapshot of every frame for as long as the condition is met.

s3_endpoint : AWS S3-compatible endpoint to upload to, excluding the bucket name. Used when location == s3.

  e.g. https://storage.googleapis.com or https://fra1.digitaloceanspaces.com or https://s3.us-east-1.amazonaws.com

s3_region : AWS S3-compatible endpoint region. Used when location == s3.

  e.g. us or fra1

s3_bucket : AWS S3-compatible bucket name. Used when location == s3.

  e.g. lumeo-data

s3_key_prefix : Folder prefix within the AWS S3-compatible bucket that files will be created in. Skip trailing /s. Used when location == s3.

  e.g. production/test

s3_access_key_id : S3 API Key ID with write access to the specified bucket. Used when location == s3.

  e.g. GOOGVMQE4RCI3Z2CYI4HSFHJ

s3_secret_access_key : S3 API Key with write access to the specified bucket.
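The two trigger modes differ only in edge handling: Single fires once on the False-to-True transition, while Exact fires on every frame where the expression holds. A minimal sketch of Single-mode edge detection (illustrative only, not Lumeo's implementation; the class and method names are invented):

```python
class SingleModeTrigger:
    """Fires once per False -> True transition of the trigger condition.
    Illustrative sketch of 'Single' trigger_mode behavior."""

    def __init__(self):
        self._prev = False  # condition value on the previous frame

    def should_snapshot(self, condition: bool) -> bool:
        fire = condition and not self._prev  # rising edge only
        self._prev = condition
        return fire
```

Fed the per-frame condition values [False, True, True, False, True], this fires on the second and fifth frames only; Exact mode would also capture the third frame, where the condition is still True.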

Metadata

None. This node does not add any metadata to the frame.

Common Provider Configurations

Examples

| Property | AWS | Digital Ocean | Wasabi | GCP |
|---|---|---|---|---|
| s3_endpoint | https://s3.us-east-1.amazonaws.com | https://nyc3.digitaloceanspaces.com | https://s3.wasabisys.com | https://storage.googleapis.com |
| s3_region | us-east-1 | nyc3 | us-east-1 | us-central1 |
| s3_bucket | my-aws-bucket | my-do-bucket | my-wasabi-bucket | my-google-bucket |
| s3_key_prefix | my-folder | my-folder | my-folder | my-folder |
| s3_access_key_id | AKIAIOSFODNN7EXAMPLE | DO00QWERTYUIOPASDFGHJKLZXCVBNM | WASABI00QWERTYUIOPASDFGHJKLZXCVBNM | GOOG00QWERTYUIOPASDFGHJKLZXCVBNM |
| s3_secret_access_key | xxxxx | xxxxx | xxxxx | xxxxx |
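Whichever provider is used, the resulting object key is the s3_key_prefix joined to the generated file name. A hypothetical helper showing where a snapshot would land, using path-style addressing (virtual-hosted addressing differs per provider, so treat this as a sketch):

```python
def snapshot_object_url(endpoint: str, bucket: str, key_prefix: str, filename: str) -> str:
    """Build a path-style object URL from the node's S3 properties.
    Illustrative only; actual addressing style varies by provider."""
    key = f"{key_prefix}/{filename}" if key_prefix else filename
    return f"{endpoint.rstrip('/')}/{bucket}/{key}"
```

With the AWS column above, snapshot_object_url("https://s3.us-east-1.amazonaws.com", "my-aws-bucket", "my-folder", "snapshot1-2023-01-20T083909.952Z.jpg") gives https://s3.us-east-1.amazonaws.com/my-aws-bucket/my-folder/snapshot1-2023-01-20T083909.952Z.jpg.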

AWS Credentials

Create user
aws iam create-user --user-name <username>

Attach policy

Save the following policy document as s3-put-policy.json, replacing <bucket-name> with your bucket, then attach it:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::<bucket-name>/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::<bucket-name>"
    }        
  ]
}

aws iam put-user-policy --user-name <username> --policy-name S3PutObjectPolicy --policy-document file://s3-put-policy.json

Create a long-lived access key for the user and note the credentials
aws iam create-access-key --user-name <username>