Save Snapshot
Save a snapshot and associated metadata when a specific condition is met
Overview
The node saves a snapshot in JPEG format, along with the Pipeline metadata contained in that frame, as a files API object.
Files created by this node are also listed in the Console, on the Deployment detail page.
Format of output
This node creates two files for every snapshot it saves:
- Media file (JPEG format), named <node_name>-YYYY-MM-DDThhmmss.usZ.jpg. Ex. snapshot1-2023-01-20T083909.952Z.jpg. The timestamp in the file name is the creation timestamp. This file contains the media for the snapshot in JPEG format.
- Metadata file (JSON format), named <node_name>-YYYY-MM-DDThhmmss.usZ.jpg.json. Ex. snapshot1-2023-01-20T083909.952Z.jpg.json. This file contains the metadata for the frame contained in the media file, in JSON format.
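To illustrate the naming convention, here is a minimal sketch (not part of Lumeo) that pairs each media file with its companion metadata file and parses the creation timestamp from the file name. It assumes snapshots were saved to a local directory (location == local); the directory path used below is a placeholder.

```python
import json
import re
from datetime import datetime
from pathlib import Path

# Hypothetical local snapshot directory; adjust to your node's `path` property.
SNAPSHOT_DIR = Path("/var/lib/lumeo/snapshots")

# Matches <node_name>-YYYY-MM-DDThhmmss.usZ.jpg
NAME_RE = re.compile(r"^(?P<node>.+)-(?P<ts>\d{4}-\d{2}-\d{2}T\d{6}\.\d+Z)\.jpg$")

def load_snapshots(directory: Path):
    """Yield (node_name, created_at, jpg_path, metadata) for each saved snapshot."""
    for jpg in sorted(directory.glob("*.jpg")):
        match = NAME_RE.match(jpg.name)
        if not match:
            continue
        created_at = datetime.strptime(match["ts"], "%Y-%m-%dT%H%M%S.%fZ")
        meta_path = jpg.with_name(jpg.name + ".json")  # companion metadata file
        metadata = json.loads(meta_path.read_text()) if meta_path.exists() else None
        yield match["node"], created_at, jpg, metadata

if __name__ == "__main__":
    for node, created_at, path, metadata in load_snapshots(SNAPSHOT_DIR):
        print(node, created_at.isoformat(), path.name, "has metadata" if metadata else "no metadata")
```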
Inputs & Outputs
- Inputs: 1, Media Format: Raw Video
- Outputs: 1, Media Format: Raw Video
- Output Metadata: None
Properties
Property | Value |
---|---|
location | Where to save snapshots & associated metadata. Options: local: Save to local disk, in the folder specified under the path property. lumeo_cloud: Upload to Lumeo's cloud service. For this location, Lumeo will always create a files object for each snapshot. s3: Upload to a custom S3 bucket. |
path | Path to save the snapshots, if location == local. If you change the defaults, ensure that the path is writable by the lumeod user. If location is set to lumeo_cloud, snapshots are temporarily stored at /var/lib/lumeo/clip_upload until they are uploaded. |
max_edge_files | Maximum number of files to keep on local disk, when location == local, for this specific node. Ignored when location == lumeo_cloud. If this property is not set, Lumeo will keep saving until the disk fills up. If it is set, Lumeo will save to local disk in a round-robin manner (overwriting the first file once max_edge_files files have been written). |
trigger | Save a snapshot when this condition is met, subject to the Trigger mode property below. The trigger expression must be a valid Dot-notation expression that operates on Pipeline Metadata and evaluates to True or False. See details here. ex. nodes.annotate_line_counter1.lines.line1.total_objects_crossed_delta > 0 |
trigger_mode | Single: Take a snapshot once each time the trigger condition is met (goes from False to True). Exact: Take a snapshot of every frame for as long as the condition is met. |
s3_endpoint | AWS S3-compatible endpoint to upload to, excluding the bucket name. Used when location == s3. ex. https://storage.googleapis.com or https://fra1.digitaloceanspaces.com or https://s3.us-east-1.amazonaws.com |
s3_region | AWS S3-compatible endpoint Region. Used when location == s3 . ex. us or fra1 |
s3_bucket | AWS S3-compatible bucket name. Used when location == s3 . ex. lumeo-data |
s3_key_prefix | Folder prefix within the AWS S3-compatible bucket that files will be created in. Omit the trailing /. Used when location == s3. ex. production/test |
s3_access_key_id | S3 API Key ID with write access to the specified bucket. Used when location == s3 . ex. GOOGVMQE4RCI3Z2CYI4HSFHJ |
s3_secret_access_key | S3 API Key with write access to the specified bucket. |
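To make the properties concrete, here is a hedged sketch of a property set for a node that uploads to a custom S3 bucket. The property names come from the table above; the values reuse the examples from this page and are placeholders, and the exact casing of option values (e.g. Single vs. single) as well as the surrounding pipeline-definition format should be confirmed against your deployment.

```python
# Illustrative property values for a Save Snapshot node uploading to S3.
# Property names are taken from the table above; all values are placeholders.
save_snapshot_properties = {
    "location": "s3",
    "trigger": "nodes.annotate_line_counter1.lines.line1.total_objects_crossed_delta > 0",
    "trigger_mode": "Single",            # "Single" or "Exact" per the trigger_mode property
    "s3_endpoint": "https://s3.us-east-1.amazonaws.com",
    "s3_region": "us-east-1",
    "s3_bucket": "lumeo-data",
    "s3_key_prefix": "production/test",
    "s3_access_key_id": "AKIAIOSFODNN7EXAMPLE",
    "s3_secret_access_key": "xxxxx",
}
```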
Metadata
Metadata Property | Description |
---|---|
None | None |
Common Provider Configurations
Examples
Property | AWS | Digital Ocean | Wasabi | GCP |
---|---|---|---|---|
s3_endpoint | https://s3.us-east-1.amazonaws.com | https://nyc3.digitaloceanspaces.com | https://s3.wasabisys.com | https://storage.googleapis.com |
s3_region | us-east-1 | nyc3 | us-east-1 | us-central1 |
s3_bucket | my-aws-bucket | my-do-bucket | my-wasabi-bucket | my-google-bucket |
s3_key_prefix | my-folder | my-folder | my-folder | my-folder |
s3_access_key_id | AKIAIOSFODNN7EXAMPLE | DO00QWERTYUIOPASDFGHJKLZXCVBNM | WASABI00QWERTYUIOPASDFGHJKLZXCVBNM | GOOG00QWERTYUIOPASDFGHJKLZXCVBNM |
s3_secret_access_key | xxxxx | xxxxx | xxxxxx | xxxxxxx |
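Before pointing the node at a custom bucket, it can help to verify that the endpoint, region, and credentials actually allow writes. Below is a minimal sketch using boto3 (an assumption; boto3 is not part of Lumeo) that uploads a small test object with the same parameters the node would use. All values and the test object key are placeholders.

```python
import boto3

# Placeholder values; substitute the same settings you plan to use
# for the Save Snapshot node's s3_* properties.
s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.us-east-1.amazonaws.com",  # s3_endpoint
    region_name="us-east-1",                            # s3_region
    aws_access_key_id="AKIAIOSFODNN7EXAMPLE",           # s3_access_key_id
    aws_secret_access_key="xxxxx",                      # s3_secret_access_key
)

# A successful put_object call confirms the credentials can write to the bucket,
# exercising the s3:PutObject permission granted by the policy shown below.
s3.put_object(
    Bucket="my-aws-bucket",                             # s3_bucket
    Key="my-folder/lumeo-write-test.txt",               # s3_key_prefix + file name
    Body=b"write test",
)
print("Write succeeded")
```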
AWS Credentials
Create user
```shell
aws iam create-user --user-name <username>
```
Attach policy
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::<bucket-name>/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::<bucket-name>"
    }
  ]
}
```
Save the policy document above as s3-put-policy.json, then attach it to the user:
```shell
aws iam put-user-policy --user-name <username> --policy-name S3PutObjectPolicy --policy-document file://s3-put-policy.json
```
Create a long-lived access key for the user and note the credentials. The AccessKeyId and SecretAccessKey returned by this command are the values to use for s3_access_key_id and s3_secret_access_key.
```shell
aws iam create-access-key --user-name <username>
```