Face Recognition Azure

Lookup detected faces using Azure Face Recognition API

Overview

The Face Recognition (Azure) node uses the Azure Face API to recognize faces (by matching against your private repository) within specified regions of a live video stream, and adds face metadata to the frame metadata.

To use this node, register faces directly with the Azure Face API and configure this node to perform lookups against those registered faces. You can also attach metadata to those faces via the Azure Face API; this node retrieves that metadata and uses it to label or color the face on the video.

This node makes it easy to build common use cases such as access control, customer experience improvements, etc.

This node requires an AI Model Node and a Track Objects Node before it in order to function properly.

Inputs & Outputs

  • Inputs: 1, Media Format: Raw Video
  • Outputs: 1, Media Format: Raw Video
  • Output Metadata: Face Information

Properties

| Property | Value |
| --- | --- |
| `azure_endpoint` | Azure endpoint for Cognitive Services. Ex: `https://lumeo.cognitiveservices.azure.com/` |
| `azure_api_key` | Azure API key for your endpoint. Ex: `asoir2983271sdhaoa` |
| `person_group_id` | ID of the Azure Face API Large Person Group to look up faces from. Ex: `fastpass-users` |
| `display_roi` | Boolean. If `true`, ROI info is drawn on the video. Ex: `true` / `false` |
| `display_faceinfo` | Boolean. If `true`, the face and any related attributes are drawn on the video. See below for the face attributes format. |
| `rois` | Semicolon-separated list of areas in the video within which face recognition is performed, each area identified by a set of normalized coordinates. If none is specified, face recognition is performed on the entire frame. Format: `x1,y1,x2,y2,x3,y3,x1,y1` Ex: `0,0,0.25,0.5,0.75,0.5,0,0` |
| `roi_labels` | Comma-separated list of labels for each ROI in the `rois` list above. Format: `label1, label2` Ex: `door, window` |
| `max_lookups_per_face` | Maximum number of lookups for each new face before marking it unrecognized. |
| `min_confidence` | Face matches below this confidence threshold are ignored. |
| `min_face_size_pixels` | Minimum width and height, in pixels, of a face that the node attempts to look up. |
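As an illustration of the `rois` format, the sketch below (plain Python, not part of the node itself) parses a semicolon-separated ROI string into lists of normalized (x, y) points; the last point repeats the first to close each polygon:

```python
def parse_rois(rois):
    """Parse a semicolon-separated ROI string into polygons.

    Each polygon is a list of normalized (x, y) tuples; the closing
    point duplicates the first, matching the documented format.
    """
    polygons = []
    for roi in rois.split(";"):
        values = [float(v) for v in roi.split(",")]
        # Pair up alternating x and y values.
        polygons.append(list(zip(values[0::2], values[1::2])))
    return polygons

print(parse_rois("0,0,0.25,0.5,0.75,0.5,0,0"))
# [[(0.0, 0.0), (0.25, 0.5), (0.75, 0.5), (0.0, 0.0)]]
```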

Metadata

| Metadata Property | Description |
| --- | --- |
| `nodes.<node_id>` | Describes the ROIs monitored by this node, and their properties. Format: as defined in the table below. |

<node_id> for Face Recognition Nodes is of the form face_rec_azureX (ex. face_rec_azure1)

Example

"nodes": {
    "face_rec_azure1": {
        "type": "face_rec_azure",
        "rois": {
            "<roi_name>": {
                "recognized_faces": [{
                    "id": 1230171012121,
                    "person_id": "1239-12381-23110-1213",
                    "confidence": 0.4,
                    "user_data" : {},
                    "label": "person name"
                  }],
                  "unrecognized_face_ids": [120398123,12397231,21312],
                  "recognized_face_count": 3,
                  "recognized_face_delta": 2,
                  "unrecognized_face_count": 3,          
                  "unrecognized_face_delta": 1          
            }
        }
    }
}

Format

| Key | Type | Description |
| --- | --- | --- |
| `recognized_faces` | Array of Dictionaries | Attributes for recognized faces |
| `recognized_faces.id` | String | Tracking ID for the face |
| `recognized_faces.person_id` | String | Azure Person ID associated with this face. You can use this to match against repeat occurrences of this face. |
| `recognized_faces.confidence` | Float | Confidence of the match |
| `recognized_faces.user_data` | Dictionary | Any `user_data` stored with the Azure Face API |
| `recognized_faces.label` | String | Face name as stored in the Azure Face API |
| `unrecognized_face_ids` | Array of Strings | Tracking IDs for unrecognized faces |
| `recognized_face_count` | Integer | Number of recognized faces in this frame |
| `recognized_face_delta` | Integer | Number of newly recognized faces in this frame |
| `unrecognized_face_count` | Integer | Number of unrecognized faces in this frame |
| `unrecognized_face_delta` | Integer | Number of newly unrecognized faces in this frame |
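To show how this metadata might be consumed downstream (for example, in your own post-processing code; the function name here is illustrative, not part of any Lumeo API), the sketch below collects recognized face labels per ROI from a `nodes` dictionary shaped like the example above:

```python
def recognized_labels_by_roi(nodes, node_id):
    """Return {roi_name: [labels of recognized faces]} for one node's metadata."""
    rois = nodes.get(node_id, {}).get("rois", {})
    return {
        roi_name: [face["label"] for face in roi.get("recognized_faces", [])]
        for roi_name, roi in rois.items()
    }

# Minimal metadata sample, shaped like the example above.
nodes = {
    "face_rec_azure1": {
        "type": "face_rec_azure",
        "rois": {
            "door": {
                "recognized_faces": [{
                    "id": "1230171012121",
                    "person_id": "1239-12381-23110-1213",
                    "confidence": 0.4,
                    "user_data": {},
                    "label": "person name"
                }],
                "unrecognized_face_ids": ["120398123"]
            }
        }
    }
}

print(recognized_labels_by_roi(nodes, "face_rec_azure1"))
# {'door': ['person name']}
```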

Objects metadata augmentation

The following information is added to the detected object's "attributes" array:

"class_id" field"label" field"probability" field
10200The face label - usually the person name (string)The face recognition confidence
10201The face ID (string)The face recognition confidence
"objects": [{
    "id": 5750484150146564100,
    "label": "face",
    "class_id": 0,
    "probability": 0.98,
    "rect": {
        "width": 47,
        "top": 201,
        "left": 656,
        "height": 25.
    },
    "attributes": [{
        "label": "person name",
        "class_id": 10200,
        "probability": 1.0,
    },
    {
        "label": "1230171012121",
        "class_id": 10201,
        "probability": 1.0,
    }]
}]
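The augmented attributes can be picked out by class ID. The helper below is a plain-Python sketch (the function name is hypothetical) that extracts the face label and face ID from one object's `attributes` array using the 10200/10201 class IDs documented above:

```python
FACE_LABEL_CLASS_ID = 10200  # attribute carrying the face label (person name)
FACE_ID_CLASS_ID = 10201     # attribute carrying the face ID

def face_attributes(obj):
    """Return (face_label, face_id) from a detected object's attributes.

    Either element is None when the corresponding attribute is absent.
    """
    label = face_id = None
    for attr in obj.get("attributes", []):
        if attr["class_id"] == FACE_LABEL_CLASS_ID:
            label = attr["label"]
        elif attr["class_id"] == FACE_ID_CLASS_ID:
            face_id = attr["label"]
    return label, face_id

# Object shaped like the metadata example above.
obj = {
    "id": 5750484150146564100,
    "label": "face",
    "attributes": [
        {"label": "person name", "class_id": 10200, "probability": 1.0},
        {"label": "1230171012121", "class_id": 10201, "probability": 1.0},
    ],
}

print(face_attributes(obj))
# ('person name', '1230171012121')
```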

Azure Face API

Lumeo looks up faces in an Azure LargePersonGroup trained with these attributes:

  • detectionModel : detection_03
  • recognitionModel : recognition_04

Create an Azure Face API Endpoint

Follow the steps outlined under Prerequisites in the Azure Face API Quickstart to obtain your Face API Endpoint and API Key to configure within this node.

Create and Train a Person Group

You will register your faces directly with the Azure API, and then Lumeo will look up unknown faces against the specified group in real time.

Helpful resources:

📘

Azure Face API UI

For a quick and easy way to get started, we've created a sample app using the Azure Face API that lets you create Person Groups, Persons and register Face images using a web app. Check it out here.

The code samples below show you how to train an Azure Large Person Group to work with Lumeo. Start by installing the client library:

pip install --upgrade azure-cognitiveservices-vision-face

import sys, time
from urllib.request import urlopen
from azure.cognitiveservices.vision.face import FaceClient
from azure.cognitiveservices.vision.face.models import TrainingStatusType
from msrest.authentication import CognitiveServicesCredentials

ENDPOINT = "<YOUR_AZURE_FACEAPI_ENDPOINT>"
KEY = "<YOUR_AZURE_FACEAPI_KEY>"

# Name your person group. Must be lowercase alphanumeric, optionally with dashes or underscores.
# Use a unique ID (e.g. a uuid) to avoid name collisions.
PERSON_GROUP_ID = "lumeo-demo"

# Create a client
face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))

# Create and train a person group and add a person group person to it.
def build_person_group(client, person_group_id, pgp_urls, create_group=True):
    print('Create and build a person group...')
    # Create empty Person Group. Person Group ID must be lower case, alphanumeric, and/or with '-', '_'.
    print('Person group ID:', person_group_id)
    if create_group:
        client.large_person_group.create(large_person_group_id=person_group_id, name=person_group_id, recognition_model="recognition_04")

    for image_url in pgp_urls:
        try:
            pgp_name = image_url.split("/")[-1].split(".")[0]
            new_person = client.large_person_group_person.create(person_group_id, pgp_name)
            client.large_person_group_person.add_face_from_stream(person_group_id, new_person.person_id, urlopen(image_url), detection_model="detection_03")
        except Exception as e:
            print("Error downloading face image from url ({}) : {}".format(image_url,str(e)))

    # Train the person group after Person objects with face images have been added to it.
    client.large_person_group.train(person_group_id)

    # Wait for training to finish.
    while True:
        training_status = client.large_person_group.get_training_status(person_group_id)
        print("Training status: {}.".format(training_status.status))
        if training_status.status == TrainingStatusType.succeeded:
            break
        elif training_status.status == TrainingStatusType.failed:
            client.large_person_group.delete(large_person_group_id=person_group_id)
            sys.exit('Training the person group has failed.')
        time.sleep(5)

# Train a person group
person_face_image_urls = ['https://image1.jpg','https://image2.jpg']
build_person_group(face_client, PERSON_GROUP_ID, person_face_image_urls, True)

Set Person User Data

Azure Face API allows you to associate additional attributes with each registered face/person (aka `user_data`). Lumeo can extract and display these attributes for matched faces. This node uses the following `user_data` attributes, if present:

  • color: The node will draw the recognized face's bounding box in this color.
  • aux_label: The node will display this label along with the recognized face on the video.

These (and any other) `user_data` properties are added to the `recognized_faces.user_data` field within the Lumeo metadata.

{
  "color":"00ffff",
  "aux_label":"FastPass Valid"
}
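For instance, a downstream consumer could turn the `color` value into an RGB tuple. The helper below is illustrative only and assumes a six-digit RRGGBB hex string as shown above:

```python
def hex_to_rgb(color):
    """Convert an RRGGBB hex string (e.g. user_data['color']) to an (r, g, b) tuple."""
    return tuple(int(color[i:i + 2], 16) for i in (0, 2, 4))

user_data = {"color": "00ffff", "aux_label": "FastPass Valid"}
print(hex_to_rgb(user_data["color"]))
# (0, 255, 255)
```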

Here's a code snippet to set this Person user_data using the Azure API:

from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

ENDPOINT = "<YOUR_AZURE_FACEAPI_ENDPOINT>"
KEY = "<YOUR_AZURE_FACEAPI_KEY>"

# Assumes that a Large Person Group with this Person Group ID was already created.
PERSON_GROUP_ID = "lumeo-demo"

# Assumes that there is a person in this group with this person id.
PERSON_ID = "421cac29-2b6c-418a-af6c-79181daa92d4"

# Create a client
face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(KEY))

# Update the person's attributes with the Lumeo-format user_data
face_client.large_person_group_person.update(
    PERSON_GROUP_ID,
    PERSON_ID,
    user_data='{"level":"fastpass","color":"00ffff","aux_label":"FastPass Valid"}')