

Speaker Diversity Analytics with the AI Face API

How do we increase diversity amongst our event speakers? To improve something, we must first measure it. The Face API lets us gather some of these demographics from past events that we might not otherwise be able to collect, which may give us greater insight into how we can improve these trends.

The Facial Recognition (Face) API is part of the Microsoft AI Cognitive Services suite. The Face service detects human faces in an image and returns the rectangle coordinates of their locations. Optionally, face detection can extract a series of face-related attributes, such as head pose, gender, age, emotion, facial hair, and glasses. The Face API is Azure-based and is a subset of the Vision API functionality. To leverage the Face API, an image can be sent to it programmatically from a number of languages. Along with the image file, the caller can also submit parameters specifying which subset of return values to send back.

Capabilities of the Face API:

Face Detection – Coordinates of human faces within an image, plus optional face attributes.

Face Verification – Evaluates whether two faces belong to the same person.

Find Similar – Finds faces that match or are similar to a given face.

Face Grouping – Divides a set of faces into groups based on similarity.

Person Identification – Identifies a detected face against a database of people.

For this example, we will use the Face Detection functionality and explore the option of extracting face-related attributes, requesting the gender and facialHair attributes. The general assumption is that humans classified as ‘female’ with heavy facial hair may have been misclassified! The expected response for gender is male or female; facialHair returns lengths for three facial hair areas: moustache, beard, and sideburns. Each length is a number between 0 and 1, where 0 means no facial hair in that area and 1 means long or very thick facial hair in that area.
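For reference, the faceAttributes block returned for each detected face, when gender and facialHair are requested, has the shape sketched below in Python; the numeric values are only illustrative.

# Shape of the faceAttributes block when requesting gender and facialHair;
# the values below are illustrative, not actual results.
example_face_attributes = {
    "gender": "male",
    "facialHair": {"moustache": 0.4, "beard": 0.4, "sideburns": 0.1}
}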

As an input, I am using a picture taken at a recent Houston Area Systems Management User Group (HASMUG) – Azure Edition event that included, from left to right: Ryan Durbin, Billy York, myself, and Jim Reid.

Once we log into the Azure portal, we can easily create a Face service by selecting it from the Marketplace, then choosing our Location, Pricing tier, and Resource Group.

 

We can then pass an image to the API. Below is an example piece of Python code that calls the API from Azure Notebooks. The first two steps of the code assign your API subscription key to the Ocp-Apim-Subscription-Key request header and define the parameters that the API expects as input.

The next pieces of the code open the image file and assign the file contents to a variable. Finally, the image file contents are passed to the API via a POST request.
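A minimal sketch of that flow, using the requests library, is shown below; the subscription key, endpoint region, and image filename are placeholders to be replaced with your own values.

import requests

# Assign the API subscription key (from the Azure portal) and build the request headers.
subscription_key = "<your-face-api-subscription-key>"
headers = {
    "Ocp-Apim-Subscription-Key": subscription_key,
    "Content-Type": "application/octet-stream"  # we are sending raw image bytes
}

# Define the parameters expected by the Face Detection endpoint,
# requesting only the gender and facialHair attributes.
params = {
    "returnFaceId": "true",
    "returnFaceLandmarks": "false",
    "returnFaceAttributes": "gender,facialHair"
}

face_api_url = "https://<your-region>.api.cognitive.microsoft.com/face/v1.0/detect"

# Open the image file and assign the file contents to a variable.
with open("hasmug_speakers.jpg", "rb") as image_file:
    image_data = image_file.read()

# Pass the image file contents to the API via a POST request.
response = requests.post(face_api_url, params=params, headers=headers, data=image_data)
response.raise_for_status()
faces = response.json()
print(faces)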

Taking a look at the JSON payload returned by the Facial Recognition API, we can see that three males and one female were identified.
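To summarize the payload, we can loop over the returned faces and print the attributes we requested; the attribute names follow the Face API documentation.

# Summarize the detection results returned above: one line per detected face.
for face in faces:
    attributes = face["faceAttributes"]
    facial_hair = attributes["facialHair"]
    print("gender={}, moustache={}, beard={}, sideburns={}".format(
        attributes["gender"],
        facial_hair["moustache"],
        facial_hair["beard"],
        facial_hair["sideburns"]))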

Thankfully, it should be noted that my beard and moustache scores came back as 0.0. I was worried that they might not! Optionally, we can also experiment with results from the Find Similar feature or the emotion facial attribute. The Face API is another example of how Artificial Intelligence allows us to classify and label data in bulk.

Data Privacy & Security

While the capabilities of the Face API continue to evolve, the caveat, of course, is: how do we utilize this technology without intruding on the expected privacy of event attendees? #AIEthics Please note: permission was obtained from each of the individuals in the picture above.

As with all Cognitive Services resources, developers who use the Face service must be aware of Microsoft's policies on customer data. The Cognitive Services page on the Microsoft Trust Center calls out that Cognitive Services give the programmer control over the storage and deletion of any data stored. Additionally, the Face API documentation further details that when extracting facial attributes, no images are stored; only the extracted face features are stored on the server.

Ready to get started with the Face API? Microsoft Learn has an excellent training lab:

Identify faces and expressions by using the Computer Vision API in Azure Cognitive Services.

References:

Microsoft Cognitive Services Face API Overview

https://docs.microsoft.com/en-us/azure/cognitive-services/face/overview

Microsoft Cognitive Services Face API Documentation

https://docs.microsoft.com/en-us/azure/cognitive-services/face/apireference

About the author

Alicia Moniz is a Microsoft AI MVP. She authors HybridDataLakes.com, a blog focused on cloud data learning resources.

Alicia has been in the Database/BI services industry for 10+ years and is an expert in T-SQL, Data Modeling, Database Administration, Analytics, Data Visualization, and Data Warehousing. Her skill set spans SQL Server 2005 through 2016 and Azure. She has hands-on experience architecting, developing, and optimizing BI solutions in the Microsoft ecosystem.

Alicia holds certifications for both AWS/Azure Architect and AWS/Azure Big Data, and is a Microsoft Certified Solution Expert (MCSE): Data Management and Analytics.

1 comment on article "Speaker Diversity Analytics with the AI Face API"

2/27/2020 9:31 AM
Richard Gooding

Hi, I'm just curious how this compares to Clearview AI and their face recognition technology?


