
AI Age Estimation in the Browser using face-api and Tensorflow.js

24 Aug 2020 · CPOL · 2 min read
In this article, we’ll predict someone’s gender and age in the browser. We’ll change the dimensions of the video tag, import another model in our index.js file, add drawing to our canvas, and get our predictions.

In the previous article, we learned how to classify a person’s emotions in the browser using face-api.js and Tensorflow.js.

If you haven’t read that article yet, I recommend you do so first as we’ll be proceeding on the assumption that you have some familiarity with face-api.js, and we’ll be building on the code we created for emotion detection.

Gender and Age Detection

We’ve seen how easy it is to predict human facial expressions using face-api.js. But what else can we do with it? Let’s learn to predict someone’s gender and age.

We’re going to make a few changes to our previous code. In the HTML file, we change the dimensions of the video tag, since we’ll need some extra space for the drawing to be visible:

HTML
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>
    <script type="text/javascript" src="face-api.js"></script>
  </head>
  <body>
    <h1>Emotions, Age & Gender Detection using face-api.js</h1>
    <video autoplay muted id="video" width="400" height="400" style=" margin: auto;"></video>
    <div id="prediction">Loading</div>
  <script type="text/javascript" defer src="index.js"></script>
  </body>
</html>

We also need to import another model in our index.js file:

JavaScript
faceapi.nets.ageGenderNet.loadFromUri('/models')

Add age and gender to the predictions as well:

JavaScript
const detections = await faceapi
      .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
      .withFaceLandmarks()
      .withFaceExpressions()
      .withAgeAndGender();
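With `withAgeAndGender()` in the chain, each element of `detections` also carries `age`, `gender`, and `genderProbability` fields. A small standalone helper (hypothetical, not part of face-api.js) shows how those fields can be turned into a display label:

```javascript
// Turn one detection result into a display label.
// Assumed shape (per face-api.js's documented result fields):
// age is a float estimate, gender is "male" or "female",
// genderProbability is in [0, 1].
function summarizeDetection({ age, gender, genderProbability }) {
  return `${Math.round(age)} years, ${gender} (${genderProbability.toFixed(2)})`;
}

// Mocked result object, since real detections need a video frame:
console.log(summarizeDetection({ age: 31.6, gender: 'female', genderProbability: 0.93 }));
// → "32 years, female (0.93)"
```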

Face-api.js has some drawing capabilities too. Let’s add drawing to our canvas:

JavaScript
const resizedDetections = faceapi.resizeResults(detections, displaySize);
 
faceapi.draw.drawDetections(canvas, resizedDetections);
faceapi.draw.drawFaceLandmarks(canvas, resizedDetections);
faceapi.draw.drawFaceExpressions(canvas, resizedDetections);

Now we’re in a position to get our predictions:

JavaScript
resizedDetections.forEach(result => {
      const { age, gender, genderProbability } = result;
      new faceapi.draw.DrawTextField(
        [
          `${faceapi.round(age, 0)} years`,
          `${gender} (${faceapi.round(genderProbability)})`
        ],
        result.detection.box.bottomRight
      ).draw(canvas);
    });
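`faceapi.round` above is a small utility shipped with face-api.js; by default it rounds to two decimal places, which is why the probability in the label shows two decimals. A plain-JS sketch of the same idea (a hypothetical stand-in, not the library's actual implementation), useful if you want the formatting without the library:

```javascript
// Round num to prec decimal places (default 2), mirroring how the
// label text above is formatted.
function round(num, prec = 2) {
  const f = Math.pow(10, prec);
  return Math.round(num * f) / f;
}

console.log(round(0.937));    // → 0.94
console.log(round(31.6, 0));  // → 32
```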

Here’s the final look of the index.js file:

JavaScript
const video = document.getElementById('video');
 
Promise.all([
  faceapi.nets.tinyFaceDetector.loadFromUri('/models'),
  faceapi.nets.faceLandmark68Net.loadFromUri('/models'),
  faceapi.nets.faceRecognitionNet.loadFromUri('/models'),
  faceapi.nets.faceExpressionNet.loadFromUri('/models'),
  faceapi.nets.ageGenderNet.loadFromUri('/models')
]).then(startVideo);
 
function startVideo() {
  // The legacy navigator.getUserMedia API is deprecated; use the
  // promise-based navigator.mediaDevices.getUserMedia instead.
  if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
    navigator.mediaDevices.getUserMedia({ video: true })
      .then(stream => {
        video.srcObject = stream;
        video.onloadedmetadata = () => video.play();
      })
      .catch(err => console.log(err.name));
  } else {
    document.body.innerText = "getUserMedia not supported";
    console.log("getUserMedia not supported");
  }
}
 
video.addEventListener('play', () => {
  const canvas = faceapi.createCanvasFromMedia(video);
  document.body.append(canvas);
  const displaySize = { width: video.width, height: video.height };
  faceapi.matchDimensions(canvas, displaySize);
  setInterval(async () => {
    const predictions = await faceapi
      .detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
      .withFaceLandmarks()
      .withFaceExpressions()
      .withAgeAndGender();
 
    const resizedDetections = faceapi.resizeResults(predictions, displaySize);
    canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height);
    faceapi.draw.drawDetections(canvas, resizedDetections);
    faceapi.draw.drawFaceLandmarks(canvas, resizedDetections);
    faceapi.draw.drawFaceExpressions(canvas, resizedDetections);
    resizedDetections.forEach(result => {
      const { age, gender, genderProbability } = result;
      new faceapi.draw.DrawTextField(
        [
          `${faceapi.round(age, 0)} years`,
          `${gender} (${faceapi.round(genderProbability)})`
        ],
        result.detection.box.bottomRight
      ).draw(canvas);
    });
  }, 100);
});


What’s Next?

This series of articles introduced you to TensorFlow.js and helped you get started with machine learning in the browser. We built a project that showed you how to start training your own computer vision AI right in the browser and make it recognize breeds of dogs, human facial expressions, age, and gender. While these are already impressive on their own, this series is only a starting point. There are endless possibilities for AI and ML in the browser. For example, one thing we didn’t do in the series is train an ML model offline and import it into the browser. Feel free to build on top of any of the examples or create something interesting of your own. Don’t forget to share your ideas!

This article is part of the series 'Grumpiness Detection in the Browser with Tensorflow.js'.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


Written By
Student
Ireland
C# Corner MVP, UGRAD alumni, student, programmer, and an author.
