Interpreting Hand Gestures and Sign Language in the Webcam with AI using TensorFlow.js

15 Jul 2020 · CPOL · 3 min read
In this article, we will take photos of different hand gestures via webcam and use transfer learning on a pre-trained MobileNet model to build a computer vision AI that can recognize the various gestures in real time.
Here we look at: detecting hand gestures, creating our starting point and using it to detect four categories: None, Rock, Paper, and Scissors, and then adding some American Sign Language (ASL) categories to explore how much harder it is for the AI to detect other gestures.

This article is part of the series "Face Touch Detection with TensorFlow.js".


This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

Written By
United States
Raphael Mun is a tech entrepreneur and educator who has been developing software professionally for over 20 years. He currently runs Lemmino, Inc and teaches and entertains through his Instafluff livestreams on Twitch building open source projects with his community.
