Using mmWave Radar Technology for American Sign Language Gesture Recognition

Authors

  • Raghav Tirumale, Aspiring Scientists’ Summer Internship Program Intern
  • Farina Faiz, Aspiring Scientists’ Summer Internship Program Co-mentor
  • Dr. Parth Pathak, Aspiring Scientists’ Summer Internship Program Primary Mentor

DOI:

https://doi.org/10.13021/jssr2022.3403

Abstract

American Sign Language (ASL) is a form of visual communication used by about 500,000 people. A system for automatic ASL recognition would allow people unfamiliar with the language to interact more easily with this community. Camera-based systems have been researched in great detail; however, emerging mmWave radar technology offers certain advantages: mmWave radars preserve more of the user's privacy and perform better in dark environments than camera-based systems, which require a constant video feed. We aim to develop a system that consistently recognizes different ASL gestures using data collected from a mmWave radar. A Linux-based system collects a set of point clouds for each gesture from the radar. The data is then processed, and a machine-learning model is trained to predict the gestures. So far, we have configured and tested the radar, collected preliminary data, and written software to process and visualize the data. Going forward, we will collect additional datasets and begin training our machine-learning model. The completed system will provide an effective solution that allows the large ASL community to communicate directly with more people.
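The pipeline the abstract describes (radar point clouds → feature extraction → gesture classification) can be sketched minimally as below. This is an illustrative example, not the study's implementation: the gesture names, the 4-column point-cloud layout (x, y, z, Doppler), the synthetic data, and the nearest-centroid classifier are all assumptions made for demonstration.

```python
import numpy as np

def cloud_features(cloud):
    """Summarize one point cloud (N x 4: x, y, z, doppler) as a fixed-length vector.

    Per-axis mean and spread is a simple hand-rolled feature set; a real system
    would likely use richer temporal features or a learned representation.
    """
    return np.concatenate([cloud.mean(axis=0), cloud.std(axis=0)])

def train_centroids(samples, labels):
    """Compute one mean feature vector (centroid) per gesture label."""
    feats = np.array([cloud_features(c) for c in samples])
    labels = np.array(labels)
    return {g: feats[labels == g].mean(axis=0) for g in set(labels)}

def predict(centroids, cloud):
    """Assign the gesture whose centroid is nearest in feature space."""
    f = cloud_features(cloud)
    return min(centroids, key=lambda g: np.linalg.norm(centroids[g] - f))

# Synthetic stand-ins for radar captures: two "gestures" with distinct
# spatial/Doppler signatures (hypothetical labels, not from the study).
rng = np.random.default_rng(0)
hello = [rng.normal([0.0, 0.0, 1.0, 0.5], 0.1, size=(64, 4)) for _ in range(10)]
thanks = [rng.normal([0.0, 0.5, 1.0, -0.5], 0.1, size=(64, 4)) for _ in range(10)]
samples = hello + thanks
labels = ["hello"] * 10 + ["thanks"] * 10

centroids = train_centroids(samples, labels)
test_cloud = rng.normal([0.0, 0.5, 1.0, -0.5], 0.1, size=(64, 4))
print(predict(centroids, test_cloud))
```

A nearest-centroid classifier is only a baseline; it stands in here for whatever model the authors ultimately train, to show how per-gesture point-cloud sets map to labels.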

Published

2022-12-13

Section

College of Engineering and Computing: Department of Computer Science
