Using mmWave Radar Technology for American Sign Language Gesture Recognition
DOI: https://doi.org/10.13021/jssr2022.3403

Abstract
American Sign Language (ASL) is a form of visual communication used by about 500,000 people. A system for automatic ASL recognition would let people unfamiliar with the language interact more easily with this community. Camera-based systems have been researched in great detail; however, emerging mmWave radar technology offers certain advantages: it affords the user more privacy and works better in dark environments than camera-based systems, which require a constant video feed. We aim to develop a system that consistently recognizes different ASL gestures using data collected from a mmWave radar. A Linux-based system collects a set of point clouds for each gesture from the radar; the data is then processed, and a machine-learning model is trained to predict the gestures. So far, we have configured and tested the radar, collected preliminary data, and written software to process and visualize the data. Going forward, we will collect additional datasets and begin training our machine-learning model. Our completed system will provide an effective solution that allows the large ASL community to communicate directly with more people.
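The pipeline described above (radar point clouds → feature extraction → gesture classification) can be sketched in miniature. This is an illustrative assumption, not the authors' actual implementation: the feature summary (per-axis mean and spread of all points in a gesture) and the nearest-centroid classifier are stand-ins for whatever processing and machine-learning model the project ultimately uses.

```python
import numpy as np

def featurize(frames):
    """Summarize one gesture, given as a list of point-cloud frames
    (each an N x 3 array of x, y, z points), into a fixed-length
    feature vector: per-axis mean and per-axis spread."""
    pts = np.vstack(frames)
    return np.concatenate([pts.mean(axis=0), pts.std(axis=0)])

def train_centroids(samples):
    """samples maps a gesture label to a list of recorded gestures
    (each gesture is a list of frames). Returns one mean feature
    vector (centroid) per label -- a minimal stand-in for model training."""
    return {label: np.mean([featurize(g) for g in gestures], axis=0)
            for label, gestures in samples.items()}

def predict(centroids, frames):
    """Classify a new gesture as the label whose centroid is nearest
    in feature space (Euclidean distance)."""
    f = featurize(frames)
    return min(centroids, key=lambda label: np.linalg.norm(centroids[label] - f))
```

In practice the point clouds would come from the radar's Linux-based capture tooling rather than synthetic arrays, and the centroid classifier would be replaced by the trained machine-learning model; the sketch only shows the data flow from frames to a predicted gesture label.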
License
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.