
ASL-Auto-Complete

A CV implementation for real-time ASL detection, completion, and correction for mobile devices.

Authors:

Setup

To install the versions of the Python libraries used, run:

$ pip install -r requirements.txt

Note that you may also need to install fast-autocomplete's Levenshtein extra for edit-distance matching:

$ pip install fast-autocomplete[levenshtein]

We tested our system on Python versions 3.7 through 3.9.
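As a quick sanity check that fast-autocomplete is installed correctly, here is a minimal sketch of word completion; the vocabulary below is a toy placeholder, not the one used by the app:

# Minimal fast-autocomplete check; the word list is a toy placeholder.
from fast_autocomplete import AutoComplete

words = {"hello": {}, "help": {}, "world": {}}
autocomplete = AutoComplete(words=words)

# Returns the closest matches for the typed prefix, e.g. [['hello'], ['help']]
print(autocomplete.search(word="hel", max_cost=3, size=3))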

Walkthrough

The overall system is composed of MediaPipe's hand landmark detector, data preprocessing steps, a CNN classifier, and a front-end Flask web app that combines the user's webcam with the CNN model to let the user type using only ASL signs. The system is lightweight and suitable for mobile devices on both iOS and Android.
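To illustrate how these pieces fit together, here is a minimal sketch of the core loop, assuming a Keras model trained on flattened MediaPipe landmarks; the model path and label order are assumptions for illustration, not the repo's actual names:

# Sketch of webcam frame -> MediaPipe landmarks -> model prediction.
# The model path and label order are assumptions, not the repo's code.
import cv2
import mediapipe as mp
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("src/Modeling/model_saves/model.h5")  # assumed path
# Assumed class order: letters, number signs 1-4, then space/del.
labels = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ") + ["1", "2", "3", "4", "space", "del"]

hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        pts = result.multi_hand_landmarks[0].landmark
        features = np.array([[p.x, p.y, p.z] for p in pts]).reshape(1, -1)
        print(labels[int(np.argmax(model.predict(features, verbose=0)))])
cap.release()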

Data Collection and Processing

We use a combination of 3 Kaggle datasets:

We use all the letters from the Synthetic ASL Letters set, numbers 1-4 from the Synthetic ASL Numbers set, and the "space" and "del" classes from the ASL Alphabet dataset. We build a training set of 900 images per class, used for both training and validation, and a test set of 100 images per class. In total, the datasets require 10 GB of storage, so they are not included in this repository. Instead, we extract MediaPipe hand landmarks from each image and store them in CSV files under /src/Modeling/data/, organized into training and testing folders. This lets any user experiment with different classification models on the pre-extracted features.
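For reference, here is a minimal sketch of how such landmark features can be extracted and written to CSV; the directory layout, filenames, and label-first column order are assumptions, not the repo's exact ones:

# Extract 21 MediaPipe hand landmarks (x, y, z) per image and append to a CSV.
# Paths and the CSV column layout are illustrative assumptions.
import csv
import os
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

with open("landmarks.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for label in os.listdir("dataset"):  # one folder per class
        for name in os.listdir(os.path.join("dataset", label)):
            image = cv2.imread(os.path.join("dataset", label, name))
            result = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
            if result.multi_hand_landmarks:
                pts = result.multi_hand_landmarks[0].landmark
                row = [label] + [v for p in pts for v in (p.x, p.y, p.z)]
                writer.writerow(row)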

Model Demonstration

  1. Clone the repository using
$ git clone https://github.com/Kennethm-spec/ASL-Auto-Complete.git

or download it as a zip and extract the files.

  2. Navigate to /src/Modeling/train_model.ipynb.
  3. Walk through the notebook's steps to use the pre-extracted MediaPipe landmarks described above.
  4. Feel free to try different models (a minimal example is sketched after this list). We include the one used in our project in the model_saves folder, which achieves 98% accuracy on the test set.
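As an example of trying an alternative model on the pre-extracted features, here is a minimal sketch; the CSV filenames and label-in-first-column layout are assumptions, so adapt them to the files actually found in /src/Modeling/data/:

# Train a simple classifier on the landmark CSVs as an alternative to the CNN.
# Filenames and the label-in-first-column layout are assumptions.
import pandas as pd
from sklearn.metrics import accuracy_score
from sklearn.neural_network import MLPClassifier

train = pd.read_csv("src/Modeling/data/train.csv")  # assumed filename
test = pd.read_csv("src/Modeling/data/test.csv")    # assumed filename

X_train, y_train = train.iloc[:, 1:], train.iloc[:, 0]
X_test, y_test = test.iloc[:, 1:], test.iloc[:, 0]

clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500)
clf.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))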

Webapp Demonstration (Computer)

  1. Navigate to /src/app/main_computer.py. At the very bottom, on line 287, set the 'host' input to your own IPv4 address (see the sketch after this list).
  2. Then, run the file in the environment in which requirements.txt was installed.
  3. You can then access the web app by going to "https://INSERT_YOUR_IPv4_ADDRESS:5003" in your local browser.
  4. The app lets the user type by signing letters, autocomplete a word by signing one of the number signs (1-3), delete characters with the "del" hand sign, and insert spaces with the "space" hand sign. For reference on these signs, see the images in the datasets.
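For orientation, here is a minimal sketch of the host/port setup the steps above refer to; the route body and the ad-hoc TLS context are assumptions, and the repo's main_computer.py may differ:

# Minimal Flask entry point illustrating the host/port configuration.
# The route body and ssl_context choice are assumptions for illustration;
# browsers require HTTPS of some form before granting webcam access, and
# ssl_context="adhoc" needs the pyopenssl package installed.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "ASL Auto-Complete"

if __name__ == "__main__":
    # Replace with your machine's IPv4 address so other devices on the
    # same network can reach the app on port 5003.
    app.run(host="192.168.1.42", port=5003, ssl_context="adhoc")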

Webapp Demonstration (Mobile)

  1. Navigate to /src/app/main_mobile.py and run the script on a host machine connected to a Wi-Fi network to serve the Flask app on that network.
  2. You can then navigate to the host machine's IPv4 address on port 5003 (e.g. https://10.100.100.100:5003) on your phone or other mobile device and enjoy the app.

Paper

A final evaluation is presented in our project report:
