Deep Learning for ASL recognition

American Sign Language (ASL) is the primary language of many deaf individuals in North America, and it is also used by hard-of-hearing and hearing people. The language is as rich as spoken languages and employs signs made with the hands, along with facial expressions and body postures.

A lot of recent progress has been made towards developing computer vision systems that translate sign language to spoken language. This technology often relies on complex neural network architectures that can detect subtle patterns in streaming video. However, as a first step towards understanding how to build a translation system, we can reduce the scope of the problem by translating individual letters instead of full sentences.

In this notebook I use Convolutional Neural Networks (CNNs) to analyze and recognize American Sign Language letter symbols.

I begin by loading the data, displaying the images, and preprocessing them. I then build a deep learning model with over 1 million parameters and test it on two different datasets.
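
As a rough sketch of that pipeline, the snippet below builds a comparable Keras model: a small stack of convolution, pooling, and dense layers that lands just over one million parameters. The image size, layer widths, class count, and optimizer are my assumptions for illustration, not values taken from the notebook.

```python
# A minimal sketch, assuming Keras and small RGB letter images;
# the image size, layer widths, and class count are illustrative guesses.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.utils import to_categorical

NUM_CLASSES = 26          # assumption: one class per ASL letter
IMG_SHAPE = (50, 50, 3)   # assumption: 50x50 RGB images

def preprocess(images, labels):
    """Scale pixel values to [0, 1] and one-hot encode the labels."""
    x = images.astype("float32") / 255.0
    y = to_categorical(labels, NUM_CLASSES)
    return x, y

model = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=IMG_SHAPE),
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(128, activation="relu"),   # this layer carries most of the ~1M parameters
    Dropout(0.5),
    Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()  # prints the exact parameter count (just over 1 million here)
```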

Finally, I use the model to plot confusion matrices for both test sets, which reveal a wealth of information about the inner workings of the deep learning model.
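
A confusion matrix for one of the test sets could be drawn along these lines, assuming scikit-learn and matplotlib are available; `model`, `x_test`, and `y_test` refer to the hypothetical objects from the sketch above, not identifiers from the notebook.

```python
# A sketch of the evaluation step; all names here are assumptions carried
# over from the model sketch above.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

def plot_confusion(model, x_test, y_test, class_names, title):
    """Predict on one test set and plot its confusion matrix."""
    y_pred = np.argmax(model.predict(x_test), axis=1)
    y_true = np.argmax(y_test, axis=1)  # undo the one-hot encoding
    cm = confusion_matrix(y_true, y_pred)
    disp = ConfusionMatrixDisplay(cm, display_labels=class_names)
    disp.plot(cmap="Blues", xticks_rotation="vertical")
    plt.title(title)
    plt.show()

# e.g. one call per test set:
# plot_confusion(model, x_test, y_test, list("ABCDEFGHIJKLMNOPQRSTUVWXYZ"), "Test set 1")
```

Bright off-diagonal cells in such a plot point to pairs of letters the model confuses, which is the kind of information the matrices surface.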

The data is taken from kaggle.com; the data files can be found here and here.
