Facial-Expression-Recognition

Detecting and recognizing the various human facial expressions using OpenCV and CNN.

Why Emotion Detection?

  • The motivation behind choosing this topic lies in the large investments corporations make in feedback and surveys, often without getting a proportionate return on those investments.
  • Emotion detection through facial gestures is a technology that aims to improve product and service performance by monitoring how customers react to particular products or service staff.

Companies using emotion detection

  • While Disney uses emotion-detection tech to find out opinion on a completed project, other brands have used it to directly inform advertising and digital marketing.
  • Kellogg’s is just one high-profile example, having used Affectiva’s software to test audience reaction to ads for its cereal.
  • Unilever takes a similar approach, using HireVue's AI-powered technology to screen prospective candidates based on factors like body language and mood. In doing so, the company can find the person whose personality and characteristics best suit the job.

Workflow of our project

[Image: project workflow diagram]

Facial Emotion Recognition by CNN

Steps involved:

  • Dataset Collection
  • Image Augmentation
  • Feature Extraction
  • Training & Validation

1. Dataset

The data consists of 48x48-pixel grayscale images of faces. Each face is categorized into one of seven facial-expression classes (0=Angry, 1=Disgust, 2=Fear, 3=Happy, 4=Sad, 5=Surprise, 6=Neutral).

[Image: sample dataset faces]
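The class-index-to-emotion mapping above can be kept in a small lookup table, so a model's numeric predictions can be decoded into readable labels (the function name here is just an illustration, not from the repo):

```python
# The seven class indices and their emotion labels, as listed above.
EMOTIONS = {
    0: "Angry", 1: "Disgust", 2: "Fear",
    3: "Happy", 4: "Sad", 5: "Surprise", 6: "Neutral",
}

def decode_label(class_index: int) -> str:
    """Map a predicted class index to its emotion name."""
    return EMOTIONS[class_index]
```

For example, `decode_label(3)` returns `"Happy"`.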

2. Image Augmentation

More data is generated from the training set by applying transformations. This is needed when the training set is not large enough for the model to learn a good representation. The augmented images are produced by transforming the original training images with rotations, crops, shifts, shears, zooms, flips, reflections, normalization, and so on.
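One common way to apply these transformations in Keras is `ImageDataGenerator`. The sketch below covers the transformations listed above; the parameter values are illustrative assumptions, not taken from the repository:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation pipeline covering the transformations listed above.
# The exact parameter values are illustrative, not from the repo.
datagen = ImageDataGenerator(
    rescale=1.0 / 255,        # normalization
    rotation_range=15,        # rotation
    width_shift_range=0.1,    # horizontal shift
    height_shift_range=0.1,   # vertical shift
    shear_range=0.1,          # shear
    zoom_range=0.1,           # zoom
    horizontal_flip=True,     # flip / reflection
)

# Example: augment a dummy batch of 48x48 grayscale face images.
images = np.random.rand(4, 48, 48, 1).astype("float32")
labels = np.eye(7)[np.random.randint(0, 7, size=4)]
aug_images, aug_labels = next(datagen.flow(images, labels, batch_size=4))
```

Each call to the generator yields a freshly transformed batch, so the model sees slightly different versions of the same faces every epoch.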

3. Feature Extraction

We define our CNN with the following architecture:

  • 4 convolutional layers
  • 2 fully connected layers

The convolutional layers extract relevant features from the images, and the fully connected layers use those features to classify the images.

[Image: CNN architecture diagram]
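The stated architecture (4 convolutional layers followed by 2 fully connected layers) can be sketched in Keras as below. The filter counts, kernel sizes, and pooling placement are illustrative choices, since the source only specifies the layer counts:

```python
from tensorflow.keras import layers, models

def build_model(num_classes: int = 7) -> models.Model:
    """CNN with 4 convolutional and 2 fully connected layers.
    Filter counts and kernel sizes are assumptions, not from the repo."""
    model = models.Sequential([
        layers.Input(shape=(48, 48, 1)),  # 48x48 grayscale faces
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
        layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),             # fully connected 1
        layers.Dense(num_classes, activation="softmax"),  # fully connected 2
    ])
    return model
```

The final softmax layer has one output per emotion class, so its predictions line up with the seven-category labels described in the dataset section.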

4. Training & Validation

The model achieves 65-66% accuracy on the validation set during training. The CNN learns representations of the emotions from the training images. Below are a few epochs of the training process with a batch size of 128.
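A compile-and-fit sketch of the training step is shown below. The batch size of 128 comes from the text; the optimizer, loss, and the tiny stand-in model and random data are assumptions made only to keep the example self-contained:

```python
import numpy as np
from tensorflow.keras import layers, models

# Tiny stand-in model; the real project uses the 4-conv / 2-dense CNN.
model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(8, (3, 3), activation="relu"),
    layers.Flatten(),
    layers.Dense(7, activation="softmax"),
])
# Optimizer and loss are common defaults for multi-class classification,
# not confirmed by the repo.
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Stand-in data: in the real project this would be the (augmented)
# 48x48 grayscale training images and one-hot emotion labels.
x_train = np.random.rand(128, 48, 48, 1).astype("float32")
y_train = np.eye(7)[np.random.randint(0, 7, size=128)]
history = model.fit(x_train, y_train,
                    batch_size=128, epochs=1,
                    validation_split=0.2, verbose=0)
```

`validation_split` holds out a fraction of the data, which is how per-epoch validation accuracy like the 65-66% figure above is measured.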

Demo

[Image: demo screenshot]
