
Emotion-Recognition


Emotion recognition on python + pytorch. Made as diploma work for Belarusian State University of Informatics and Radioelectronics.

Stable (a bit)

Work on the project is paused, so you may run into bugs.

The main ideas I wanted to implement in this app are already done, which is enough for the diploma work, but I might continue developing this project in the future.

For those who want an accurate emotion recognition model: you won't find it here, but maybe in the future...

Preview

settings tab

learning tab

camera tab

cloud tab

Requirements (running from sources)

If you want to run the project from sources, you need to meet the following requirements.

Python

This project was written on Python 3.10.2. Since the mega package (used to connect to the cloud drive) is deprecated, this app won't run on Python >= 3.12.
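A minimal start-up guard for this constraint might look like the following sketch (not the project's actual code):

```python
import sys

# mega (the cloud-drive client) is deprecated and breaks on Python >= 3.12,
# so check the interpreter version before importing it.
supported = sys.version_info < (3, 12)
print("ok" if supported else "unsupported interpreter: use Python 3.10.x")
```
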

CUDA + cuDNN (recommended, but not strictly required)

This project can run on CUDA, which makes emotion detection, face detection, and the project itself run much faster. The catch is that you need to install it yourself; to check whether your installation works, run the project (jump to the Running section) and see if the model is running on CUDA.

The project was written and tested on CUDA 11.8.

If your OS is Ubuntu, you can use this guide to install CUDA + cuDNN.

Pytorch

Use the official install guide

IMPORTANT: even if you won't use CUDA, you still need to install the CUDA version of PyTorch.
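As a quick sanity check, you can ask PyTorch whether it sees CUDA (a minimal sketch, guarded so it also runs before PyTorch is installed):

```python
# Check whether PyTorch was built with, and can reach, CUDA.
try:
    import torch
    cuda_ok = torch.cuda.is_available()
    device = torch.cuda.get_device_name(0) if cuda_ok else "cpu"
except ImportError:  # PyTorch not installed yet
    cuda_ok, device = False, "cpu"

print(f"CUDA available: {cuda_ok}, running on: {device}")
```

If this prints `cpu` after a CUDA build of PyTorch was installed, the CUDA/cuDNN installation itself is the usual suspect.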

Dlib

Required for face detection.

Installing dlib requires cmake, which can be installed with:

sudo apt install cmake

If you don't use CUDA, just run pip3 install dlib.

If you want to install dlib with CUDA (note: installing with pip did not enable CUDA for me), visit the official dlib site and download dlib.

Enter the downloaded directory and run sudo python3 setup.py install.

While the installation is starting, watch the logs for lines like these:

-- Looking for cuDNN install...
-- Found cuDNN: /usr/lib/x86_64-linux-gnu/libcudnn.so
-- Enabling CUDA support for dlib.  DLIB WILL USE CUDA, compute capabilities: 50

If you see them, you can continue the installation; otherwise check your CUDA and cuDNN installation.

After the installation is complete, run:

python3
import dlib
dlib.DLIB_USE_CUDA

The output must be True.

Required python packages

Just run the following command:

pip3 install -r requirements.txt

IMPORTANT: requirements.txt does not include the dlib and PyTorch packages.

Running

General information

Use the same working directory for every run of the project; otherwise the dataset folder and the project config will be lost.

The program uses the emotion_recognition_data folder to store the model and dataset.
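This is why the working directory matters: conceptually, the data folder is resolved against the current working directory, roughly like this sketch (the actual project code may differ):

```python
from pathlib import Path

# The model and dataset live under this folder, resolved against the
# current working directory -- launching the program from a different
# directory makes it create a fresh, empty folder instead.
DATA_DIR = Path.cwd() / "emotion_recognition_data"
DATA_DIR.mkdir(exist_ok=True)

print(DATA_DIR.name)  # → emotion_recognition_data
```
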

If you wish to use a pre-trained recognition model, you can download it from here and unarchive it in the working directory so the program can see it.

You can also try to train the model on your own images; jump to the "Developing/Dataset" section to see what you need to do.

Note: currently the emotions are hardcoded, so if you want to remove or add emotions you need to modify DeepLearning/dataset_parser.py. Also, if you change the model structure, the project won't be able to run; delete the old model and run the project again to create a fresh one.
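For orientation, the hardcoded mapping is conceptually a name-to-index table like the one below. The names here are placeholders; the authoritative list lives in DeepLearning/dataset_parser.py:

```python
# Placeholder emotion labels -- the real list is hardcoded in
# DeepLearning/dataset_parser.py. Adding or removing an entry changes the
# size of the model's output layer, which is why an old model file becomes
# incompatible with the new structure and must be deleted.
EMOTIONS = ["angry", "happy", "neutral", "sad"]
EMOTION_TO_INDEX = {name: i for i, name in enumerate(EMOTIONS)}

print(EMOTION_TO_INDEX["happy"])  # → 1
```
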

Running from sources

Clone this repository: git clone https://github.com/raik199x/Emotion-Recognition

Enter the downloaded directory: cd Emotion-Recognition

Run python3 main.py to start the program.

Using release binaries

Choose a suitable release version and unarchive the folder.

Run the emotion_recognition binary like any other Linux binary.

If errors occur, check the Troubleshooting section.

Developing

Dataset

By default the parser looks for the dataset under the emotion_recognition_data folder, but you can change this by modifying a variable in shared.py.

The dataset folder must contain two folders: test and train. Each of them must contain one folder per emotion, named after the emotion type and holding 48x48 images that will be parsed and used for training and testing.

Note: reminder that the emotions are hardcoded; check the Running section.
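The expected layout can be sketched and verified like this (the emotion names are placeholders; the real hardcoded list is in DeepLearning/dataset_parser.py):

```python
from pathlib import Path
import tempfile

# Expected dataset layout:
#   <dataset_root>/train/<emotion>/*.png
#   <dataset_root>/test/<emotion>/*.png
EMOTIONS = ["angry", "happy", "neutral"]  # placeholder names

def validate_layout(root: Path) -> list[str]:
    """Return the list of missing <split>/<emotion> folders (empty if valid)."""
    missing = []
    for split in ("train", "test"):
        for emotion in EMOTIONS:
            if not (root / split / emotion).is_dir():
                missing.append(f"{split}/{emotion}")
    return missing

# Build a minimal valid layout in a temp dir and check it.
root = Path(tempfile.mkdtemp())
for split in ("train", "test"):
    for emotion in EMOTIONS:
        (root / split / emotion).mkdir(parents=True)

print(validate_layout(root))  # → []
```
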

Translations

You can add your own translation in one of two ways:

  1. (Easy) If you don't add anything new to the code, you can just copy the file assets/translations/русский.ts and rewrite the translations for your language.
  2. Using the PySide toolkit, you can generate a translation file with the following command:
pyside6-lupdate gui/mainwindow.py \
gui/tabs/camera_tab.py \
gui/tabs/chat_tab.py \
gui/tabs/learning_tab.py \
gui/tabs/settings_tab.py \
gui/tabs/storage_tab.py \
gui/custom_widgets/abstract_storage_cloud.py \
gui/custom_widgets/add_storage_dialog.py \
gui/custom_widgets/learning_statistics_table.py \
-ts assets/translations/<LANGUAGE_NAME>.ts

If you create new files that require translation, they must be added to the command.

After generating the .ts file and translating the app, generate the .qm file with the following command:

pyside6-lrelease assets/translations/<LANGUAGE-NAME>.ts -qm assets/translations/<LANGUAGE-NAME>.qm

and the translation is added!

You can add a button that sets the app to the new language, or just modify .project_config.ini to use your new language.

Deploying

For deployment the pyinstaller package is used, which can be installed with:

pip3 install pyinstaller

Enter the pyinstaller/ directory and run the following command:

pyinstaller emotion_recognition.spec

The dist/ folder will contain the created binary.

Troubleshooting

P - problem. S - solution.


P:

qt.qpa.plugin: From 6.5.0, xcb-cursor0 or libxcb-cursor0 is needed to load the Qt xcb platform plugin.
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "~/.local/lib/python3.9/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.

Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vkkhrdisplay, vnc, wayland-egl, wayland

S:

sudo apt install libxcb-cursor0

P:

Could not load library libcublasLt.so.12. Error: libcublasLt.so.12: cannot open shared object file: No such file or directory
Invalid handle. Cannot load symbol cublasLtCreate

S: You need to install libcublasLt.so.12.


P: ImportError: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.36' not found (required by /tmp/_MEIslx9PB/libstdc++.so.6)

S:

sudo apt update
sudo apt install libc6

If that did not help, run the project from sources. Why

Useful links

Pytorch under a day -- a nice introduction to PyTorch; it took me about 19 of its 24 hours at 1.5x speed.

Dmitriy Pertsev -- my coach and reviewer
