This project uses a convolutional neural network, specifically a ResNet, to analyze facial expressions and predict emotions with 95% accuracy on our test set. Based on the detected emotion, the system recommends music tracks that match the user's current mood.
We chose ResNet for its deep residual learning framework, which makes it possible to train deeper networks by using skip connections (shortcuts) that bypass some layers.
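The core idea behind a skip connection can be sketched in plain Python (this is a toy illustration of the concept, not the project's actual model code):

```python
# Toy illustration of a residual (skip) connection: the block's output is
# F(x) + x, so even when the learned transform F contributes little, the
# identity path lets activations (and gradients) flow through unchanged.

def residual_block(x, transform):
    """Apply a transform and add the input back in (the skip connection)."""
    return [f + xi for f, xi in zip(transform(x), x)]

def tiny_transform(x):
    # Stand-in for a convolutional layer; here it just scales each value.
    return [0.1 * xi for xi in x]

activations = [1.0, 2.0, 3.0]
out = residual_block(activations, tiny_transform)
print(out)  # each output is 0.1*x + x, i.e. roughly 1.1 * x
```

Because the identity path adds the input straight through, gradients reach earlier layers even in very deep stacks, which is what lets ResNets grow to dozens or hundreds of layers.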
To set up your environment, clone the repository, navigate to the project directory, and run the requirements.sh script to install all necessary dependencies.
To use the system, provide an image of a person's face. The system will predict the emotion and suggest music accordingly.
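The flow described above can be sketched as follows (the function names, the stubbed classifier, and the emotion-to-genre mapping here are illustrative placeholders, not the project's actual API):

```python
# Sketch of the inference flow: predict an emotion from a face image,
# then look up music suggestions for that emotion. The mapping below is
# made up for illustration.

MOOD_PLAYLISTS = {
    "happy": ["upbeat pop", "funk"],
    "sad": ["acoustic ballads", "lo-fi"],
    "angry": ["heavy rock", "drum and bass"],
    "neutral": ["ambient", "classical"],
}

def predict_emotion(image_path):
    """Stand-in for the ResNet classifier; a real version would load the
    image, preprocess it, and run a forward pass through the network."""
    return "happy"  # placeholder prediction

def recommend_music(image_path):
    emotion = predict_emotion(image_path)
    return emotion, MOOD_PLAYLISTS.get(emotion, ["ambient"])

emotion, tracks = recommend_music("face.jpg")
print(emotion, tracks)
```

In the real system the placeholder classifier would be replaced by the trained ResNet, and the lookup table by whatever emotion-to-music mapping the project ships with.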
The output is a graphical interface displaying the predicted emotion and a list of music recommendations.

- Model Performance:

In our testing, the ResNet model achieved 95% accuracy in emotion prediction. The system effectively maps emotions to music choices, providing a personalized user experience.
We thank the contributors to the ResNet architecture and the PyTorch community for providing the tools that made this project possible.