
Emotion-Based Music Recommendation System

Description

This project uses a convolutional neural network (CNN), specifically a ResNet, to analyze facial expressions and predict emotions, reaching 95% accuracy in our tests. Based on the detected emotion, the system recommends music tracks that match the user's current mood.

Model Details

We use a ResNet because its deep residual learning framework makes deeper networks trainable: skip connections (shortcuts) let each block learn a residual on top of its input and allow gradients to bypass layers, mitigating the vanishing-gradient problem. A minimal sketch of such a block follows.
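As an illustration of the idea (a sketch only, not the repository's actual code; the class name and fixed channel count are hypothetical):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A basic ResNet-style block: two 3x3 convolutions plus a skip connection."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x                            # the shortcut path
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + identity                    # skip connection: add the input back
        return self.relu(out)
```

Because each block only has to learn a residual on top of its input rather than a full new mapping, many such blocks can be stacked while remaining trainable.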

Requirements

Please refer to the requirements.sh script to install all necessary dependencies.

Setup Instructions

Clone the repository (`git clone https://github.com/narayan123411/Emotion-Driven-Music-Recommendations.git`), change into the project directory, and run the requirements.sh script (e.g. `bash requirements.sh`) to set up your environment.

Usage

To use the system, provide an image of a person's face; the system predicts the emotion and suggests music accordingly. A sketch of what this inference path might look like is shown below.
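As a rough sketch (not the repository's actual API: the label set, playlist mapping, checkpoint name, and the choice of ResNet-18 are all assumptions for illustration):

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Hypothetical label set and playlist mapping; the project's actual
# classes and music catalogue may differ.
EMOTIONS = ["angry", "happy", "neutral", "sad", "surprise"]
PLAYLISTS = {"happy": ["Upbeat Track A", "Upbeat Track B"],
             "sad": ["Mellow Track C"]}

# Standard ImageNet-style preprocessing for a ResNet input.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def recommend(image_path: str, model: torch.nn.Module) -> list[str]:
    """Predict an emotion from a face image and return matching tracks."""
    img = Image.open(image_path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)        # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
    emotion = EMOTIONS[logits.argmax(dim=1).item()]
    return PLAYLISTS.get(emotion, [])

# A ResNet-18 with its classification head resized to the emotion classes;
# in practice you would load trained weights first, e.g.
# model.load_state_dict(torch.load("emotion_resnet.pt"))  # hypothetical checkpoint
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, len(EMOTIONS))
model.eval()

print(recommend("face.jpg", model))
```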

Data Pre-Processing

  1. Data equalization: balance the class distribution of the facial-expression dataset before training.

  2. SMOTE re-sampling: synthesize additional examples for under-represented emotion classes (a minimal sketch follows this list).
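As a hedged sketch of the re-sampling step (using toy data rather than the project's images; SMOTE interpolates in feature space, so image data is typically flattened or embedded first):

```python
import numpy as np
from imblearn.over_sampling import SMOTE

# Toy stand-in for flattened face images or feature embeddings:
# 300 samples with 64 features each, heavily imbalanced labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))
y = np.array([0] * 250 + [1] * 50)

# SMOTE creates synthetic minority-class samples by interpolating
# between a sample and its nearest minority-class neighbours.
sm = SMOTE(random_state=42)
X_res, y_res = sm.fit_resample(X, y)

print(np.bincount(y_res))  # both classes now have 250 samples
```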

Model Performance

The trained ResNet reaches 95% emotion-classification accuracy in our tests.

Output

The output is a graphical interface that displays the predicted emotion alongside a list of recommended tracks; see the example screenshots in the repository.

Conclusion

In our testing, the ResNet model achieved 95% accuracy in emotion prediction. The system effectively maps emotions to music choices, providing an innovative, personalized user experience.

Acknowledgments

We thank the contributors to the ResNet architecture and the PyTorch community for providing the tools that made this project possible.

About

EmoVibe: Personalized Emotion-Driven Music Recommendations
