This project integrates emotion recognition based on facial landmarks with hand gesture control for interacting with the Spotify API. It uses the Mediapipe library for facial landmark detection and the CVZone library for hand gesture recognition.
- Emotion recognition using facial landmarks.
- Hand gesture control for Spotify playback.
- Automatic playlist selection based on the detected mood (see the sketch below for how these pieces fit together).
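To make the flow concrete, here is a minimal sketch of the kind of capture loop this combination implies, assuming Mediapipe's FaceMesh for facial landmarks and CVZone's HandDetector for gestures, as the overview states. The `classify_emotion` helper is a hypothetical stand-in for the project's own landmark-based classifier, and the gesture-to-playback mapping is indicated only in comments:

```python
import cv2
import mediapipe as mp
from cvzone.HandTrackingModule import HandDetector

def classify_emotion(face_landmarks):
    """Hypothetical stand-in for the project's landmark-based classifier."""
    return "happy"

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
detector = HandDetector(detectionCon=0.8, maxHands=1)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # Facial landmarks -> mood. Mediapipe expects RGB input.
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        mood = classify_emotion(results.multi_face_landmarks[0])
        # ...select a playlist for this mood, e.g. via
        # sp.start_playback(context_uri=playlist_for[mood])

    # Hand landmarks -> gesture. CVZone draws detections onto the frame.
    hands, frame = detector.findHands(frame)
    if hands:
        fingers = detector.fingersUp(hands[0])  # e.g. [0, 1, 0, 0, 0]
        # ...map finger patterns to playback calls such as sp.next_track()

    cv2.imshow("Spotipie", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```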
- Clone the repository:

  ```bash
  git clone https://github.com/mohittalwar23/Spotipie
  ```

- Install the required packages:

  ```bash
  pip install -r requirements2.txt
  ```
- Set up the Spotify API credentials:
  - You will need a Spotify Premium account for this.
  - Create a Spotify Developer account at https://developer.spotify.com/ and create a new app.
  - Obtain the CLIENT_ID and CLIENT_SECRET, and set the redirect_uri to http://localhost:8080 in the SpotifyOAuth initialization (see the sketch below).
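For reference, a minimal spotipy authentication sketch might look like the following. The credential strings are placeholders, and the scope shown is an assumption (the playback-control scopes are what require a Premium account); the same redirect URI must also be registered in your app's settings on the developer dashboard:

```python
import spotipy
from spotipy.oauth2 import SpotifyOAuth

# Placeholder credentials; use the values from your Spotify Developer dashboard.
sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    redirect_uri="http://localhost:8080",
    # Assumed scope: controlling playback requires these (Premium-only) scopes.
    scope="user-read-playback-state user-modify-playback-state",
))

print(sp.current_user()["display_name"])  # quick sanity check of the token
```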
Ensure you have the following dependencies installed:

- cv2 (opencv-python)
- numpy
- mediapipe
- cvzone
- spotipy
- pyttsx3
- model (custom module)

The csv, copy, itertools, time, and threading modules are part of the Python standard library and require no installation.
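To confirm the third-party packages resolve before running the project, a quick import smoke test such as this (hypothetical, not part of the repository) can help:

```python
# Hypothetical smoke test: each import should succeed if installation worked.
import cv2
import numpy
import mediapipe
import cvzone
import spotipy
import pyttsx3

for pkg in (cv2, numpy, mediapipe, cvzone, spotipy, pyttsx3):
    print(pkg.__name__, getattr(pkg, "__version__", "version unknown"))
```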
Author: Mohit Talwar
Some parts of the first draft were referenced from:
In the second draft, an attempt was made to replicate emotion detection with a CNN:
However, we switched to Mediapipe in the third draft.
The entire emotion detection part used in our code is referenced from:
hand-gesture-recognition-using-mediapipe is under the Apache v2 License, since the facial emotion recognition using Mediapipe was originally referenced from Kazuhito Takahashi (https://twitter.com/KzhtTkhs).