# Gesture Recognition System (GRS)
## Overview
The Gesture Recognition System (GRS) lets users control presentations and other media through hand gestures and speech commands. It combines machine learning for gesture recognition with speech-to-text for hands-free interaction, aiming to improve accessibility and provide an intuitive, touchless user experience in environments such as business presentations, interactive kiosks, and accessibility-focused applications.
## Features
- **Gesture Recognition**: Recognize custom hand gestures to navigate slides, pause, or perform other controls.
- **Speech Recognition**: Convert spoken commands into actions (e.g., "Next Slide", "Pause").
- **Real-Time Performance**: Low-latency processing for both gestures and speech commands.
- **Customizable**: Easily modify gestures and commands based on specific needs.
- **Cross-Platform**: Works in most modern web browsers using just a camera and a microphone.
## Technologies Used
- **Frontend**: React (with Vite for fast builds and hot-reloading)
- **Backend**: Flask (Python)
- **Gesture Recognition**: TensorFlow.js / MediaPipe (for hand gesture tracking)
- **Speech Recognition**: Web Speech API or external libraries
- **Machine Learning**: TensorFlow, Keras, or other relevant frameworks
## Getting Started
### Prerequisites
Before running the project, ensure you have the following installed:
- **Node.js** (>= 18.x)
- **Python** (>= 3.x)
- **pip** (for installing Python packages)
### 1. Clone the Repository
Clone the repository to your local machine:
```bash
git clone https://github.com/skyback12/S07-MINOR-PROJECT-GRS.git
cd S07-MINOR-PROJECT-GRS
```
### 2. Install Frontend Dependencies
Navigate to the `frontend` directory and install the required dependencies:
```bash
cd frontend
npm install
```
### 3. Install Backend Dependencies
Navigate to the `backend` directory, create a Python virtual environment (optional but recommended), and install the required packages:
```bash
cd backend
python -m venv venv
source venv/bin/activate   # For Linux/macOS
venv\Scripts\activate      # For Windows
pip install -r requirements.txt
```
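The contents of `requirements.txt` live in the repository and are not reproduced here; purely as an illustration, a minimal file for this stack might list dependencies like the following (the package names are assumptions based on the technologies above, not the project's actual pins):
```text
# Hypothetical requirements.txt -- check the repository for the real file
flask
flask-cors
tensorflow
mediapipe
```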
### 4. Run the Application
Start the React development server:
```bash
cd frontend
npm run dev
```
Start the Flask backend server:
```bash
cd backend
python app.py
```
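The repository's `app.py` is the authoritative entry point; as a rough sketch only, a minimal Flask backend for this kind of system could look like the following. The `/api/command` route, the port, and the request shape are illustrative assumptions, not the project's actual API:
```python
# Hypothetical minimal Flask backend -- illustration only; see the repo's app.py.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Commands the frontend might send after recognizing a gesture or phrase.
SUPPORTED_COMMANDS = {"next_slide", "previous_slide", "pause", "start_presentation"}

@app.route("/api/command", methods=["POST"])
def handle_command():
    """Accept a recognized command from the frontend and acknowledge it."""
    payload = request.get_json(silent=True) or {}
    command = payload.get("command")
    if command not in SUPPORTED_COMMANDS:
        return jsonify({"status": "error", "message": f"unknown command: {command}"}), 400
    # A real implementation would forward the command to the presentation layer here.
    return jsonify({"status": "ok", "command": command})

if __name__ == "__main__":
    app.run(port=5000, debug=True)
```
If the frontend calls such an endpoint from a different origin (e.g., port 3000 talking to port 5000), CORS would need to be enabled, for example with the `flask-cors` package.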
After both servers are running, navigate to http://localhost:3000 (or whichever URL the Vite dev server prints on startup) in your browser to use the Gesture Recognition System.
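To sanity-check the backend independently of the UI, one could POST a command to the hypothetical endpoint sketched above (this assumes the illustrative `/api/command` route, not a documented API):
```python
# Quick smoke test of the hypothetical /api/command endpoint (requires `requests`).
import requests

resp = requests.post("http://localhost:5000/api/command", json={"command": "pause"})
print(resp.status_code, resp.json())  # expect: 200 {'status': 'ok', 'command': 'pause'}
```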
## Usage
- **Gesture Controls**: Use predefined or custom hand gestures to interact with the system (a landmark-based detection sketch follows this list). For example:
  - Swipe left/right to change slides.
  - Make a fist to pause or play the presentation.
- **Speech Commands**: Speak commands to control the system (a command-dispatch sketch also follows below). Some examples include:
  - "Next slide"
  - "Pause"
  - "Start presentation"
## Contributing
Contributions are welcome! If you'd like to contribute, follow the steps below:
1. Fork the repository.
2. Create a new branch (`git checkout -b feature-name`).
3. Make your changes.
4. Commit your changes (`git commit -am 'Add feature'`).
5. Push the branch (`git push origin feature-name`).
6. Open a pull request with a description of your changes.
## Acknowledgments
- Thanks to the contributors of the gesture recognition libraries (e.g., MediaPipe, TensorFlow.js) used in this project.
- Special thanks to the developers of Flask and React for providing excellent frameworks to build the system.