# Gesture Recognition System (GRS)

## Overview

The Gesture Recognition System (GRS) lets users control presentations and other media through hand gestures and speech commands. It combines machine-learning-based gesture recognition with speech-to-text for hands-free interaction, aiming to improve accessibility and provide an intuitive, touchless experience in environments such as business presentations, interactive kiosks, and accessibility-focused applications.

## Features

- **Gesture Recognition**: Recognize custom hand gestures to navigate slides, pause, or perform other controls.
- **Speech Recognition**: Convert spoken commands into actions (e.g., "Next Slide", "Pause").
- **Real-Time Performance**: Low-latency processing for both gestures and speech commands.
- **Customizable**: Easily modify gestures and commands based on specific needs (a sketch of such a mapping follows this list).
- **Cross-Platform**: Works on most modern web browsers using just a camera and microphone.
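
To make the customization point concrete, the sketch below shows one way gesture and speech labels could be mapped to presentation actions. All names here (`COMMAND_ACTIONS`, `next_slide`, the label strings) are hypothetical illustrations, not identifiers from this repository:

```python
# Hypothetical command table: maps recognized gesture/speech labels to
# actions. These names are illustrative only and do not come from the
# GRS codebase.

def next_slide():
    print("Advancing to the next slide")

def toggle_pause():
    print("Toggling pause/play")

COMMAND_ACTIONS = {
    "swipe_right": next_slide,   # gesture label
    "fist": toggle_pause,        # gesture label
    "next slide": next_slide,    # speech command
    "pause": toggle_pause,       # speech command
}

def dispatch(label: str) -> None:
    """Run the action for a recognized label; ignore unknown labels."""
    action = COMMAND_ACTIONS.get(label.lower())
    if action:
        action()

if __name__ == "__main__":
    dispatch("swipe_right")  # prints "Advancing to the next slide"
    dispatch("Pause")        # prints "Toggling pause/play"
```

Under a design like this, customizing the controls would amount to editing entries in the table.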

## Technologies Used

- **Frontend**: React (with Vite for fast builds and hot-reloading)
- **Backend**: Flask (Python)
- **Gesture Recognition**: TensorFlow.js / MediaPipe (for hand gesture tracking)
- **Speech Recognition**: Web Speech API or external libraries
- **Machine Learning**: TensorFlow, Keras, or other relevant frameworks

## Getting Started

### Prerequisites

Before running the project, ensure you have the following installed:

- **Node.js** (>= 18.x)
- **Python** (>= 3.x)
- **pip** (for installing Python packages)

### 1. Clone the Repository

Clone the repository to your local machine:

```bash
git clone https://github.com/skyback12/S07-MINOR-PROJECT-GRS.git
cd S07-MINOR-PROJECT-GRS
```

### 2. Install Frontend Dependencies

Navigate to the frontend directory and install the required dependencies:

```bash
cd frontend
npm install
```

### 3. Set Up the Backend

Navigate to the backend directory, create a Python virtual environment (optional but recommended), and install the required packages:

```bash
cd backend
python -m venv venv
source venv/bin/activate  # For Linux/macOS
venv\Scripts\activate     # For Windows
pip install -r requirements.txt
```
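
For orientation, a `requirements.txt` for a Flask + ML stack like this typically lists Flask plus the gesture/ML libraries. The contents below are an illustrative guess, not a copy of the repository's file:

```text
flask
flask-cors
tensorflow
mediapipe
```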

### 4. Run the Application

#### Frontend

Start the React development server:

```bash
cd frontend
npm run dev
```

#### Backend

Start the Flask backend server:

```bash
cd backend
python app.py
```
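
If you are exploring the codebase, a minimal Flask `app.py` for a setup like this might look as follows. The `/api/gesture` route, payload shape, and port are assumptions for illustration, not the project's actual API:

```python
# Minimal Flask backend sketch. The /api/gesture route, payload shape,
# and port are illustrative assumptions, not the actual GRS API.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/gesture", methods=["POST"])
def gesture():
    # Expect JSON like {"label": "swipe_right"} from the frontend.
    data = request.get_json(silent=True) or {}
    label = data.get("label", "unknown")
    return jsonify({"received": label, "status": "ok"})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000, debug=True)
```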

### 5. Open the Application

Once both servers are running, navigate to http://localhost:3000 (or the URL printed by the Vite dev server) in your browser to use the Gesture Recognition System.

## Usage

- **Gesture Controls**: Use predefined or custom hand gestures to interact with the system (a hedged detection sketch follows this list). For example:
  - Swipe left or right to change slides.
  - Make a fist to pause or play the presentation.
- **Speech Commands**: Speak commands to control the system. Some examples include:
  - "Next slide"
  - "Pause"
  - "Start presentation"

## Contributing

Contributions are welcome! If you'd like to contribute, follow the steps below:

1. Fork the repository.
2. Create a new branch (`git checkout -b feature-name`).
3. Make your changes.
4. Commit your changes (`git commit -am 'Add feature'`).
5. Push the branch (`git push origin feature-name`).
6. Open a pull request with a description of your changes.

## Acknowledgments

- Thanks to the contributors of the gesture recognition libraries (e.g., MediaPipe, TensorFlow.js) used in this project.
- Special thanks to the developers of Flask and React for providing excellent frameworks on which to build the system.