See All is a web application that allows users to analyze and transcribe live video streams using AI services. It enables streamers to broadcast using OBS and interact with an AI that provides real-time feedback and advice about their stream, such as gaming tips for specific video games.
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
Docker and Docker Compose are required to run the project. Docker is used for containerization, ensuring consistent environments across different setups, and Docker Compose simplifies the management of multi-container applications.
- Clone the repository:

  ```bash
  git clone https://github.com/rajahafify/see-all.git
  ```

- Start the project:

  ```bash
  docker-compose up --build
  ```

  After running this command, the application should be available at http://localhost:3000.
See All is built using a microservices architecture with the following main components:
- Web Application (Rails)
- RTMP Server (Nginx-RTMP)
- Stream Processing Service
- AI Interaction Service
- WebSocket Server (Action Cable)
- Database (SQLite)
- Job Queue (Sidekiq + Redis)
Web Application
- Handles user authentication and stream management
- Provides UI for streamers to interact with AI
- Manages WebSocket connections for real-time updates
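
As a rough illustration of the stream-management side, here is a minimal sketch of a controller that creates a stream record for the signed-in user; the `Stream` model, `stream_key` column, and Devise-style `authenticate_user!` filter are assumptions, not the project's actual code.

```ruby
# app/controllers/streams_controller.rb (hypothetical sketch)
class StreamsController < ApplicationController
  before_action :authenticate_user!  # assumes Devise-style authentication

  # Create a stream record and give the streamer a key to paste into OBS.
  def create
    @stream = current_user.streams.create!(
      title: params[:title],             # strong params omitted for brevity
      stream_key: SecureRandom.hex(16),  # key OBS publishes with
      status: "pending"
    )
    redirect_to @stream
  end
end
```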
RTMP Server
- Receives video streams from OBS
- Forwards streams to the Stream Processing Service
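
nginx-rtmp can notify an HTTP endpoint when a publish starts (for example via its `on_publish` directive). Below is a hedged sketch of what the receiving Rails endpoint might look like; the controller, route, and `Stream` lookup are assumptions.

```ruby
# app/controllers/rtmp_callbacks_controller.rb (hypothetical sketch)
class RtmpCallbacksController < ApplicationController
  skip_before_action :verify_authenticity_token  # nginx-rtmp posts plain form data

  # Handles an on_publish-style callback; the stream key arrives as the RTMP "name".
  def publish
    stream = Stream.find_by(stream_key: params[:name])
    if stream
      stream.update!(status: "live")
      head :ok         # a 2xx response tells nginx-rtmp to accept the publish
    else
      head :forbidden  # unknown key: reject the publish
    end
  end
end
```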
Stream Processing Service
- Generates frames from incoming video streams
- Analyzes frames for game detection and other relevant information
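
Frame generation is commonly done by shelling out to ffmpeg. The snippet below is a sketch of that idea only, assuming an nginx-rtmp application named `live` and a local relay URL; it is not the service's actual implementation.

```ruby
# Hypothetical sketch: capture one frame per second from the RTMP relay with ffmpeg.
require "fileutils"

def extract_frames(stream_key, out_dir: "frames/#{stream_key}")
  FileUtils.mkdir_p(out_dir)
  system(
    "ffmpeg", "-i", "rtmp://localhost/live/#{stream_key}",  # assumed relay URL
    "-vf", "fps=1",                                         # one frame per second
    "#{out_dir}/frame_%06d.jpg"                             # numbered JPEG output
  )
end
```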
AI Interaction Service
- Processes stream data and user queries
- Generates relevant responses and advice using AI models
- Integrates with external AI APIs (e.g., the Anthropic API)
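
A bare-bones sketch of calling the Anthropic Messages API from Ruby follows; the model name, prompt shape, and the helper name `ask_ai` are placeholders rather than the project's real client code.

```ruby
require "net/http"
require "json"

# Hypothetical sketch: ask the model for advice about the current game state.
def ask_ai(question, context)
  response = Net::HTTP.post(
    URI("https://api.anthropic.com/v1/messages"),
    {
      model: "claude-3-5-sonnet-latest",  # placeholder model name
      max_tokens: 512,
      messages: [{ role: "user", content: "#{context}\n\n#{question}" }]
    }.to_json,
    "x-api-key" => ENV.fetch("ANTHROPIC_API_KEY"),
    "anthropic-version" => "2023-06-01",
    "content-type" => "application/json"
  )
  JSON.parse(response.body).dig("content", 0, "text")
end
```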
WebSocket Server
- Facilitates real-time communication between the web app and AI service
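
For reference, a per-stream Action Cable channel could look like the sketch below; the channel name, params, and broadcast payload are illustrative assumptions.

```ruby
# app/channels/stream_channel.rb (hypothetical sketch)
class StreamChannel < ApplicationCable::Channel
  def subscribed
    # Each stream gets its own broadcasting, so subscribers only see their updates.
    stream_from "stream_#{params[:stream_id]}"
  end
end

# Server-side push (e.g. after the AI answers):
# ActionCable.server.broadcast("stream_42", { role: "ai", text: "Try rotating earlier." })
```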
Database
- Stores user information, stream metadata, and interaction history
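
One possible (assumed) shape for that data, written as a Rails migration sketch:

```ruby
# Hypothetical schema sketch (migration version is a placeholder)
class CreateCoreTables < ActiveRecord::Migration[7.1]
  def change
    create_table :streams do |t|
      t.references :user, null: false, foreign_key: true
      t.string :title
      t.string :stream_key, index: { unique: true }
      t.string :status, default: "pending"
      t.timestamps
    end

    create_table :ai_interactions do |t|
      t.references :stream, null: false, foreign_key: true
      t.text :question
      t.text :answer
      t.timestamps
    end
  end
end
```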
Job Queue
- Manages background jobs for stream processing and AI interactions
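
A Sidekiq job for frame analysis might look roughly like this sketch; the class name and the `GameDetector` helper are assumptions.

```ruby
# app/sidekiq/frame_analysis_job.rb (hypothetical sketch)
class FrameAnalysisJob
  include Sidekiq::Job  # Sidekiq 7 naming; older releases use Sidekiq::Worker

  # Enqueued by the stream processor for each captured frame; the result is
  # pushed to the streamer's channel instead of being returned.
  def perform(stream_id, frame_path)
    result = GameDetector.call(frame_path)  # assumed analysis service object
    ActionCable.server.broadcast("stream_#{stream_id}",
                                 { event: "frame_analyzed", game: result })
  end
end

# Enqueue from anywhere in the app:
# FrameAnalysisJob.perform_async(stream.id, "frames/abc/frame_000001.jpg")
```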
Key features:
- AI-powered game advice and interaction
- Real-time chat between streamer and AI
A typical streaming session works as follows:
- Streamer starts a broadcast using OBS
- RTMP server receives the stream and notifies the web application
- Stream Processing Service begins generating and analyzing frames
- AI Interaction Service processes stream data and awaits user queries
- Streamer interacts with AI through the web interface
- WebSocket server facilitates real-time communication between components
- AI responses are sent back to the streamer in real-time
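
Tying the steps together, the sketch below shows one plausible path for a streamer's question: a background job calls the AI service and pushes the answer back over Action Cable. All names are assumptions, and `ask_ai` refers to the AI Interaction sketch above.

```ruby
# Hypothetical glue code: answer a streamer's question in the background.
class AiReplyJob
  include Sidekiq::Job

  def perform(stream_id, question)
    context = latest_stream_context(stream_id)  # assumed helper: recent frames / detected game
    answer  = ask_ai(question, context)         # see the AI Interaction sketch above
    ActionCable.server.broadcast("stream_#{stream_id}", { role: "ai", text: answer })
  end
end
```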
Tech stack:
- Backend: Ruby on Rails
- Frontend: HTML, CSS, JavaScript (with Stimulus.js)
- Streaming: Nginx-RTMP
- AI Integration: Anthropic API
- Database: SQLite
- Job Queue: Sidekiq
- Caching: Redis
- Containerization: Docker
Contributions are welcome! Please follow these guidelines:
- Fork the repository.
- Create a new branch for your feature or bug fix.
- Commit your changes.
- Push your changes to your fork.
- Submit a pull request.
For more detailed guidelines, please refer to CONTRIBUTING.md.