This repository contains a Python script for automatically generating videos from search queries. It combines text generation, image collection, audio synthesis, and video creation to produce informative and engaging videos.

Automated_videogen creates videos from textual descriptions by combining natural language processing, image processing, and audio synthesis. With this tool, you can quickly generate videos for educational, promotional, or informative purposes.
Before using this tool, make sure you have the following installed:
- Python 3.x
- Required Python packages (install using `pip`): `requests`, `audioread`, `PIL`, `transformers`, `gTTS`, `moviepy`, `icrawler`, and `whisper`
- A Google Custom Search API Key and Custom Search Engine ID
- Clone this repository to your local machine:

  ```bash
  git clone https://github.com/melbinjp/Automated_videogen.git
  cd Automated_videogen
  ```
- Install the required Python packages using `pip`:

  ```bash
  pip install -r requirements.txt
  ```
- Set up your Google Custom Search API Key and Custom Search Engine ID. Place these credentials in a `config.ini` file in the root directory of the project.
- Run the script by executing:

  ```bash
  python Automated_videogen.py
  ```
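The exact keys the script expects in `config.ini` are not documented here; as an illustration only, a minimal file with hypothetical section and key names might look like:

```ini
; All section and key names below are assumptions for illustration;
; match them to whatever Automated_videogen.py actually reads.
[google]
api_key = YOUR_API_KEY
cse_id = YOUR_CSE_ID

[output]
output_dir = output
max_filename_length = 50
```

Keep this file out of version control, since it contains your API credentials.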
- Configure your settings in the `config.ini` file. This file includes parameters such as the maximum filename length and file paths.
- Run `Automated_videogen.py`. It will prompt you to enter a search query.
- The script searches for interesting topics using Google Custom Search and retrieves the top result.
- It gathers media by searching Google Images for related images.
- Audio is generated from the retrieved text.
- A video is created from the images and audio.
- Subtitles are added to the video.
- The final video is saved in the `output` directory.
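Loading those settings can be sketched with Python's standard `configparser`. The `[google]` and `[output]` section and key names below are assumptions for illustration, not necessarily the script's actual schema:

```python
import configparser


def load_settings(path="config.ini"):
    """Read video-generation settings from an INI file.

    The section/key names here are hypothetical; adjust them to
    match whatever your config.ini actually defines. Fallbacks are
    returned when a section or option is missing.
    """
    config = configparser.ConfigParser()
    config.read(path)  # silently ignores a missing file
    return {
        "api_key": config.get("google", "api_key", fallback=""),
        "cse_id": config.get("google", "cse_id", fallback=""),
        "output_dir": config.get("output", "output_dir", fallback="output"),
        "max_filename_length": config.getint(
            "output", "max_filename_length", fallback=50
        ),
    }
```

Using `fallback` values means the script can still start with a partial `config.ini`, though the Google credentials must be present for the search step to work.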
- The script uses Google Custom Search to find a relevant topic.
- It collects images from the web based on the search query.
- Text is generated using a language model from the `transformers` library.
- Audio is synthesized using Google Text-to-Speech (gTTS).
- The video is created by combining images and audio using `moviepy`.
- Subtitles are generated using the Whisper library and added to the video.
- The final video is saved in the `output` directory.
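The topic-search step presumably calls the Google Custom Search JSON API at `https://www.googleapis.com/customsearch/v1`. A minimal sketch of building that request URL is below; `key`, `cx`, `q`, and `num` are real parameters of that API, while the helper function itself is hypothetical and shown separately from the network call:

```python
from urllib.parse import urlencode

# Real endpoint of the Google Custom Search JSON API.
SEARCH_ENDPOINT = "https://www.googleapis.com/customsearch/v1"


def build_search_url(api_key, cse_id, query, num=1):
    """Build a Custom Search request URL without performing the request.

    api_key -> the 'key' parameter, cse_id -> 'cx', query -> 'q';
    'num' limits the number of results returned.
    """
    params = {"key": api_key, "cx": cse_id, "q": query, "num": num}
    return SEARCH_ENDPOINT + "?" + urlencode(params)
```

Fetching the URL (for example with `requests.get(url).json()`) returns a JSON body whose `items` list holds the results; taking `items[0]` would correspond to the "top result" step described above.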
Contributions to this project are welcome. Feel free to open issues and pull requests if you have any ideas for improvement or new features.
This project is licensed under the MIT License - see the LICENSE file for details.