Demo video: chat.with.multi.resourse.mp4
This chatbot application is built with Python and several libraries, including Streamlit, LangChain, and LangChain Community. It lets users upload PDF files or provide URLs of PDFs and then chat with the extracted text through a language model.
- Upload multiple PDF files or provide URLs of PDFs.
- Extract text from the uploaded PDF files and URLs.
- Store the extracted text in a vector store for efficient retrieval.
- Perform a Wikipedia search based on the user's question.
- Perform an Arxiv search based on the user's question.
- Route the user query to the most relevant data source (Wikipedia, Arxiv, or the vector store), as sketched below.
- Generate a response based on the documents retrieved from the selected data source.
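The routing step can be implemented as a small structured-output router on top of the Groq LLM. The sketch below only illustrates that idea and is not the repository's actual code: the class name `RouteQuery`, the prompt wording, and the Groq model id are assumptions.

```python
# Minimal routing sketch (illustrative only; names and model id are assumptions).
from typing import Literal

from langchain_core.prompts import ChatPromptTemplate
from langchain_groq import ChatGroq
from pydantic import BaseModel, Field


class RouteQuery(BaseModel):
    """Pick the data source best suited to answer the user's question."""

    datasource: Literal["vectorstore", "wiki_search", "arxiv_search"] = Field(
        description="Uploaded-PDF vector store, Wikipedia search, or Arxiv search."
    )


# Requires GROQ_API_KEY in the environment; any Groq-hosted chat model should work.
llm = ChatGroq(model="llama-3.1-8b-instant")

router = (
    ChatPromptTemplate.from_messages(
        [
            ("system", "Route the question to wiki_search, arxiv_search, or vectorstore."),
            ("human", "{question}"),
        ]
    )
    | llm.with_structured_output(RouteQuery)
)

print(router.invoke({"question": "What does the uploaded paper say about RAG?"}).datasource)
```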
- Clone the repository:
- Navigate to the project directory:
- Install the required libraries:
- Run the application (typical commands for all four steps are sketched below):
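The steps above usually map to commands like the following; the repository URL, directory name, requirements file, and entry-point script are placeholders, since the README does not name them.

```bash
# Placeholder names -- substitute the actual repository URL and script name.
git clone <repository-url>
cd <project-directory>
pip install -r requirements.txt
streamlit run app.py
```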
- Create a `.env` file in the project directory and add your Astra DB, Groq API, and other necessary environment variables (a sample layout is sketched below):
- Run the application:
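The exact keys the code reads are not listed in this README; the layout below follows a common convention for Astra DB (via cassio) and Groq and should be treated as an assumption.

```
# Hypothetical variable names -- check the application code for the exact keys.
ASTRA_DB_APPLICATION_TOKEN=AstraCS:...
ASTRA_DB_ID=<your-astra-database-id>
GROQ_API_KEY=gsk_...
# Add any other variables the application expects (e.g. an embeddings API token).
```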
- Open your web browser and navigate to http://localhost:8501.
- Upload PDF files or provide URLs of PDFs.
- Ask questions using the chatbot interface.
- The chatbot will extract text from the uploaded PDFs, store it in the Cassandra database, and use the LLM to generate responses.
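For reference, the extract-embed-store part of that flow typically looks something like the sketch below. It is a hedged illustration, not the project's actual code: the embedding model, table name, and PDF file name are assumptions, and it presumes cassio is used to connect LangChain's Cassandra vector store to Astra DB.

```python
# Ingestion sketch (illustrative; table name, embedding model, and file are assumptions).
import os

import cassio
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Cassandra
from langchain_text_splitters import RecursiveCharacterTextSplitter
from pypdf import PdfReader

# Connect to Astra DB (serverless Cassandra) using the .env credentials.
cassio.init(
    token=os.environ["ASTRA_DB_APPLICATION_TOKEN"],
    database_id=os.environ["ASTRA_DB_ID"],
)

# Extract raw text from a PDF and split it into overlapping chunks.
text = "".join(page.extract_text() or "" for page in PdfReader("example.pdf").pages)
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_text(text)

# Embed the chunks and store them in a Cassandra-backed vector table.
vector_store = Cassandra(
    embedding=HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2"),
    table_name="pdf_chunks",
    session=None,
    keyspace=None,
)
vector_store.add_texts(chunks)

# At answer time, the retriever supplies relevant chunks to the LLM.
docs = vector_store.as_retriever().invoke("What is this paper about?")
```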
Contributions are welcome! If you find any bugs or have suggestions for improvements, please open an issue or submit a pull request.