
LangChain Knowledge Retrieval Assistant

A repository for learning LangChain by building a generative AI application.

This web application uses Pinecone as a vectorstore and answers questions about LangChain, sourced from the official LangChain documentation.

Tech Stack

Client: Streamlit

Server Side: LangChain 🦜🔗

Vectorstore: Pinecone 🌲

Environment Variables

To run this project, you will need to add the following environment variables to your .env file

PINECONE_API_KEY
OPENAI_API_KEY
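These can go in a `.env` file at the project root, for example (placeholder values, not real keys):

```shell
# .env — replace the placeholders with your actual keys
PINECONE_API_KEY=your-pinecone-api-key
OPENAI_API_KEY=your-openai-api-key
```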

Run Locally

Clone the project

  git clone https://github.com/AviTewari/Knowledge-Retrieval-Assistant-using-LLM.git

Go to the project directory

  cd Knowledge-Retrieval-Assistant-using-LLM

Download LangChain Documentation

  mkdir langchain-docs
  wget -r -A.html -P langchain-docs  https://api.python.langchain.com/en/latest

Install dependencies

  pipenv install

Start the Streamlit server

  streamlit run main.py
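Under the hood, the app retrieves relevant documentation chunks from Pinecone and feeds them to an LLM. A rough sketch of that flow is below; names like `format_docs`, `answer_question`, and the `INDEX_NAME` variable are assumptions for illustration, not the repository's actual code, and the LangChain imports reflect the `langchain-openai`/`langchain-pinecone` packages, which may differ by version:

```python
import os


def format_docs(docs):
    """Join retrieved documents into one context string for the prompt."""
    return "\n\n".join(doc.page_content for doc in docs)


def answer_question(question: str) -> str:
    # Imports are deferred so the pure helper above stays importable
    # without PINECONE_API_KEY / OPENAI_API_KEY being set.
    from langchain_openai import OpenAIEmbeddings, ChatOpenAI
    from langchain_pinecone import PineconeVectorStore

    # Connect to an existing Pinecone index (index name is an assumption).
    vectorstore = PineconeVectorStore(
        index_name=os.environ.get("INDEX_NAME", "langchain-doc-index"),
        embedding=OpenAIEmbeddings(),
    )

    # Retrieve the most relevant documentation chunks for the question.
    docs = vectorstore.as_retriever().invoke(question)

    # Ground the LLM's answer in the retrieved context.
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{format_docs(docs)}\n\nQuestion: {question}"
    )
    return ChatOpenAI().invoke(prompt).content
```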

Running Tests

To run tests, run the following command

  pipenv run pytest .
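For illustration, a unit test that `pytest` would discover might look like this (the file name, helper, and test are hypothetical, not files from the repository):

```python
# Hypothetical test file, e.g. test_helpers.py (name is an assumption).


def dedupe_sources(urls):
    """Deduplicate and sort source URLs before displaying them in the UI."""
    return sorted(set(urls))


def test_dedupe_sources():
    # Duplicates collapse and the result comes back sorted.
    assert dedupe_sources(["b.html", "a.html", "a.html"]) == ["a.html", "b.html"]
```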