This project combines LangChain and Exa web search to build a simple web search assistant. Using large language models (LLMs) together with Exa's search capabilities, it retrieves relevant documents and generates cited answers to user queries.
- Exa-Web Search Integration: Harnesses the ExaSearchRetriever for deep web searches, fetching relevant documents based on user queries.
- LangChain for LLMs: Utilizes LangChain for seamless interaction with LLMs for content generation and summarization.
- Source Citing: Automatically cites sources in the generated responses by formatting document highlights and URLs into XML for LLM consumption.
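The source-citing step above can be illustrated with a small sketch. This is not the project's actual code: the `Doc` stand-in and the tag names (`<source>`, `<url>`, `<highlight>`) are assumptions, standing in for the LangChain `Document` objects returned by `ExaSearchRetriever` with highlights enabled.

```python
from dataclasses import dataclass, field

# Hypothetical stand-in for retrieved documents; the real chain works with
# LangChain Document objects whose metadata carries Exa highlights and URLs.
@dataclass
class Doc:
    metadata: dict = field(default_factory=dict)

def format_docs_as_xml(docs):
    """Format each document's highlights and URL into XML-style tags
    so the LLM can quote and cite sources in its answer."""
    chunks = []
    for doc in docs:
        url = doc.metadata.get("url", "unknown source")
        highlights = doc.metadata.get("highlights", "no highlights")
        chunks.append(
            f"<source>\n<url>{url}</url>\n<highlight>{highlights}</highlight>\n</source>"
        )
    return "\n".join(chunks)

docs = [Doc(metadata={"url": "https://example.com", "highlights": "Key fact."})]
print(format_docs_as_xml(docs))
```

The formatted string is then interpolated into the LLM prompt, so the model sees each highlight alongside the URL it came from.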
Set up the necessary environment variables (e.g., the API keys for your search and LLM providers).
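For example, the Exa retriever typically reads its key from `EXA_API_KEY`, and an OpenAI-backed LLM reads `OPENAI_API_KEY`; the exact variable names depend on the providers you configure:

```shell
# Assumed variable names -- check your provider integrations' docs
export EXA_API_KEY=<your-exa-api-key>
export OPENAI_API_KEY=<your-openai-api-key>
```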
To start the application locally, run the following command in your terminal:

```shell
python server.py
```

Install the LangChain CLI if you haven't yet:

```shell
pip install -U langchain-cli
```

```shell
# adding packages from
# https://github.com/langchain-ai/langchain/tree/master/templates
langchain app add $PROJECT_NAME

# adding custom GitHub repo packages
langchain app add --repo $OWNER/$REPO

# or with whole git string (supports other git providers):
# langchain app add git+https://github.com/hwchase17/chain-of-verification

# with a custom api mount point (defaults to `/{package_name}`)
langchain app add $PROJECT_NAME --api_path=/my/custom/path/rag
```

Note: you can remove packages by their API path:

```shell
langchain app remove my/custom/path/rag
```

LangSmith will help us trace, monitor, and debug LangChain applications. LangSmith is currently in private beta; you can sign up here. If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project>  # if not specified, defaults to "default"
```

To spin up a LangServe instance, run:

```shell
langchain serve
```

This project folder includes a Dockerfile that allows you to easily build and host your LangServe app.
To build the image, run:

```shell
docker build . -t my-langserve-app
```

If you tag your image with something other than `my-langserve-app`, note it for use in the next step.
To run the image, include any environment variables your application needs. In the example below, we inject the `OPENAI_API_KEY` environment variable with the value set in the local environment (`$OPENAI_API_KEY`). We also expose port 8080 with the `-p 8080:8080` option.
```shell
docker run -e OPENAI_API_KEY=$OPENAI_API_KEY -p 8080:8080 my-langserve-app
```
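Once the container is running, the app is reachable on port 8080. LangServe exposes a standard `/invoke` endpoint under each package's API path; the `/rag` path and the input payload below are assumptions, so adjust them to the `--api_path` and input schema your chain actually uses:

```shell
# /rag is a hypothetical mount point -- replace with your chain's api path
curl -X POST http://localhost:8080/rag/invoke \
  -H "Content-Type: application/json" \
  -d '{"input": "What is LangChain?"}'
```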