πŸ¦œοΈπŸ”— Chat LangChain

This repo implements a chatbot focused on question answering over the LangChain documentation. It is built with LangChain, LangGraph, and Next.js.

Deployed version: chat.langchain.com

Looking for the JS version? Click here.

The app leverages LangChain and LangGraph's streaming support and async API to update the page in real time for multiple users.
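The streaming pattern described above can be sketched with a plain async generator. This is a hedged, minimal illustration (the function names `stream_answer` and `main` are hypothetical, not from the repo): tokens are yielded one at a time so the UI can render partial answers while the event loop stays free to serve other users.

```python
import asyncio
from typing import AsyncIterator, List

async def stream_answer(tokens: List[str]) -> AsyncIterator[str]:
    # Yield tokens one at a time, as a streaming LLM endpoint would.
    for tok in tokens:
        await asyncio.sleep(0)  # yield control so other coroutines can run
        yield tok

async def main() -> str:
    rendered = ""
    async for tok in stream_answer(["Lang", "Chain", " docs"]):
        rendered += tok  # the real app updates the page incrementally here
    return rendered
```

In the real app the token source is a LangGraph streaming run rather than a fixed list, but the consumption pattern on the server side is the same.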

Running locally

This project is now deployed using LangGraph Cloud, which means you won't be able to run it locally (or without a LangGraph Cloud account). If you want to run it WITHOUT LangGraph Cloud, please use the code and documentation from this branch.

Note

This branch does not have the same set of features.

πŸ“š Technical description

There are two components: ingestion and question-answering.

Ingestion has the following steps:

  1. Pull HTML from the documentation site as well as the GitHub codebase
  2. Load HTML with LangChain's RecursiveURLLoader and SitemapLoader
  3. Split documents with LangChain's RecursiveCharacterTextSplitter
  4. Create a vectorstore of embeddings using LangChain's Weaviate vectorstore wrapper (with OpenAI's embeddings)
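To make step 3 concrete, here is a simplified from-scratch sketch of the idea behind recursive character splitting (this is an illustration of the technique, not the actual `RecursiveCharacterTextSplitter` implementation): try the coarsest separator first, and recurse with finer separators on any piece that is still too large.

```python
def recursive_split(text, separators=("\n\n", "\n", " "), chunk_size=200):
    """Simplified recursive character splitting: split on the coarsest
    separator, merge pieces up to chunk_size, and recurse with finer
    separators on oversized pieces."""
    if len(text) <= chunk_size:
        return [text]
    if not separators:
        # No separators left: fall back to hard cuts at chunk_size.
        return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    sep, rest = separators[0], separators[1:]
    chunks, current = [], ""
    for piece in text.split(sep):
        candidate = piece if not current else current + sep + piece
        if len(candidate) <= chunk_size:
            current = candidate          # keep merging small pieces
        else:
            if current:
                chunks.append(current)   # flush the chunk built so far
            if len(piece) > chunk_size:
                # Piece alone is too big: recurse with finer separators.
                chunks.extend(recursive_split(piece, rest, chunk_size))
                current = ""
            else:
                current = piece
    if current:
        chunks.append(current)
    return chunks
```

Splitting on paragraph boundaries first keeps semantically related text together, which improves the quality of the embeddings built in step 4.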

Question-Answering has the following steps:

  1. Given the chat history and new user input, determine what a standalone question would be using an LLM.
  2. Given that standalone question, look up relevant documents from the vectorstore.
  3. Pass the standalone question and relevant documents to the model to generate and stream the final answer.
  4. Generate a trace URL for the current chat session, as well as the endpoint to collect feedback.
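Steps 1–3 above can be sketched as a single function. This is a hedged outline only: `llm` and `retrieve` are hypothetical stand-ins for the real LangChain chat model and Weaviate retriever, and the prompts are illustrative, not the ones used in the repo.

```python
from typing import Callable, List, Tuple

# Hypothetical stand-ins: llm maps a prompt to a completion; retrieve maps
# a question to a list of relevant document strings.
LLM = Callable[[str], str]

def answer(chat_history: List[Tuple[str, str]], user_input: str,
           llm: LLM, retrieve: Callable[[str], List[str]]) -> str:
    # 1. Condense the history + new input into a standalone question.
    history = "\n".join(f"{role}: {msg}" for role, msg in chat_history)
    standalone = llm(
        f"Given the conversation:\n{history}\n"
        f"Rephrase the follow-up '{user_input}' as a standalone question."
    )
    # 2. Look up relevant documents for the standalone question.
    docs = retrieve(standalone)
    # 3. Generate the final answer from the question plus retrieved context.
    context = "\n\n".join(docs)
    return llm(f"Answer '{standalone}' using only this context:\n{context}")
```

The condense step matters because follow-up questions like "how do I configure it?" are only meaningful with the history folded in; retrieval over the raw follow-up would miss the relevant documents.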

Documentation

Looking to use or modify this Use Case Accelerant for your own needs? We've added a few docs to aid with this:

  • Concepts: A conceptual overview of the different components of Chat LangChain. Goes over features like ingestion, vector stores, query analysis, etc.
  • Modify: A guide on how to modify Chat LangChain for your own needs. Covers the frontend, backend and everything in between.
  • LangSmith: A guide on adding robustness to your application using LangSmith. Covers observability, evaluations, and feedback.
  • Production: Documentation on preparing your application for production usage. Covers security considerations and more.
  • Deployment: How to deploy your application to production. Covers setting up production databases, deploying the frontend, and more.
