A comprehensive collection of examples and utilities for working with LangChain, a framework for building applications powered by large language models (LLMs).
This repository serves as a practical guide and reference for developers looking to leverage the LangChain framework. It contains numerous examples demonstrating various LangChain components and capabilities, from simple chains to complex agents and applications.
The examples cover a wide range of use cases, including:
- Building conversational agents
- Creating document-based question answering systems
- Implementing memory for contextual conversations
- Working with different output formats
- Processing and analyzing various data sources
- Building complete applications
The repository is organized into the following categories:

- **Chains**: Examples of different chain types and their usage
  - Simple Chain - Basic LLMChain example (sketch below)
  - Sequential Chain - Chaining multiple LLMs together
  - Custom Chain - Creating custom chain implementations
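The Simple Chain example boils down to wiring a prompt template to an LLM with `LLMChain`. A minimal sketch using the classic LangChain API (the prompt text and model settings are illustrative, not the exact contents of `simple_chain.py`):

```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# A prompt template with one input variable
prompt = PromptTemplate(
    input_variables=["product"],
    template="Suggest a good name for a company that makes {product}.",
)

# LLMChain ties the prompt and the model together into one callable unit
chain = LLMChain(llm=OpenAI(temperature=0.7), prompt=prompt)

print(chain.run("colorful socks"))
```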
- **Agents**: Autonomous agents that can use tools to accomplish tasks
  - Google Search Agent - Agent that can search the web
  - Wikipedia Agent - Agent that can query Wikipedia (sketch below)
  - WolframAlpha Agent - Agent for mathematical computations
  - AutoGPT - Implementation of AutoGPT using LangChain
  - BabyAGI - Implementation of BabyAGI using LangChain
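The agent examples share one pattern: an LLM plus one or more tools passed to `initialize_agent`. A hedged sketch of a Wikipedia-style agent (the tool list and question are illustrative; the `wikipedia` package must be installed for this tool):

```python
from langchain.llms import OpenAI
from langchain.agents import AgentType, initialize_agent, load_tools

llm = OpenAI(temperature=0)

# load_tools returns ready-made tool wrappers for the agent to call
tools = load_tools(["wikipedia"], llm=llm)

# A ReAct-style agent that decides when and how to use the tool
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

agent.run("Which author wrote the novel that Blade Runner is based on?")
```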
- **Memory**: Different memory implementations for conversational contexts
  - Conversation Buffer Memory - Simple memory that stores all messages (sketch below)
  - Conversation Buffer Window Memory - Memory with a sliding window
  - Conversation Summary Memory - Memory that summarizes past conversations
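Each memory class drops into `ConversationChain` the same way, so swapping the buffer memory for the window or summary variants is a one-line change. A minimal sketch, assuming the classic LangChain API:

```python
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

# ConversationBufferMemory keeps the full message history and feeds it back into each prompt
conversation = ConversationChain(
    llm=OpenAI(temperature=0),
    memory=ConversationBufferMemory(),
)

conversation.predict(input="Hi, my name is Alice.")
print(conversation.predict(input="What is my name?"))  # answered from memory
```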
- **Prompts**: Various prompting techniques and templates
  - Simple Prompt - Basic prompt usage
  - Prompt Templates - Using templates for consistent prompting
  - Few-Shot Prompting - Learning from examples (sketch below)
  - Role Prompting - Assigning roles to the LLM
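The few-shot example, for instance, assembles a `FewShotPromptTemplate` from a handful of demonstrations. A sketch (the word/antonym pairs are illustrative):

```python
from langchain.prompts import FewShotPromptTemplate, PromptTemplate

# Demonstrations the model should imitate
examples = [
    {"word": "happy", "antonym": "sad"},
    {"word": "tall", "antonym": "short"},
]

example_prompt = PromptTemplate(
    input_variables=["word", "antonym"],
    template="Word: {word}\nAntonym: {antonym}",
)

few_shot_prompt = FewShotPromptTemplate(
    examples=examples,
    example_prompt=example_prompt,
    prefix="Give the antonym of every input.",
    suffix="Word: {input}\nAntonym:",
    input_variables=["input"],
)

# The rendered prompt can then be sent to any LLM
print(few_shot_prompt.format(input="big"))
```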
- **Output Parsers**: Structuring and formatting LLM outputs
  - Structured Output Parser - Parsing outputs into structured formats
  - Pydantic Output Parser - Using Pydantic for type validation (sketch below)
  - Retry Output Parser - Handling parsing failures
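The Pydantic parser illustrates the general flow: it emits format instructions for the prompt, then validates the model's reply against a schema. A sketch (the `Joke` model is illustrative):

```python
from pydantic import BaseModel, Field
from langchain.output_parsers import PydanticOutputParser

class Joke(BaseModel):
    setup: str = Field(description="the setup of the joke")
    punchline: str = Field(description="the punchline of the joke")

parser = PydanticOutputParser(pydantic_object=Joke)

# These instructions go into the prompt so the model answers in the expected JSON shape
print(parser.get_format_instructions())

# The raw LLM text is then validated and converted into a Joke instance
joke = parser.parse('{"setup": "Why did the chicken cross the road?", "punchline": "To get to the other side."}')
print(joke.punchline)
```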
- **Text Splitters**: Dividing text into manageable chunks
  - Character Text Splitter - Splitting by character count
  - Recursive Text Splitter - Intelligent recursive splitting (sketch below)
  - Token Text Splitter - Splitting based on token count
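All of the splitters expose the same `split_text`/`split_documents` interface; the recursive splitter is usually the default choice because it prefers paragraph and sentence boundaries before cutting arbitrarily. A sketch (the input file name is a placeholder):

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=500,    # maximum characters per chunk
    chunk_overlap=50,  # overlap preserves context across chunk boundaries
)

with open("report.txt") as f:  # placeholder file name
    long_text = f.read()

chunks = splitter.split_text(long_text)
print(f"{len(chunks)} chunks")
```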
- **Document Processing**: Working with various document types
  - DeepLake Document Loader - Loading documents into DeepLake (sketch below)
  - DeepLake Document Retriever - Retrieving documents from DeepLake
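Both the loader and retriever examples go through the `DeepLake` vector store. A hedged sketch (the file name and dataset path are placeholders, and it assumes the Activeloop credentials from the `.env` file described in the installation steps below):

```python
from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import DeepLake

# Load and chunk a local document (file name is a placeholder)
docs = TextLoader("my_notes.txt").load()
docs = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

# Embed the chunks and store them in a DeepLake dataset (path is a placeholder)
db = DeepLake(
    dataset_path="hub://<your_org>/langchain-basics",
    embedding_function=OpenAIEmbeddings(),
)
db.add_documents(docs)

# Retrieve the chunks most relevant to a query
retriever = db.as_retriever()
relevant = retriever.get_relevant_documents("What does the document say about pricing?")
```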
- **Applications**: Complete applications built with LangChain
  - Chat with Any Data - A Streamlit app for chatting with your data (core pattern sketched below)
  - LLM Bot - A chatbot implementation using LangChain
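Under the hood, a chat-with-your-data app typically combines a retriever with `ConversationalRetrievalChain`. The sketch below shows the core loop only: the Streamlit UI is omitted, and a tiny in-memory FAISS index stands in for the app's real vector store, so treat it as an illustration of the pattern rather than the app's actual code:

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# A small in-memory index stands in for the user's uploaded data (requires faiss-cpu)
db = FAISS.from_texts(
    [
        "Our refund policy allows returns within 30 days.",
        "Support is available Monday to Friday, 9am-5pm.",
    ],
    OpenAIEmbeddings(),
)

# The chain rewrites follow-up questions using chat history, retrieves, then answers
qa = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(temperature=0),
    retriever=db.as_retriever(),
)

chat_history = []
question = "When can I get support?"
result = qa({"question": question, "chat_history": chat_history})
chat_history.append((question, result["answer"]))
print(result["answer"])
```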
To run the examples you will need:

- Python 3.8+
- An OpenAI API key (for most examples)
- Additional API keys depending on the example (Google, WolframAlpha, Activeloop, etc.)
To get started:

- Clone the repository:

  ```bash
  git clone https://github.com/yourusername/langchain-basics.git
  cd langchain-basics
  ```

- Install the required dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Create a `.env` file in the root directory with your API keys:

  ```
  OPENAI_API_KEY=your_openai_api_key
  GOOGLE_API_KEY=your_google_api_key
  GOOGLE_CSE_ID=your_google_cse_id
  WOLFRAM_ALPHA_APPID=your_wolfram_alpha_appid
  ACTIVELOOP_TOKEN=your_activeloop_token
  ACTIVELOOP_ORG_NAME=your_activeloop_org_name
  ```
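The example scripts are expected to pick these values up from the environment; a common pattern (assuming python-dotenv is included in requirements.txt) looks like:

```python
import os

from dotenv import load_dotenv

load_dotenv()  # reads the .env file in the project root into the process environment

openai_key = os.environ["OPENAI_API_KEY"]
```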
Each Python file in the repository is a standalone example that demonstrates a specific LangChain feature or concept. To run an example, simply execute the Python file:
```bash
python simple_chain.py
```
For the Streamlit applications, navigate to the application directory and run:
```bash
cd chat-with-any-data
streamlit run app.py
```
This project is licensed under the MIT License - see the LICENSE file for details.