This repository contains two Python scripts for deploying language models and building chat applications with Modal, LangChain, FastAPI, vLLM, and Hugging Face Transformers. The scripts demonstrate how to set up and run a Large Language Model (LLM) and how to integrate a chat application with streaming responses.
- demo_langchain_hf_vllm.py: Sets up and runs Mistral-7B-Instruct-v0.1, an LLM from Mistral AI, using LangChain, vLLM, and the Hugging Face Hub. It covers downloading the model, setting up the environment, and running inference.
- chat_token_streaming.py: Implements a chat application using the OpenAI API, FastAPI, and LangChain. It includes streaming response support and CORS middleware setup for a web application.
- Python 3.x
- Pip
- Modal SDK
- Hugging Face's Hub and Transformers library
- OpenAI SDK
- FastAPI
- Pydantic
To use these scripts, you need to install the necessary dependencies. Run the following command to install them:
```shell
pip install modal huggingface_hub transformers torch openai fastapi pydantic
```
- Set your Hugging Face token in the `HUGGINGFACE_TOKEN` environment variable.
- Run the script to download the model and set up the environment.
- The script can be executed to answer predefined questions using the language model.
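The download step above can be sketched roughly as follows. This is a minimal illustration, not the script's actual code: the `download_model` helper and the local `model` cache directory are hypothetical names.

```python
import os

# Model ID as named in the repository description.
MODEL_NAME = "mistralai/Mistral-7B-Instruct-v0.1"

def download_model(cache_dir="model"):
    """Hypothetical helper: fetch the model weights from the Hugging Face Hub
    using the token stored in the HUGGINGFACE_TOKEN environment variable."""
    from huggingface_hub import snapshot_download  # heavy dependency, imported lazily

    token = os.environ["HUGGINGFACE_TOKEN"]
    return snapshot_download(MODEL_NAME, local_dir=cache_dir, token=token)
```

Once the weights are cached locally, the script can point vLLM at the download directory instead of re-fetching them on every run.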
- Set the necessary environment variables (if any).
- Run the script to start the FastAPI server.
- Use the `/generate` endpoint to interact with the chat application.
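As an illustration, a minimal standard-library client for such an endpoint might look like the sketch below. The URL, port, and JSON payload shape are assumptions for the example, not taken from the script; adjust them to match the server's actual request schema.

```python
import json
import urllib.request

def stream_generate(prompt, url="http://localhost:8000/generate"):
    """Hypothetical client: POST a prompt and yield streamed response chunks."""
    payload = json.dumps({"prompt": prompt}).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        # Iterating over the response object yields data as it arrives,
        # so tokens can be displayed before the full reply is complete.
        for chunk in resp:
            yield chunk.decode()
```

Because the function is a generator, a caller can print tokens incrementally, e.g. `for token in stream_generate("Hello"): print(token, end="")`.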
Prince Canuma - An MLOps Engineer and founder of Kulissiwa. Previously, he worked as an ML Engineer at neptune.ai. He is passionate about MLOps, Deep Learning, and Software Engineering.
Contributions to this project are welcome. Please follow the standard procedures for submitting issues and pull requests.
This project is licensed under the MIT License - see the LICENSE file for details.