This repository contains the official implementation of the DriveLLM system, which leverages large language model (LLM) capabilities to enhance autonomous-driving decision making. The LLM service backend is implemented with FastAPI and LangChain. The LLM-to-autonomous-driving bridge targets ROS 1 and is designed for a modified version of the Autoware stack.
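For orientation, here is a minimal sketch of what the /query endpoint could look like in FastAPI. The field names follow the curl example below, but the response shape and handler body are illustrative assumptions, not the actual DriveLLM implementation.

```python
# Minimal sketch of the /query endpoint. Field names follow the curl
# example below; the response shape and handler body are assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Message(BaseModel):
    role: str
    perception: str
    system_health: str
    weather: str
    location: str
    vehicle_state: str
    control_command: str
    command: str

class QueryRequest(BaseModel):
    messages: list[Message]

@app.post("/query")
async def query(request: QueryRequest):
    # In the real service this is where the LangChain pipeline would be
    # invoked; here we simply echo the passenger command back.
    latest = request.messages[-1]
    return {"response": f"received command: {latest.command}"}
```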
This repository also contains a 20K-sample dataset generated using self-instruct and designed for general driving applications. The relevant code for dataset generation, cleaning, and formatting is here.
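As a rough sketch of how such a self-instruct dataset is typically consumed, the snippet below loads a JSON-lines file. The file name and the instruction/input/output record layout are assumptions for illustration; the actual schema is defined by the dataset code linked above.

```python
# Hypothetical loader for the self-instruct dataset. The file name and
# record fields are assumptions, not the repository's actual schema.
import json

def load_dataset(path: str) -> list[dict]:
    """Load a JSON-lines dataset where each line is one instruction record."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            records.append(json.loads(line))
    return records

samples = load_dataset("driving_dataset.jsonl")  # hypothetical file name
print(samples[0])  # e.g. {"instruction": ..., "input": ..., "output": ...}
```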
Note: change http://localhost:9000/query to http://localhost:8300/query if you are running the service in Docker.
Send requests to the /query endpoint. Here is an example of using curl to debug the service (running locally):
curl -N -X POST 'http://localhost:9000/query' \
  -H 'Content-Type: application/json' \
  -d '{
        "messages": [
          {
            "role": "user",
            "perception": "perception",
            "system_health": "system_health",
            "weather": "weather",
            "location": "location",
            "vehicle_state": "vehicle_state",
            "control_command": "control_command",
            "command": "passenger_command"
          }
        ]
      }'
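The same request can be issued from Python. The sketch below mirrors the curl payload exactly, assuming the service is running locally on port 9000; since curl's -N flag hints that the endpoint may stream its reply, the body is read incrementally rather than as one JSON document.

```python
# Python equivalent of the curl example above; the payload mirrors the
# curl body field for field.
import requests

payload = {
    "messages": [
        {
            "role": "user",
            "perception": "perception",
            "system_health": "system_health",
            "weather": "weather",
            "location": "location",
            "vehicle_state": "vehicle_state",
            "control_command": "control_command",
            "command": "passenger_command",
        }
    ]
}

# curl's -N flag suggests the endpoint may stream, so read the body
# incrementally instead of assuming a single JSON document.
with requests.post("http://localhost:9000/query", json=payload, stream=True) as resp:
    resp.raise_for_status()
    for chunk in resp.iter_content(chunk_size=None, decode_unicode=True):
        print(chunk, end="", flush=True)
```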
Build the Docker image and run the container:
docker-compose build
docker-compose up
If you need to completely shut down your environment or clean up your resources:
docker-compose down
This stops the running containers and removes them, along with their associated networks. Pass -v to also remove named volumes, or --rmi all to also remove the images.
Note: when running in Docker, the service will be accessible at http://localhost:8300.
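For reference, a docker-compose.yml consistent with the ports mentioned above would map host port 8300 to container port 9000, where Uvicorn listens. This is a sketch of a plausible configuration, not necessarily the file shipped in this repository.

```yaml
# Plausible compose configuration (sketch): host port 8300 is mapped to
# container port 9000, where Uvicorn listens, matching the URLs above.
# Service name and env_file usage are assumptions.
services:
  drivellm:
    build: .
    ports:
      - "8300:9000"
    env_file:
      - .env
```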
- Clone this repository
- Install dependencies:
pip install -r requirements.txt
- Create a .env file and add the following environment variables (a sketch of how the service might read this value follows these steps):
LOGGING_LEVEL=10 # 10-DEBUG, 20-INFO, 30-WARN, 40-ERROR
- Run the application using Uvicorn:
uvicorn main:app --host 0.0.0.0 --port 9000
- The service will now be accessible at http://localhost:9000.
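As referenced in the .env step above, here is a minimal sketch of how the service might consume LOGGING_LEVEL. The variable name and its numeric values come from the example above; the loading code itself is an assumption about how the service is wired, not the actual source.

```python
# Sketch of reading LOGGING_LEVEL from the .env file; the loading code
# is an assumption, not the repository's actual implementation.
import logging
import os

from dotenv import load_dotenv  # provided by the python-dotenv package

load_dotenv()  # reads key=value pairs from .env into the environment
level = int(os.getenv("LOGGING_LEVEL", "20"))  # default to INFO (20)
logging.basicConfig(level=level)
logging.getLogger(__name__).debug("only visible when LOGGING_LEVEL=10")
```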