Pyris is an intermediary system that connects the Artemis platform with various Large Language Models (LLMs). It provides a REST API that allows Artemis to interact with different pipelines based on specific tasks.
Currently, Pyris powers Iris, a virtual AI tutor that assists students with their programming exercises on Artemis in a pedagogically meaningful way.
- Exercise Support: Empowers Iris to provide feedback on programming exercises, enhancing the learning experience for students. Iris analyzes submitted code, feedback, and build logs generated by Artemis to provide detailed insights.
- Course Content Support: Leverages RAG (Retrieval-Augmented Generation) to enable Iris to provide detailed explanations of course content, making it easier for students to understand complex topics based on instructor-provided learning materials.
- Competency Generation: Automates the generation of competencies for courses, reducing the manual effort of creating Artemis competencies.
- Python 3.12: Ensure that Python 3.12 is installed. You can verify your version with:

  ```bash
  python --version
  ```
- Docker and Docker Compose: Required for containerized deployment.

  Note: If you need to modify the local Weaviate vector database setup, please refer to the Weaviate Documentation.
-
Clone the Pyris Repository
Clone the Pyris repository into a directory on your machine:

```bash
git clone https://github.com/ls1intum/Pyris.git Pyris
```
-
Install Dependencies
Navigate to the Pyris directory and install the required Python packages:

```bash
cd Pyris
pip install -r requirements.txt
```
-
Create Configuration Files

Create an Application Configuration File
Create an `application.local.yml` file in the root directory. You can use the provided `application.example.yml` as a base:

```bash
cp application.example.yml application.local.yml
```

Example `application.local.yml`:

```yaml
api_keys:
  - token: "your-secret-token"

weaviate:
  host: "localhost"
  port: "8001"
  grpc_port: "50051"

env_vars:
```
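To illustrate what the `api_keys` section is for, here is a minimal sketch (hypothetical code, not Pyris's actual implementation) of how an incoming request token could be checked against the parsed configuration:

```python
# Hypothetical sketch: application.local.yml, once parsed, yields a structure
# like the dict below. The access pattern is illustrative, not Pyris's code.
config = {
    "api_keys": [{"token": "your-secret-token"}],
    "weaviate": {"host": "localhost", "port": "8001", "grpc_port": "50051"},
    "env_vars": {},
}

def is_valid_token(cfg: dict, token: str) -> bool:
    """Return True if the token matches any entry under api_keys."""
    return any(entry.get("token") == token for entry in cfg.get("api_keys", []))
```

Artemis authenticates against Pyris with such a token, so the value you configure here must match the one configured on the Artemis side.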
Create an LLM Config File
Create an `llm-config.local.yml` file in the root directory. You can use the provided `llm-config.example.yml` as a base:

```bash
cp llm-config.example.yml llm-config.local.yml
```
Example OpenAI Configuration:

```yaml
- id: "oai-gpt-35-turbo"
  name: "GPT 3.5 Turbo"
  description: "GPT 3.5 16k"
  type: "openai_chat"
  model: "gpt-3.5-turbo"
  api_key: "<your_openai_api_key>"
  tools: []
  capabilities:
    input_cost: 0.5
    output_cost: 1.5
    gpt_version_equivalent: 3.5
    context_length: 16385
    vendor: "OpenAI"
    privacy_compliance: false
    self_hosted: false
    image_recognition: false
    json_mode: true
```
Example Azure OpenAI Configuration:

```yaml
- id: "azure-gpt-4-omni"
  name: "GPT 4 Omni"
  description: "GPT 4 Omni on Azure"
  type: "azure_chat"
  endpoint: "<your_azure_model_endpoint>"
  api_version: "2024-02-15-preview"
  azure_deployment: "gpt4o"
  model: "gpt4o"
  api_key: "<your_azure_api_key>"
  tools: []
  capabilities:
    input_cost: 6
    output_cost: 16
    gpt_version_equivalent: 4.5  # Equivalent GPT version of the model
    context_length: 128000
    vendor: "OpenAI"
    privacy_compliance: true
    self_hosted: false
    image_recognition: true
    json_mode: true
```
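The `input_cost` and `output_cost` fields above can be read as relative token prices. Assuming they are prices per one million tokens (an assumption on my part; the config format does not fix the unit), the price of a single request could be estimated with a sketch like this:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_cost: float, output_cost: float) -> float:
    """Estimate the price of one request, assuming costs are per 1M tokens."""
    return (input_tokens * input_cost + output_tokens * output_cost) / 1_000_000

# e.g. a 16,000-token prompt with a 1,000-token answer on the
# GPT 3.5 Turbo example configuration above:
cost = request_cost(16_000, 1_000, input_cost=0.5, output_cost=1.5)
```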
Explanation of Configuration Parameters
The configuration parameters are used by Pyris's capability system to select the appropriate model for a task.
Parameter Descriptions:

- `api_key`: The API key for the model.
- `capabilities`: The capabilities of the model.
  - `context_length`: The maximum number of tokens the model can process in a single request.
  - `gpt_version_equivalent`: The equivalent GPT version of the model in terms of overall capabilities.
  - `image_recognition`: Whether the model supports image recognition (for multimodal models).
  - `input_cost`: The cost of input tokens for the model.
  - `output_cost`: The cost of output tokens for the model.
  - `json_mode`: Whether the model supports structured JSON output mode.
  - `privacy_compliance`: Whether the model complies with privacy regulations.
  - `self_hosted`: Whether the model is self-hosted.
  - `vendor`: The provider of the model (e.g., OpenAI).
  - `speed`: The model's processing speed.
- `description`: Additional information about the model.
- `id`: Unique identifier for the model across all models.
- `model`: The official name of the model as used by the vendor.
- `name`: A custom, human-readable name for the model.
- `type`: The model type, used to select the appropriate client (e.g., `openai_chat`, `azure_chat`, `ollama`).
- `endpoint`: The URL to connect to the model.
- `api_version`: The API version to use with the model.
- `azure_deployment`: The deployment name of the model on Azure.
- `tools`: The tools supported by the model.
Notes on `gpt_version_equivalent`:
The `gpt_version_equivalent` field is subjective and is used to compare the capabilities of different models, using GPT models as a reference. For example:

- GPT-4 Omni equivalent: 4.5
- GPT-4 Omni Mini equivalent: 4.25
- GPT-4 equivalent: 4
- GPT-3.5 Turbo equivalent: 3.5

Warning: Most existing pipelines in Pyris require a model with a `gpt_version_equivalent` of 4.5 or higher. It is advised to define models in the `llm-config.local.yml` file with a `gpt_version_equivalent` of 4.5 or higher.
Run the Server
Start the Pyris server:

```bash
APPLICATION_YML_PATH=./application.local.yml \
LLM_CONFIG_PATH=./llm-config.local.yml \
uvicorn app.main:app --reload
```
Access API Documentation
Open your browser and navigate to http://localhost:8000/docs to access the interactive API documentation.
Deploying Pyris using Docker ensures a consistent environment and simplifies the deployment process.
- Docker: Install Docker from the official website.
- Docker Compose: Bundled with Docker Desktop, or install it separately on Linux.
- Clone the Pyris Repository: If not already done, clone the repository:

  ```bash
  git clone https://github.com/ls1intum/Pyris.git Pyris
  cd Pyris
  ```

- Create Configuration Files: Create the `application.local.yml` and `llm-config.local.yml` files as described in the Local Development Setup section.
- Development: `docker-compose/pyris-dev.yml`
- Production with Nginx: `docker-compose/pyris-production.yml`
- Production without Nginx: `docker-compose/pyris-production-internal.yml`
Start the Containers

```bash
docker-compose -f docker-compose/pyris-dev.yml up --build
```

This command:
- Builds the Pyris application.
- Starts Pyris and Weaviate in development mode.
- Mounts local configuration files for easy modification.
Access the Application
- Application URL: http://localhost:8000
- API Docs: http://localhost:8000/docs
Prepare SSL Certificates
Place your SSL certificate (`fullchain.pem`) and private key (`priv_key.pem`) in the specified paths, or update the paths in the Docker Compose file.
Start the Containers

```bash
docker-compose -f docker-compose/pyris-production.yml up -d
```

This command:
- Pulls the latest Pyris image.
- Starts Pyris, Weaviate, and Nginx.
- Nginx handles SSL termination and reverse proxying.
Access the Application
- Application URL: https://your-domain.com
Start the Containers

```bash
docker-compose -f docker-compose/pyris-production-internal.yml up -d
```

This command:
- Pulls the latest Pyris image.
- Starts Pyris and Weaviate.
Access the Application
- Application URL: http://localhost:8000
Stop the Containers

```bash
docker-compose -f <compose-file> down
```

Replace `<compose-file>` with the appropriate Docker Compose file.
View Logs

```bash
docker-compose -f <compose-file> logs -f <service-name>
```

Example:

```bash
docker-compose -f docker-compose/pyris-dev.yml logs -f pyris-app
```
Rebuild Containers
If you've made changes to the code or configuration files:

```bash
docker-compose -f <compose-file> up --build
```
Environment Variables
You can customize settings using environment variables:

- `PYRIS_DOCKER_TAG`: Specifies the Pyris Docker image tag.
- `PYRIS_APPLICATION_YML_FILE`: Path to your `application.yml` file.
- `PYRIS_LLM_CONFIG_YML_FILE`: Path to your `llm-config.yml` file.
- `PYRIS_PORT`: Host port for the Pyris application (default is `8000`).
- `WEAVIATE_PORT`: Host port for the Weaviate REST API (default is `8001`).
- `WEAVIATE_GRPC_PORT`: Host port for the Weaviate gRPC interface (default is `50051`).
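Standard Docker Compose also reads variables from a `.env` file in the directory where you invoke it, so these settings can be kept in one place instead of being exported each time. A sketch with illustrative values (adjust paths and tag to your setup):

```
PYRIS_DOCKER_TAG=latest
PYRIS_APPLICATION_YML_FILE=./application.local.yml
PYRIS_LLM_CONFIG_YML_FILE=./llm-config.local.yml
PYRIS_PORT=8000
WEAVIATE_PORT=8001
WEAVIATE_GRPC_PORT=50051
```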
Configuration Files
Modify configuration files as needed:

- Pyris Configuration: Update `application.yml` and `llm-config.yml`.
- Weaviate Configuration: Adjust settings in `weaviate.yml`.
- Nginx Configuration: Modify Nginx settings in `nginx.yml` and related config files.
Port Conflicts
If you encounter port conflicts, change the host ports using environment variables:

```bash
export PYRIS_PORT=8080
```
Permission Issues
Ensure you have the necessary permissions for files and directories, especially for SSL certificates.
Docker Resources
If services fail to start, ensure Docker has sufficient resources allocated.