Our project addresses the growing demand for intelligent virtual assistants by combining a customizable UI, Live2D avatars, MCP tool integration, and flexible LLM support. To achieve this, the application offers a flexible Live2D-based UI, dynamic model switching, backend status monitoring, and speech input/output.
- Architecture: Microservice-based architecture for modularity and scalability.
- Frontend:
  - Model selection and backend status monitoring.
  - Text/voice I/O with Live2D avatars for enhanced user interaction.
- Backend:
  - API Gateway for routing requests.
  - MCP Servers for handling tool calls (e.g., time, calendar, maps, RWTH services (Mensa, gym)).
- LLM Integration:
  - Supports both local and remote LLMs through Ollama (e.g., Qwen, Gemma); see the sketch after this list.
  - Lists available LLMs and supports dynamic model switching.
- CI/CD Pipeline: Automated unit testing integrated with GitLab CI/CD.
- Cross-Platform: Compatible with Windows, macOS, and Linux.
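As a rough illustration of the local/remote LLM path, the following minimal sketch (illustrative only, not code from this repository) lists the models available on an Ollama server and sends a single prompt over Ollama's REST API. The endpoint and model tag are placeholders; the remote HPC case would use the ngrok URL instead of localhost.

```python
# ollama_query_sketch.py -- illustrative only; not part of this repository.
import requests

OLLAMA_URL = "http://localhost:11434"  # default local Ollama endpoint (or the remote ngrok URL)
MODEL = "qwen2.5"                      # placeholder model tag; use any model pulled into Ollama


def list_models() -> list[str]:
    """Return the model tags currently available on the Ollama server (basis for model switching)."""
    resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=10)
    resp.raise_for_status()
    return [m["name"] for m in resp.json().get("models", [])]


def generate(prompt: str, model: str = MODEL) -> str:
    """Send a single non-streaming generation request and return the answer text."""
    resp = requests.post(
        f"{OLLAMA_URL}/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]


if __name__ == "__main__":
    print("Available models:", list_models())
    print(generate("Answer in one sentence: what is a virtual personal assistant?"))
```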
The project is organized with a clear and maintainable structure:
📦 Virtual Personal Assistant
├── 📁 docs
├── 📁 frontend
│ ├── 📁 Electron
│ │ ├── 📁 Live2D
│ │ │ ├── 📁 Framework
│ │ │ ├── 📁 public
│ │ │ └── 📁 src
│ │ ├── 📁 js
│ │ │ ├── index.js
│ │ │ ├── popup.js
│ │ │ ├── preload.js
│ │ │ ├── render-old.js
│ │ │ ├── renderer.js
│ │ │ ├── 📁 markdown
│ │ │ │ ├── marked.min.js
│ │ │ │ └── purify.min.js
│ │ ├── 📁 _test_
│ │ │ ├── index.test.js
│ │ │ ├── preload.test.js
│ │ │ └── renderer.test.js
│ │ ├── 📁 assets
│ │ │ ├── icon.ico
│ │ │ ├── icon.icns
│ │ │ └── icon.png
│ │ ├── index.html
│ │ └── package.json
├── 📁 backend
│ ├── 📁 MCPServer
│ │ ├── 📁 time-and-direction
│ │ │ ├── requirements.txt
│ │ │ └── main.py
│ │ ├── 📁 RWTH
│ │ │ ├── requirements.txt
│ │ │ ├── main.py
│ │ │ ├── gym.py
│ │ │ └── mensa.py
│ │ ├── 📁 research
│ │ │ └── main.py
│ ├── 📁 api-gateway
│ │ ├── api_gateway.py
│ │ └── 📁 client
│ │ └── vpa_client.py
├── 📁 tests
│ ├── test_api_gateway.py
│ ├── test_mcp_client.py
│ ├── test_MCPServer_research.py
│ ├── test_MCPServer_RWTH.py
│ └── test_MCPServer_time-and-direction.py
├── 📁 google-config
├── requirements.in
├── run.py
├── gitlab-ci.yml
└── README.md

To run the project, you need:

- Python 3.10+
- Node.js 20
- Electron 25
- uv 0.7.17
- LLMs deployed on a remote server
- API keys for DeepSeek
- Other dependencies listed in `requirements.txt` and `package.json`
- Clone the repository:

  ```bash
  git clone git@git.rwth-aachen.de:i5/teaching/bllma-lab/ss2025/virtual-personal-assistant.git
  ```
- Using the `uv` tool (Link): `cd Backend`, then for each of the folders `api-gateway`, `MCPServer/research`, `MCPServer/RWTH`, and `MCPServer/time-and-direction`, do the following:

  - Install the dependencies (thanks to `uv`, this step can be skipped):

    ```bash
    cd <Folder>
    uv sync
    # Running the project will also install the dependencies into a .venv:
    uv run <filename.py>
    ```
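  Each of these folders exposes a standalone MCP server through its `main.py`. As a rough, hypothetical illustration of what such an entry point can look like (a minimal sketch assuming the official MCP Python SDK and its FastMCP helper, not the repository's actual code; the server and tool names below are made up):

  ```python
  # sketch_mcp_server.py -- hypothetical minimal MCP server; run with: uv run sketch_mcp_server.py
  from datetime import datetime, timezone

  from mcp.server.fastmcp import FastMCP  # assumed dependency: the `mcp` Python SDK

  # Server name shown to MCP clients; purely illustrative.
  mcp = FastMCP("time-sketch")


  @mcp.tool()
  def current_time() -> str:
      """Return the current UTC time as an ISO 8601 string."""
      return datetime.now(timezone.utc).isoformat()


  if __name__ == "__main__":
      # The default transport is stdio, which MCP clients typically spawn as a subprocess.
      mcp.run()
  ```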
- Electron setup (pinned toolchain: `"node": "23.4.0"`, `"npm": "11.3.0"`). Install Volta:

  Linux/macOS:

  ```bash
  curl https://get.volta.sh | bash
  ```

  Windows:

  ```bash
  winget install Volta.Volta
  ```

  After this step, restart VSCode, open a new terminal, and test these commands:

  ```bash
  volta --version
  node -v
  npm -v
  ```

  The setup succeeded if you see version numbers instead of errors. Then install and start the frontend:

  ```bash
  cd FrontEnd\Electron
  npm install
  npm run start
  ```
- Log in to the HPC account and open a terminal:

  ```bash
  cd /hpcwork/yw701564/
  ./startOllama.sh
  ```

- In the same location, start a new terminal and run

  ```bash
  ./ngrok http 11434
  ```

  to get the ngrok URL.

- Start the HPC server and obtain the ngrok URL (for example `https://207f-134-61-46-206.ngrok-free.app/`).

- Start the API Gateway, the MCP Servers, and the Electron frontend with the `run.py` script:

  ```bash
  uv run run.py
  ```

  After starting, the URL of the remote HPC server or of a local Ollama instance can be entered into the terminal:

  ```
  Enter the ngrok URL without '/' at the end : https://207f-134-61-46-206.ngrok-free.app
  ```

  or

  ```
  Enter the ngrok URL without '/' at the end : http://localhost:11434
  ```
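For orientation, the sketch below shows one way such a launcher could be structured. It is a hypothetical illustration only, not the repository's `run.py`; the component paths, the `OLLAMA_URL` environment variable, and the prompt ordering are assumptions.

```python
# launcher_sketch.py -- hypothetical launcher, not the repository's run.py.
import os
import subprocess
import sys


def main() -> None:
    # Prompt for the Ollama endpoint (remote ngrok URL or local instance).
    url = input("Enter the ngrok URL without '/' at the end : ").strip().rstrip("/")
    env = {**os.environ, "OLLAMA_URL": url}  # assumed mechanism for handing the URL to the gateway

    # Assumed entry points; the real paths and arguments live in run.py.
    procs = [
        subprocess.Popen(["uv", "run", "api_gateway.py"], cwd="backend/api-gateway", env=env),
        subprocess.Popen(["uv", "run", "main.py"], cwd="backend/MCPServer/time-and-direction"),
        subprocess.Popen(["npm", "run", "start"], cwd="frontend/Electron"),
    ]
    try:
        for proc in procs:
            proc.wait()
    except KeyboardInterrupt:
        for proc in procs:
            proc.terminate()
        sys.exit(0)


if __name__ == "__main__":
    main()
```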
Contributions are welcome! Please follow these steps:
- Fork the repository.
- Create a new branch for your feature or bug fix.
- Make your changes and commit them with clear, descriptive commit messages.
- Push your branch to your fork.
- Submit a merge request/pull request to the `main` branch of the original repository.
- This application uses the awesome Electron framework.
- It utilizes the power of OpenAI, Ollama, and open-source LLMs for natural language processing.
- It integrates Live2D for interactive avatars.
- Nguyen, Toan
  - Group Coordinator & Backend Development Lead
  - Designed the project's microservice architecture and decoupled the codebase to fit it.
  - Developed the backend components:
    - API Gateway
    - MCP Client, LLM Client, and agentic workflow for tool calls
    - Prompt engineering and text processing to support the agentic workflow
    - Backbone design and implementation of the MCP Server tools logic
  - Set up build & deployment: Dockerized the backend and bundled the Live2D + Electron app.
  - Midterm and final presentations and demo
- Krümpelmann, Finnegan
  - Frontend Development Lead
  - Designed and implemented the frontend interface and core functionalities.
  - Integrated interactive Live2D characters and enhanced user interaction.
  - Implemented emotion changes and integrated them into the agentic workflow.
  - Developed ASR (Automatic Speech Recognition) and TTS (Text-to-Speech) features to improve usability.
  - Integrated all frontend features for a cohesive visual experience.
  - Midterm and final presentations
- Zhou, Dengyang
  - LLM Deployment & Integration
  - Deployed large language models (LLMs) on remote servers and implemented communication between the server and the local application.
  - Contributed to the application's connection logic and ensured robust end-to-end communication.
  - Implemented the RWTH MCP server with functions to query the Mensa menu by date, type, or ingredient and to query the gym occupancy.
  - Midterm and final presentations
  - Main author of the repository READMEs and technical report
- A demonstration video of the application can be found in this repository (vpa-demo.mp4).