Helps software development students generate project ideas based on their skills and interests. The final version of this product will run on the following stack:
- Python/FastAPI backend for prompt management and requests to the LLM provider server
- React front end with Mantine component library
- At least one of the following:
  - A fast computer with > 8GB of RAM to run the offline LLM inference server, or
  - An API key for one of the supported LLM services (see below)
- Python version 3.11 or greater installed
This package requires that you have an LLM inference server set up. Jan is a recommended desktop GUI application that runs LLM inference on your computer with no internet connection needed. Ollama is another popular option, but its API is not directly compatible with the OpenAI Python library and is therefore not supported at this time.
⚠️ The default model in JanAi, `phi-2-3b`, requires 8GB of RAM and closing all your other apps; otherwise your computer will grind to a halt.
- Settings -> Model: choose Phi-2 3B Q8 (requires 8GB of RAM, closed applications, and a fast computer)
- Settings -> Advanced: Enable API Server
You can run this package using OpenAI's, AnyScale's, or TogetherAi's servers, but you will need to obtain an API key and add it to your `.env` file. See Step 6 below.
You can get $25 in free credits with TogetherAi, and $10 in free credits from Anyscale when you sign up for their services. No payment details needed.
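The `.env` file holds simple `KEY=VALUE` pairs. A minimal sketch of how such a file can be parsed with the standard library (the variable name `OPENAI_API_KEY` is illustrative; check `.env.example` for the names this project actually expects):

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values

sample = "# cloud provider credentials\nOPENAI_API_KEY=sk-example\n"
print(parse_env(sample))  # {'OPENAI_API_KEY': 'sk-example'}
```

In practice you would read the file contents with `open(".env").read()`; packages like `python-dotenv` do the same job more robustly.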
- Install Python version >= 3.11
- `git clone` the repo and `cd` into the project directory
- Set up a virtual environment

  ```
  python3 -m venv .venv
  ```

- Activate the virtual environment (macOS/Linux)

  ```
  source .venv/bin/activate
  ```

- Install the requirements

  ```
  pip install -r requirements.txt
  ```

- (Optional) Rename `.env.example` to `.env` if you plan on using cloud LLMs

  ```
  mv .env.example .env
  ```

- (Optional) Add your OpenAI/AnyScale keys to your `.env` file
- (Optional) Open `main_cli.py` or `main.py` and switch your LLM provider from Jan to another listed in the `config.toml` file.
  ```python
  config = Config("JanAi")  # Switch "JanAi" to "OpenAI" etc.
  ```
- (Optional) To use a terminal command line, execute the CLI entrypoint

  ```
  python3 main_cli.py
  ```

- Start the FastAPI endpoint

  ```
  uvicorn main:app --reload
  ```

- Start the web service

  ```
  cd frontend
  ```

- Install the web dependencies

  ```
  npm install
  ```

- Start the web server

  ```
  npm run dev
  ```
Use a tool like Postman to test the endpoints:

```
http://127.0.0.1:8000/prompt
```

If you are developing the frontend and need a mocked response to avoid pinging an LLM inference server, use the test endpoint:

```
http://127.0.0.1:8000/test
```
JSON input:

```json
{
  "unknown_tech": ["Backend", "databases", "Java"],
  "known_tech": ["Javascript", "React", "Typescript", "Vue"],
  "topics": ["Dancing", "Cooking"]
}
```

Example JSON response:

```json
{
  "project_title": "Dance Recipe App",
  "description": "An app that combines dancing and cooking by providing users with fun dance routines while preparing recipes, creating an interactive and enjoyable cooking experience.",
  "technical_requirements": [
    "Develop the app using React and Typescript for the frontend to ensure a robust and efficient user interface.",
    "Incorporate Vue.js for interactive and visually engaging dance routine display and user interaction.",
    "Utilize JavaScript for implementing audio playback functionality and dance routine synchronization with recipe steps."
  ],
  "user_stories": [
    "As a user, I can browse through a variety of recipes and select one to prepare.",
    "As a user, I can view and follow along with a dance routine that complements the cooking process for a selected recipe.",
    "As a user, I can pause, replay, or adjust the speed of the dance routine to match my preference and cooking pace.",
    "As a user, I can see the ingredients and cooking steps for a recipe while simultaneously watching and following the dance routine.",
    "As a user, I can track my progress and save my favorite recipes and dance routines for future use."
  ]
}
```
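The response above deserializes cleanly into a small typed container. A stdlib sketch for illustration (the roadmap below mentions Pydantic models, which would replace this hand-rolled version and add real validation):

```python
import json
from dataclasses import dataclass

@dataclass
class ProjectIdea:
    """Mirrors the fields of the example JSON response."""
    project_title: str
    description: str
    technical_requirements: list
    user_stories: list

# Abbreviated version of the example response shown above.
raw = """{
  "project_title": "Dance Recipe App",
  "description": "An app that combines dancing and cooking.",
  "technical_requirements": ["React/Typescript frontend"],
  "user_stories": ["As a user, I can browse recipes."]
}"""

idea = ProjectIdea(**json.loads(raw))
print(idea.project_title)  # Dance Recipe App
```

A missing or extra key raises `TypeError` at construction time, which is a crude form of the validation Pydantic provides out of the box.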
- Build skeletal code for abstracting LLM providers and clients from OpenAI implementation
- Add multiple LLM API urls and models
- Add Pydantic models and validation
- Create toml file to store endpoint configs
- Refactor `main_cli.py` to emulate a FastAPI entrypoint
- Add Pydantic error handling
- Add retry logic for failed validation
- Convert `main_cli.py` to FastAPI `main.py`
- Add streaming support
- Add a mocking capability to ease frontend development
- Add rate limiting middleware
- Create caching schema
- Create blank React project with Vite
- Build barebones app shell with Mantine
- Create form elements with Mantine/Forms
- Create list of technologies/topics to choose from
- Implement fetch to Uvicorn API
- Dockerize the project
- Deploy via Cloudflare to a `sxflynn.net` subdomain
Anyone is welcome to contribute bug fixes and ideas by opening an issue. Unless it's a quick fix or a documentation enhancement, please describe your idea in an issue before submitting a PR.