A set of tools I'm experimenting with to help myself learn to write better zero-shot prompts. Warning: code is horrible and messy. 😳
I find this tool most useful after the early rapid iteration stage when I'm ready to test a few different prompt strategies. Once I’ve selected the final set of candidates, I can also use it to compare their performance across different scenarios.
- Tweak the prompts and the variables directly in prompts/prompts.json
- Run the dev server
- Enter your OpenAI API key at the top
- Enter variables for each set of scenarios
- Click Run to test the different prompt variations across different scenarios!
Variables used across different prompt variations are detected and shared, so you only need to enter them once; just make sure they use the same name, e.g. patch_url.
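The actual schema of prompts/prompts.json isn't shown here, so as a rough sketch only (the field names and the `{{variable}}` placeholder syntax are assumptions, not the tool's real format), two prompt variations sharing a `patch_url` variable might look like:

```json
{
  "prompts": [
    {
      "name": "concise",
      "template": "Summarize the patch at {{patch_url}} in one sentence."
    },
    {
      "name": "detailed",
      "template": "Walk through the patch at {{patch_url}} step by step."
    }
  ]
}
```

Because both templates reference `patch_url` by the same name, the UI would only ask for it once per scenario.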
```bash
npm run dev
# or
yarn dev
# or
pnpm dev
# or
bun dev
```
Open [http://localhost:3000](http://localhost:3000) in your browser to see the result.