As prompting for data analysis & visualization is quite specific, a small side collection of prompts for this purpose
A few prompts that I am storing in a repo for running controlled experiments that compare and benchmark different LLMs on defined use cases
Experiments evaluating various prompting strategies and general LLM performance
First-entry documentation repository for notes from LLMs, mostly unedited
Interesting GPT outputs demonstrating specific capabilities
Large language models (LLMs) explaining themselves - and how to make best use of them (prompt engineering). Often insightful, though accuracy not guaranteed!
Experiments testing the effect of punctuation in prompts on inference quality
Experiment: two AI agents, each one thinks the other is a liar...