More Agents Is All You Need

We present the first systematic study of the scaling property of raw agents instantiated by LLMs. We find that performance scales with the number of agents, using the simple(st) method of sampling and voting. Our method is called Agent Forest, as a tribute to the classic Random Forest.

Preliminary Setup

Our framework supports a range of Large Language Models, including the GPT series hosted on Azure and various open-source LLMs. To integrate these models into our experimental setup, users must define the necessary API keys and model deployment IP addresses as environment variables:

# For GPT series LLMs hosted on Azure
export OPENAI_KEY="YOUR_OPENAI_KEY"
export OPENAI_IP="YOUR_OPENAI_IP"

# For open-source LLMs
export LLM_IP="YOUR_OPENSOURCED_LLM_IP"
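A quick way to confirm the variables are visible to the experiment scripts (a standard shell check, not part of the framework itself):

# List the relevant variables in the current environment
env | grep -E 'OPENAI_KEY|OPENAI_IP|LLM_IP'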

Before running the HumanEval code-generation experiments, install the required dependencies. Installation instructions are available at the following link: human-eval.
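For reference, a minimal installation sketch, assuming the link points to OpenAI's official human-eval repository:

# Clone and install the human-eval evaluation harness
git clone https://github.com/openai/human-eval
pip install -e human-eval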

Datasets

The datasets utilized in our experiments are located within the ./dataset directory:

  • Chess problems for move validation are in ./dataset/chess_dataset.
  • The Massive Multitask Language Understanding (MMLU) problems are in ./dataset/mmlu_dataset.
  • Mathematical problems are found in ./dataset/math_dataset.
  • Grade School Math 8K (GSM8K) problems are located in ./dataset/gsm_dataset.
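Assuming the paths above, the expected layout can be confirmed from the repository root:

# List the dataset subdirectories named in this section
ls ./dataset
# chess_dataset  gsm_dataset  math_dataset  mmlu_dataset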

Running Experiments

To execute the experiments, navigate to the ./script directory and use the provided shell script: sh run.sh {AGENT_NUM} {MODEL} {QTYPE}, where:

  • {AGENT_NUM} is the number of LLM agents to instantiate.
  • {MODEL} specifies the LLM to use; both the OpenAI GPT series and open-source LLMs are supported.
  • {QTYPE} denotes the type of questions to be processed, with options including MATH, GSM, MMLU, Chess, and HumanEval.
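For example, the following invocation runs the sampling-and-voting experiment with 10 agents on GSM questions (the model identifier here is a placeholder; use whichever deployment you configured above):

cd ./script
# 10 agents, a GPT-series model, GSM question type
sh run.sh 10 gpt-4 GSM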

Citation

If you find the paper or the source code useful for your projects, please cite the following BibTeX entry:

@article{li2024more,
      title={More agents is all you need},
      author={Li, Junyou and Zhang, Qin and Yu, Yangbin and Fu, Qiang and Ye, Deheng},
      journal={Transactions on Machine Learning Research},
      year={2024},
      url={https://openreview.net/forum?id=bgzUSZ8aeg}
}
