One reference on LLM Agents playing Trust Games #4

Open
canyuchen opened this issue Mar 26, 2024 · 0 comments
Comments


canyuchen commented Mar 26, 2024

Congratulations on your recent solid survey paper and impressive paper list!

We have a related paper on LLM Agents playing Trust Games.

Can Large Language Model Agents Simulate Human Trust Behaviors?

  • arXiv: https://arxiv.org/abs/2402.04559
  • code: https://github.com/camel-ai/agent-trust
  • project website: https://www.camel-ai.org/research/agent-trust
  • We study the trust behaviors of LLM agents under the framework of Trust Games and find high behavioral alignment between LLM agents and humans, particularly for GPT-4, indicating the feasibility of simulating human trust behaviors with LLM agents.
  • abstract: Large Language Model (LLM) agents have been increasingly adopted as simulation tools to model humans in applications such as social science. However, one fundamental question remains: can LLM agents really simulate human behaviors? In this paper, we focus on one of the most critical behaviors in human interactions, trust, and aim to investigate whether or not LLM agents can simulate human trust behaviors. We first find that LLM agents generally exhibit trust behaviors, referred to as agent trust, under the framework of Trust Games, which are widely recognized in behavioral economics. Then, we discover that LLM agents can have high behavioral alignment with humans regarding trust behaviors, particularly for GPT-4, indicating the feasibility of simulating human trust behaviors with LLM agents. In addition, we probe into the biases in agent trust and the differences in agent trust towards agents and humans. We also explore the intrinsic properties of agent trust under conditions including advanced reasoning strategies and external manipulations. We further offer important implications of our discoveries for various scenarios where trust is paramount. Our study provides new insights into the behaviors of LLM agents and the fundamental analogy between LLMs and humans.
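
For readers unfamiliar with the setup, below is a minimal sketch of a single Trust Game round as studied in behavioral economics. It is not taken from the agent-trust repository; the `ask_agent` helper is a hypothetical placeholder for an actual LLM call, and the payoff rules shown are the standard textbook version of the game.

```python
# Minimal sketch of one Trust Game round (standard behavioral-economics setup).
# NOT code from the agent-trust repo; ask_agent() is a hypothetical stand-in
# for querying an LLM agent and parsing a dollar amount from its reply.

def ask_agent(prompt: str) -> int:
    """Hypothetical placeholder for an LLM call; returns a dollar amount."""
    # Fixed toy policy so the sketch runs without any API key or model.
    return 5

def trust_game_round(endowment: int = 10, multiplier: int = 3) -> dict:
    """One round: the trustor sends part of its endowment, the sent amount is
    multiplied, and the trustee decides how much to return."""
    sent = ask_agent(
        f"You have ${endowment}. Any amount you send to the other player is "
        f"multiplied by {multiplier}. How much do you send? Reply with a number."
    )
    sent = max(0, min(sent, endowment))        # clamp to a valid amount
    received = sent * multiplier               # trustee receives the multiplied sum
    returned = ask_agent(
        f"You received ${received}. How much do you return to the sender? "
        "Reply with a number."
    )
    returned = max(0, min(returned, received)) # clamp to a valid amount
    return {
        "trustor_payoff": endowment - sent + returned,
        "trustee_payoff": received - returned,
        "amount_sent": sent,          # amount sent is the usual measure of trust
        "amount_returned": returned,  # amount returned measures trustworthiness
    }

if __name__ == "__main__":
    print(trust_game_round())
```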
@canyuchen canyuchen changed the title One reference on LLMs playing Trust Games One reference on LLM Agents playing Trust Games Mar 26, 2024