Update TRANSPARENCY_FAQS.md
Co-authored-by: Mark Sze <66362098+marklysze@users.noreply.github.com>
qingyun-wu and marklysze authored on Nov 22, 2024
1 parent 4fd9c32 commit 22a8aff
Showing 1 changed file with 1 addition and 1 deletion.
TRANSPARENCY_FAQS.md

@@ -51,7 +51,7 @@ Additionally, AG2's multi-agent framework may amplify or introduce additional ri
 - Security & unintended consequences: The use of multi-agent conversations and automation in complex tasks may have unintended consequences. In particular, allowing LLM agents to make changes in external environments through code execution or function calls, such as installing packages, could pose significant risks. Developers should carefully consider the potential risks and ensure that appropriate safeguards are in place to prevent harm or negative outcomes, including keeping a human in the loop for decision making.

 ## What operational factors and settings allow for effective and responsible use of AG2?
-- Code execution: AG2 recommends using docker containers so that code execution can happen in a safer manner. Users can use function call instead of free-form code to execute pre-defined functions only. That helps increase the reliability and safety. Users can customize the code execution environment to tailor to their requirements.
+- Code execution: AG2 recommends using docker containers so that code execution can happen in a safer manner. Users can use function calls instead of free-form code to execute pre-defined functions only, increasing reliability and safety. Users can also tailor the code execution environment to their requirements.
 - Human involvement: AG2 prioritizes human involvement in multi-agent conversations. Overseers can step in to give agents feedback and steer them in the correct direction, and users can confirm before any code is executed.
 - Agent modularity: Modularity allows agents to have different levels of information access. Additional agents can assume roles that help keep other agents in check. For example, one can easily add a dedicated agent to play the role of safeguard.
 - LLMs: Users can choose the LLM that is optimized for responsible use. The default LLM is GPT-4, which inherits the existing RAI mechanisms and filters from the LLM provider. Caching is enabled by default to increase reliability and control cost. We encourage developers to review [OpenAI’s Usage policies](https://openai.com/policies/usage-policies) and [Azure OpenAI’s Code of Conduct](https://learn.microsoft.com/en-us/legal/cognitive-services/openai/code-of-conduct) when using GPT-4.
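The "Code execution" and "Human involvement" settings described in the updated lines can be combined in a few lines of AG2 code. The following is a minimal sketch using AG2's ConversableAgent-based API; the model name, working directory, and task message are illustrative choices, not prescribed by the FAQ.

```python
# Minimal sketch: Docker-based code execution with a human in the loop.
# Assumes the ag2/autogen package and a running Docker daemon; the model
# name and work_dir are illustrative.
from autogen import AssistantAgent, UserProxyAgent
from autogen.coding import DockerCommandLineCodeExecutor

# Generated code runs inside a container, not on the host machine.
executor = DockerCommandLineCodeExecutor(work_dir="coding")

assistant = AssistantAgent(name="assistant", llm_config={"model": "gpt-4"})

# human_input_mode="ALWAYS" prompts the user before every reply, so a
# person can review suggested code before it executes.
user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="ALWAYS",
    code_execution_config={"executor": executor},
)

user_proxy.initiate_chat(assistant, message="Plot a sine wave and save it to sine.png.")
```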
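For the function-call route in the same bullet, executing only pre-defined functions rather than free-form code, a sketch along these lines fits AG2's register_function helper; get_weather is a hypothetical function used purely for illustration.

```python
# Minimal sketch: restrict agents to pre-defined functions instead of
# free-form code. `get_weather` is a hypothetical example function.
from typing import Annotated
from autogen import AssistantAgent, UserProxyAgent, register_function

assistant = AssistantAgent(name="assistant", llm_config={"model": "gpt-4"})
user_proxy = UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,  # no arbitrary code execution at all
)

def get_weather(city: Annotated[str, "City to look up"]) -> str:
    # Stub implementation; a real tool would call a weather API.
    return f"Sunny in {city}"

# The assistant may only *suggest* this call; the user proxy executes it.
register_function(
    get_weather,
    caller=assistant,
    executor=user_proxy,
    description="Get the current weather for a city.",
)
```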
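The "Agent modularity" point, a dedicated agent playing the role of safeguard, could look roughly like this in an AG2 group chat, reusing the agents from the sketches above; the system message and round limit are illustrative.

```python
# Minimal sketch: a dedicated safeguard agent added to a group chat to
# keep other agents in check. System message and max_round are illustrative.
from autogen import ConversableAgent, GroupChat, GroupChatManager

safeguard = ConversableAgent(
    name="safeguard",
    system_message="Review every proposed action for safety. "
                   "Reply DENY and explain if an action looks risky.",
    llm_config={"model": "gpt-4"},
    human_input_mode="NEVER",
)

groupchat = GroupChat(agents=[assistant, user_proxy, safeguard], messages=[], max_round=12)
manager = GroupChatManager(groupchat=groupchat, llm_config={"model": "gpt-4"})
user_proxy.initiate_chat(manager, message="Install and profile this package.")
```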
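Finally, the caching behavior mentioned under "LLMs" can also be scoped explicitly via AG2's Cache context manager; the seed value below is an arbitrary illustration.

```python
# Minimal sketch: scope LLM-result caching explicitly for reproducibility
# and cost control. The cache seed 42 is an arbitrary illustrative value.
from autogen import Cache

with Cache.disk(cache_seed=42) as cache:
    user_proxy.initiate_chat(assistant, message="Summarize TRANSPARENCY_FAQS.md.", cache=cache)
```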
