This repository has been archived by the owner on Jun 9, 2024. It is now read-only.

# Skeleton-plugin Code structure helper #180

**Open** · wants to merge 9 commits into `master`
3 changes: 2 additions & 1 deletion README.md
@@ -77,7 +77,8 @@ You can also see the plugins here:
| Telegram | A smoothly working Telegram bot that gives you all the messages you would normally get through the Terminal. | [autogpt_plugins/telegram](https://github.com/Significant-Gravitas/Auto-GPT-Plugins/tree/master/src/autogpt_plugins/telegram) |
| Twitter | Auto-GPT is capable of retrieving Twitter posts and other related content by accessing the Twitter platform via the v1.1 API using Tweepy. | [autogpt_plugins/twitter](https://github.com/Significant-Gravitas/Auto-GPT-Plugins/tree/master/src/autogpt_plugins/twitter) |
| Wikipedia Search | This allows Auto-GPT to use Wikipedia directly. | [autogpt_plugins/wikipedia_search](https://github.com/Significant-Gravitas/Auto-GPT-Plugins/tree/master/src/autogpt_plugins/wikipedia_search) |
-| WolframAlpha Search | This allows AutoGPT to use WolframAlpha directly. | [autogpt_plugins/wolframalpha_search](https://github.com/Significant-Gravitas/Auto-GPT-Plugins/tree/master/src/autogpt_plugins/wolframalpha_search)|
+| WolframAlpha Search | This allows AutoGPT to use WolframAlpha directly. | [autogpt_plugins/wolframalpha_search](https://github.com/Significant-Gravitas/Auto-GPT-Plugins/tree/master/src/autogpt_plugins/skeleton)|
+| Skeleton Plugin | This allows AutoGPT to use WolframAlpha directly. | [autogpt_plugins/skeleton](https://github.com/Significant-Gravitas/Auto-GPT-Plugins/tree/master/src/autogpt_plugins/skeleton)|

> **Review comment (Member):** Wolfram Alpha

> **Review comment (Member):** Put this in the right place alphabetically.

Contributors have created some third-party plugins that are not included in this repository. For more information about these plugins, please visit their respective GitHub pages.

94 changes: 94 additions & 0 deletions src/autogpt_plugins/skeleton/README.md
@@ -0,0 +1,94 @@
# AutoGPT Skeleton Plugin
This plugin is based on the AutoGPT Planner Plugin.

## Getting Started

After cloning this repo, add it to the plugins folder of your AutoGPT repo and then run AutoGPT.
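
A rough sketch of where the plugin ends up, assuming a standard AutoGPT checkout (the surrounding paths are illustrative):

    Auto-GPT/
    └── plugins/
        └── Auto-GPT-Plugins/
            └── src/
                └── autogpt_plugins/
                    └── skeleton/
                        ├── __init__.py
                        └── skeleton.py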

Remember to also update your `.env` to include:

    ALLOWLISTED_PLUGINS=SkeletonPlugin
    SKELETON_MODEL=gpt-4
    SKELETON_TOKEN_LIMIT=7500
    SKELETON_TEMPERATURE=0.3

## New Commands

This plugin adds several new commands; here's the list:

```python
prompt.add_command(
"list_code_structure",
"List the current code structure",
{},
list_code_structure,
)

prompt.add_command(
"update_code_structure",
"Update the code structure with descriptions of new files",
{},
update_code_structure,
)

prompt.add_command(
"force_update_code_structure",
"Force update the code structure with new descriptions for all files",
{},
force_update_code_structure,
)

prompt.add_command(
"create_file",
"Creates a new file with a given name and optional initial content",
{
"file_name": "<string>",
"initial_content": "<optional string>",
},
create_file,
)

prompt.add_command(
"write_to_file",
"Writes to a specified file",
{
"file_name": "<string>",
"content": "<string>",
},
write_to_file,
)

prompt.add_command(
"create_directory",
"Creates a new directory",
{
"directory_name": "<string>",
},
create_directory,
)

prompt.add_command(
"change_directory",
"Changes the current directory",
{
"directory_name": "<string>",
},
change_directory,
)

prompt.add_command(
"list_files",
"Lists all the files in the current directory",
{},
list_files,
)

```
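
These commands are implemented in the plugin's `skeleton.py`. As a minimal sketch (this is an illustration of the command shape, not the PR's actual implementation), `create_file` could look along these lines:

```python
import os


def create_file(file_name: str, initial_content: str = "") -> str:
    """Create a new file, optionally seeded with initial content."""
    if os.path.exists(file_name):
        return f"Error: {file_name} already exists."
    with open(file_name, "w", encoding="utf-8") as f:
        f.write(initial_content)
    return f"Created {file_name}."
```

Returning a plain string from each command lets the agent read the outcome of the action directly.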

## New Config Options

By default, the plugin uses whatever your `FAST_LLM_MODEL` environment variable is set to. If none is set, it falls back to `gpt-3.5-turbo`. You can point the plugin at a different model by setting the environment variable `SKELETON_MODEL` to the model you want to use (example: `gpt-4`).

Similarly, the token limit defaults to the `FAST_TOKEN_LIMIT` environment variable. If none is set, it falls back to `1500`. You can give the plugin its own limit by setting `SKELETON_TOKEN_LIMIT` to the desired limit (example: `7500`).

> **Review comment (Member):** I'd probably encourage smart over fast, but the results of that aren't super clear.

The temperature defaults to the `TEMPERATURE` environment variable. If none is set, it falls back to `0.5`. You can give the plugin its own temperature by setting `SKELETON_TEMPERATURE` to the desired temperature (example: `0.3`).
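
A minimal sketch of how these fallbacks could resolve (the helper name `get_skeleton_settings` is hypothetical, not part of the plugin):

```python
import os


def get_skeleton_settings() -> tuple[str, int, float]:
    """Resolve model, token limit, and temperature with their fallbacks.

    Each SKELETON_* variable overrides the corresponding global setting;
    the hard-coded default applies when neither is set.
    """
    model = os.getenv("SKELETON_MODEL") or os.getenv("FAST_LLM_MODEL") or "gpt-3.5-turbo"
    token_limit = int(os.getenv("SKELETON_TOKEN_LIMIT") or os.getenv("FAST_TOKEN_LIMIT") or 1500)
    temperature = float(os.getenv("SKELETON_TEMPERATURE") or os.getenv("TEMPERATURE") or 0.5)
    return model, token_limit, temperature
```
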
271 changes: 271 additions & 0 deletions src/autogpt_plugins/skeleton/__init__.py
@@ -0,0 +1,271 @@
"""This is a skeleton plugin for Auto-GPT which can be used as a template, which keeps track of file structures and
creates files and directories. It also has a simple command to list the files and directories in the current directory.

built by @wladastic on github"""

from typing import Any, Dict, List, Optional, Tuple, TypedDict, TypeVar

from auto_gpt_plugin_template import AutoGPTPluginTemplate

from .skeleton import (
    create_directory,
    create_file,
    force_update_code_structure,
    list_code_structure,
    replace_line_in_file,
    update_code_structure,
    write_to_file,
)


PromptGenerator = TypeVar("PromptGenerator")


class Message(TypedDict):
role: str
content: str


class SkeletonPlugin(AutoGPTPluginTemplate):

> **Review comment (Member):** CodeStructurePlugin seems better and clearer; let's use that everywhere if possible.

"""
    Skeleton plugin for Auto-GPT that keeps track of code structures and creates files and directories.
"""

def __init__(self):
super().__init__()
self._name = "Code-Structure-Plugin"
self._version = "0.0.1"
        self._description = "Keeps track of code structure and creates files and directories."

def post_prompt(self, prompt: PromptGenerator) -> PromptGenerator:
"""This method is called just after the generate_prompt is called,
but actually before the prompt is generated.
Args:
prompt (PromptGenerator): The prompt generator.
Returns:
PromptGenerator: The prompt generator.
"""

prompt.add_command(
"list_code_structure",
"List the current code structure",
{},
list_code_structure,
)

prompt.add_command(
"update_code_structure",
"Update the code structure with descriptions of new files",
{},
update_code_structure,
)

prompt.add_command(
"force_update_code_structure",
"Force update the code structure with new descriptions for all files",
{},
force_update_code_structure,
)

prompt.add_command(
"create_file",
"Creates a new file with a given name and optional initial content",
{
"file_name": "<string>",
"initial_content": "<optional string>",
},
create_file,
)

prompt.add_command(
"write_to_file",
"Writes to a specified file",
{
"file_name": "<string>",
"content": "<string>",
},
write_to_file,
)

prompt.add_command(
"create_directory",
"Creates a new directory",
{
"directory_name": "<string>",
},
create_directory,
)

prompt.add_command(
"replace_lines_in_file",
"Replaces a line in a file",
{
"file_name": "<string>",
"line_number": "<int>",
"content": "<string>",
},
replace_line_in_file,
)

return prompt

def can_handle_post_prompt(self) -> bool:
"""This method is called to check that the plugin can
handle the post_prompt method.
Returns:
bool: True if the plugin can handle the post_prompt method."""
return True

def can_handle_on_response(self) -> bool:
"""This method is called to check that the plugin can
handle the on_response method.
Returns:
bool: True if the plugin can handle the on_response method."""
return False

def on_response(self, response: str, *args, **kwargs) -> str:
"""This method is called when a response is received from the model."""
pass

def can_handle_on_planning(self) -> bool:
"""This method is called to check that the plugin can
handle the on_planning method.
Returns:
bool: True if the plugin can handle the on_planning method."""
return False

def on_planning(
self, prompt: PromptGenerator, messages: List[Message]
) -> Optional[str]:
"""This method is called before the planning chat completion is done.
Args:
prompt (PromptGenerator): The prompt generator.
messages (List[str]): The list of messages.
"""
pass

def can_handle_post_planning(self) -> bool:
"""This method is called to check that the plugin can
handle the post_planning method.
Returns:
bool: True if the plugin can handle the post_planning method."""
return False

def post_planning(self, response: str) -> str:
"""This method is called after the planning chat completion is done.
Args:
response (str): The response.
Returns:
str: The resulting response.
"""
pass

def can_handle_pre_instruction(self) -> bool:
"""This method is called to check that the plugin can
handle the pre_instruction method.
Returns:
bool: True if the plugin can handle the pre_instruction method."""
return False

def pre_instruction(self, messages: List[Message]) -> List[Message]:
"""This method is called before the instruction chat is done.
Args:
messages (List[Message]): The list of context messages.
Returns:
List[Message]: The resulting list of messages.
"""
pass

def can_handle_on_instruction(self) -> bool:
"""This method is called to check that the plugin can
handle the on_instruction method.
Returns:
bool: True if the plugin can handle the on_instruction method."""
return False

def on_instruction(self, messages: List[Message]) -> Optional[str]:
"""This method is called when the instruction chat is done.
Args:
messages (List[Message]): The list of context messages.
Returns:
Optional[str]: The resulting message.
"""
pass

def can_handle_post_instruction(self) -> bool:
"""This method is called to check that the plugin can
handle the post_instruction method.
Returns:
bool: True if the plugin can handle the post_instruction method."""
return False

def post_instruction(self, response: str) -> str:
"""This method is called after the instruction chat is done.
Args:
response (str): The response.
Returns:
str: The resulting response.
"""
pass

def can_handle_pre_command(self) -> bool:
"""This method is called to check that the plugin can
handle the pre_command method.
Returns:
bool: True if the plugin can handle the pre_command method."""
return False

def pre_command(
self, command_name: str, arguments: Dict[str, Any]
) -> Tuple[str, Dict[str, Any]]:
"""This method is called before the command is executed.
Args:
command_name (str): The command name.
arguments (Dict[str, Any]): The arguments.
Returns:
Tuple[str, Dict[str, Any]]: The command name and the arguments.
"""
pass

def can_handle_post_command(self) -> bool:
"""This method is called to check that the plugin can
handle the post_command method.
Returns:
bool: True if the plugin can handle the post_command method."""
return False

def post_command(self, command_name: str, response: str) -> str:
"""This method is called after the command is executed.
Args:
command_name (str): The command name.
response (str): The response.
Returns:
str: The resulting response.
"""
pass

def can_handle_chat_completion(
self, messages: Dict[Any, Any], model: str, temperature: float, max_tokens: int
) -> bool:
"""This method is called to check that the plugin can
handle the chat_completion method.
Args:
messages (List[Message]): The messages.
model (str): The model name.
temperature (float): The temperature.
max_tokens (int): The max tokens.
Returns:
bool: True if the plugin can handle the chat_completion method."""
return False

def handle_chat_completion(
self, messages: List[Message], model: str, temperature: float, max_tokens: int
) -> str:
"""This method is called when the chat completion is done.
Args:
messages (List[Message]): The messages.
model (str): The model name.
temperature (float): The temperature.
max_tokens (int): The max tokens.
Returns:
str: The resulting response.
"""
pass