This repository has been archived by the owner on Jun 9, 2024. It is now read-only.
# Skeleton-plugin Code structure helper #180

**Open**: Wladastic wants to merge 9 commits into `Significant-Gravitas:master` from `Wladastic:skeleton-plugin` (base: `master`)
## Commits (9)

All commits by Wladastic:

- `3124779` setup chat-id for telegram plugin
- `2719daa` fix getUpdates method getting stuck
- `4749a9b` removed unused imports
- `e24f143` init skeleton plugin
- `17c0b7c` remove list_files and change_directory as not working
- `81d5272` fix list code structure
- `a62e6d2` add replace line
- `b7da2d7` fix params
- `bcc2d9b` pr changes
# AutoGPT Skeleton Plugin

This plugin is based on the AutoGPT Planner Plugin.

## Getting Started

After you clone this repo from the original repo, add it to the plugins folder of your AutoGPT repo and then run AutoGPT.
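Assuming the usual Auto-GPT plugin workflow, where the plugin package ends up (typically as a zip archive) inside the `plugins/` folder of your AutoGPT checkout, the step above can be sketched as follows. This helper is illustrative only and not part of this PR; the folder names are assumptions.

```python
import zipfile
from pathlib import Path


def package_plugin(plugin_dir: str, plugins_folder: str) -> Path:
    """Zip a plugin package directory into AutoGPT's plugins folder.

    Illustrative helper: plugin_dir must contain an __init__.py that
    exposes the plugin class (here, SkeletonPlugin).
    """
    src = Path(plugin_dir)
    dest = Path(plugins_folder) / f"{src.name}.zip"
    dest.parent.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(dest, "w", zipfile.ZIP_DEFLATED) as zf:
        for file in src.rglob("*"):
            if file.is_file():
                # Store paths relative to the parent so the archive unpacks
                # as <plugin_name>/__init__.py, which the loader expects.
                zf.write(file, file.relative_to(src.parent))
    return dest
```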
Remember to also update your `.env` to include:

```
ALLOWLISTED_PLUGINS=SkeletonPlugin
SKELETON_MODEL=gpt-4
SKELETON_TOKEN_LIMIT=7500
SKELETON_TEMPERATURE=0.3
```

## New Commands

This plugin adds many new commands; here's the list:
```python
prompt.add_command(
    "list_code_structure",
    "List the current code structure",
    {},
    list_code_structure,
)

prompt.add_command(
    "update_code_structure",
    "Update the code structure with descriptions of new files",
    {},
    update_code_structure,
)

prompt.add_command(
    "force_update_code_structure",
    "Force update the code structure with new descriptions for all files",
    {},
    force_update_code_structure,
)

prompt.add_command(
    "create_file",
    "Creates a new file with a given name and optional initial content",
    {
        "file_name": "<string>",
        "initial_content": "<optional string>",
    },
    create_file,
)

prompt.add_command(
    "write_to_file",
    "Writes to a specified file",
    {
        "file_name": "<string>",
        "content": "<string>",
    },
    write_to_file,
)

prompt.add_command(
    "create_directory",
    "Creates a new directory",
    {
        "directory_name": "<string>",
    },
    create_directory,
)

prompt.add_command(
    "change_directory",
    "Changes the current directory",
    {
        "directory_name": "<string>",
    },
    change_directory,
)

prompt.add_command(
    "list_files",
    "Lists all the files in the current directory",
    {},
    list_files,
)
```
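The plugin's actual command implementations (`skeleton.py`) are not shown in this diff. As a rough sketch of what a command like `create_file` might look like, under the assumption that commands return a status string to the agent (the name matches the PR, but the body is illustrative, not the PR's code):

```python
import os


def create_file(file_name: str, initial_content: str = "") -> str:
    """Hypothetical sketch of the plugin's create_file command: create a
    file, optionally seeding it with initial content, and report back."""
    if os.path.exists(file_name):
        return f"Error: {file_name} already exists."
    parent = os.path.dirname(file_name)
    if parent:
        # Create intermediate directories so nested paths work.
        os.makedirs(parent, exist_ok=True)
    with open(file_name, "w") as f:
        f.write(initial_content)
    return f"Created {file_name}."
```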
## New Config Options

By default, the plugin uses whatever your `FAST_LLM_MODEL` environment variable is set to. If none is set, it falls back to `gpt-3.5-turbo`. You can point the plugin at a different model by setting `SKELETON_MODEL` (example: `gpt-4`).

Similarly, the token limit defaults to the `FAST_TOKEN_LIMIT` environment variable. If none is set, it falls back to `1500`. You can set a different limit for the plugin via `SKELETON_TOKEN_LIMIT` (example: `7500`).

> Reviewer: I'd probably encourage smart over fast, but the results of that aren't super clear.

The temperature defaults to the `TEMPERATURE` environment variable. If none is set, it falls back to `0.5`. You can set a different temperature for the plugin via `SKELETON_TEMPERATURE` (example: `0.3`).
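The fallback chain described above can be sketched as follows. The variable names and defaults come from the README; the exact lookup code is an assumption, not the PR's implementation:

```python
import os


def get_skeleton_config() -> dict:
    """Resolve the plugin's model, token limit, and temperature using the
    fallback chain described above: plugin-specific variable first, then
    the global variable, then a hard-coded default."""
    return {
        "model": os.getenv("SKELETON_MODEL")
        or os.getenv("FAST_LLM_MODEL")
        or "gpt-3.5-turbo",
        "token_limit": int(
            os.getenv("SKELETON_TOKEN_LIMIT") or os.getenv("FAST_TOKEN_LIMIT") or 1500
        ),
        "temperature": float(
            os.getenv("SKELETON_TEMPERATURE") or os.getenv("TEMPERATURE") or 0.5
        ),
    }
```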
```python
"""This is a skeleton plugin for Auto-GPT which can be used as a template. It keeps
track of file structures and creates files and directories. It also has a simple
command to list the files and directories in the current directory.

built by @wladastic on github"""

from typing import Any, Dict, List, Optional, Tuple, TypedDict, TypeVar

from auto_gpt_plugin_template import AutoGPTPluginTemplate

from .skeleton import (
    create_directory,
    create_file,
    force_update_code_structure,
    list_code_structure,
    replace_line_in_file,
    update_code_structure,
    write_to_file,
)

PromptGenerator = TypeVar("PromptGenerator")


class Message(TypedDict):
    role: str
    content: str


# Reviewer: "CodeStructurePlugin seems better and more clear, let's use that
# everywhere if possible."
class SkeletonPlugin(AutoGPTPluginTemplate):
    """
    Skeleton plugin for Auto-GPT which can keep track of code structures and
    create files and directories.
    """

    def __init__(self):
        super().__init__()
        self._name = "Code-Structure-Plugin"
        self._version = "0.0.1"
        self._description = ""

    def post_prompt(self, prompt: PromptGenerator) -> PromptGenerator:
        """This method is called just after generate_prompt is called,
        but actually before the prompt is generated.
        Args:
            prompt (PromptGenerator): The prompt generator.
        Returns:
            PromptGenerator: The prompt generator.
        """
        prompt.add_command(
            "list_code_structure",
            "List the current code structure",
            {},
            list_code_structure,
        )

        prompt.add_command(
            "update_code_structure",
            "Update the code structure with descriptions of new files",
            {},
            update_code_structure,
        )

        prompt.add_command(
            "force_update_code_structure",
            "Force update the code structure with new descriptions for all files",
            {},
            force_update_code_structure,
        )

        prompt.add_command(
            "create_file",
            "Creates a new file with a given name and optional initial content",
            {
                "file_name": "<string>",
                "initial_content": "<optional string>",
            },
            create_file,
        )

        prompt.add_command(
            "write_to_file",
            "Writes to a specified file",
            {
                "file_name": "<string>",
                "content": "<string>",
            },
            write_to_file,
        )

        prompt.add_command(
            "create_directory",
            "Creates a new directory",
            {
                "directory_name": "<string>",
            },
            create_directory,
        )

        prompt.add_command(
            "replace_line_in_file",
            "Replaces a line in a file",
            {
                "file_name": "<string>",
                "line_number": "<int>",
                "content": "<string>",
            },
            replace_line_in_file,
        )

        return prompt

    def can_handle_post_prompt(self) -> bool:
        """This method is called to check that the plugin can
        handle the post_prompt method.
        Returns:
            bool: True if the plugin can handle the post_prompt method."""
        return True

    def can_handle_on_response(self) -> bool:
        """This method is called to check that the plugin can
        handle the on_response method.
        Returns:
            bool: True if the plugin can handle the on_response method."""
        return False

    def on_response(self, response: str, *args, **kwargs) -> str:
        """This method is called when a response is received from the model."""
        pass

    def can_handle_on_planning(self) -> bool:
        """This method is called to check that the plugin can
        handle the on_planning method.
        Returns:
            bool: True if the plugin can handle the on_planning method."""
        return False

    def on_planning(
        self, prompt: PromptGenerator, messages: List[Message]
    ) -> Optional[str]:
        """This method is called before the planning chat completion is done.
        Args:
            prompt (PromptGenerator): The prompt generator.
            messages (List[Message]): The list of messages.
        """
        pass

    def can_handle_post_planning(self) -> bool:
        """This method is called to check that the plugin can
        handle the post_planning method.
        Returns:
            bool: True if the plugin can handle the post_planning method."""
        return False

    def post_planning(self, response: str) -> str:
        """This method is called after the planning chat completion is done.
        Args:
            response (str): The response.
        Returns:
            str: The resulting response.
        """
        pass

    def can_handle_pre_instruction(self) -> bool:
        """This method is called to check that the plugin can
        handle the pre_instruction method.
        Returns:
            bool: True if the plugin can handle the pre_instruction method."""
        return False

    def pre_instruction(self, messages: List[Message]) -> List[Message]:
        """This method is called before the instruction chat is done.
        Args:
            messages (List[Message]): The list of context messages.
        Returns:
            List[Message]: The resulting list of messages.
        """
        pass

    def can_handle_on_instruction(self) -> bool:
        """This method is called to check that the plugin can
        handle the on_instruction method.
        Returns:
            bool: True if the plugin can handle the on_instruction method."""
        return False

    def on_instruction(self, messages: List[Message]) -> Optional[str]:
        """This method is called when the instruction chat is done.
        Args:
            messages (List[Message]): The list of context messages.
        Returns:
            Optional[str]: The resulting message.
        """
        pass

    def can_handle_post_instruction(self) -> bool:
        """This method is called to check that the plugin can
        handle the post_instruction method.
        Returns:
            bool: True if the plugin can handle the post_instruction method."""
        return False

    def post_instruction(self, response: str) -> str:
        """This method is called after the instruction chat is done.
        Args:
            response (str): The response.
        Returns:
            str: The resulting response.
        """
        pass

    def can_handle_pre_command(self) -> bool:
        """This method is called to check that the plugin can
        handle the pre_command method.
        Returns:
            bool: True if the plugin can handle the pre_command method."""
        return False

    def pre_command(
        self, command_name: str, arguments: Dict[str, Any]
    ) -> Tuple[str, Dict[str, Any]]:
        """This method is called before the command is executed.
        Args:
            command_name (str): The command name.
            arguments (Dict[str, Any]): The arguments.
        Returns:
            Tuple[str, Dict[str, Any]]: The command name and the arguments.
        """
        pass

    def can_handle_post_command(self) -> bool:
        """This method is called to check that the plugin can
        handle the post_command method.
        Returns:
            bool: True if the plugin can handle the post_command method."""
        return False

    def post_command(self, command_name: str, response: str) -> str:
        """This method is called after the command is executed.
        Args:
            command_name (str): The command name.
            response (str): The response.
        Returns:
            str: The resulting response.
        """
        pass

    def can_handle_chat_completion(
        self, messages: Dict[Any, Any], model: str, temperature: float, max_tokens: int
    ) -> bool:
        """This method is called to check that the plugin can
        handle the chat_completion method.
        Args:
            messages (List[Message]): The messages.
            model (str): The model name.
            temperature (float): The temperature.
            max_tokens (int): The max tokens.
        Returns:
            bool: True if the plugin can handle the chat_completion method."""
        return False

    def handle_chat_completion(
        self, messages: List[Message], model: str, temperature: float, max_tokens: int
    ) -> str:
        """This method is called when the chat completion is done.
        Args:
            messages (List[Message]): The messages.
            model (str): The model name.
            temperature (float): The temperature.
            max_tokens (int): The max tokens.
        Returns:
            str: The resulting response.
        """
        pass
```
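To see how AutoGPT consumes the `post_prompt` hook above, here is a self-contained sketch with a minimal stand-in for `PromptGenerator`. The real class lives in AutoGPT; this stub only accepts the four positional arguments the plugin passes to `add_command`, and `register_plugin_commands` is an assumed name for the loader-side logic, not AutoGPT's actual function:

```python
from typing import Callable, Dict, List, Tuple


class StubPromptGenerator:
    """Minimal stand-in for AutoGPT's PromptGenerator: it just records the
    commands a plugin registers via add_command()."""

    def __init__(self) -> None:
        self.commands: List[Tuple[str, str, Dict, Callable]] = []

    def add_command(self, label: str, description: str, args: Dict, function: Callable) -> None:
        # Record the registration; the real class also renders these into
        # the prompt text shown to the model.
        self.commands.append((label, description, args, function))


def register_plugin_commands(plugin, prompt):
    """Sketch of the loader-side flow: ask the plugin whether it wants the
    post_prompt hook, and if so let it add its commands."""
    if plugin.can_handle_post_prompt():
        prompt = plugin.post_prompt(prompt)
    return prompt
```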
> Reviewer (on the Wolfram Alpha entry): Put this in the right place alphabetically.