diff --git a/README.md b/README.md index 5a2e2645..34419037 100644 --- a/README.md +++ b/README.md @@ -1,11 +1,11 @@ -# Shell GPT -A command-line productivity tool powered by OpenAI's GPT-3.5 model. As developers, we can leverage AI capabilities to generate shell commands, code snippets, comments, and documentation, among other things. Forget about cheat sheets and notes, with this tool you can get accurate answers right in your terminal, and you'll probably find yourself reducing your daily Google searches, saving you valuable time and effort. +# ShellGPT +A command-line productivity tool powered by OpenAI's GPT models. As developers, we can leverage AI capabilities to generate shell commands, code snippets, comments, and documentation, among other things. Forget about cheat sheets and notes, with this tool you can get accurate answers right in your terminal, and you'll probably find yourself reducing your daily Google searches, saving you valuable time and effort. ShellGPT is cross-platform compatible and supports all major operating systems, including Linux, macOS, and Windows with all major shells, such as PowerShell, CMD, Bash, Zsh, Fish, and many others. https://user-images.githubusercontent.com/16740832/231569156-a3a9f9d4-18b1-4fff-a6e1-6807651aa894.mp4 ## Installation ```shell -pip install shell-gpt==0.8.9 +pip install shell-gpt==0.9.0 ``` You'll need an OpenAI API key, you can generate one [here](https://beta.openai.com/account/api-keys). @@ -218,9 +218,9 @@ print(response.text) ``` ### Chat sessions -To list all the current chat sessions, use the `--list-chat` option: +To list all the current chat sessions, use the `--list-chats` option: ```shell -sgpt --list-chat +sgpt --list-chats # .../shell_gpt/chat_cache/number # .../shell_gpt/chat_cache/python_request ``` @@ -233,6 +233,26 @@ sgpt --show-chat number # assistant: Your favorite number is 4, so if we add 4 to it, the result would be 8. 
``` +### Roles +ShellGPT allows you to create custom roles, which can be utilized to generate code, shell commands, or to fulfill your specific needs. To create a new role, use the `--create-role` option followed by the role name. You will be prompted to provide a description for the role, along with other details. This will create a JSON file in `~/.config/shell_gpt/roles` with the role name. Inside this directory, you can also edit default `sgpt` roles, such as **shell**, **code**, and **default**. Use the `--list-roles` option to list all available roles, and the `--show-role` option to display the details of a specific role. Here's an example of a custom role: +```shell +sgpt --create-role json +# Enter role description: You are a JSON generator, provide only valid JSON as response. +# Enter expecting result, e.g. answer, code, shell command, etc.: json +sgpt --role json "random: user, password, email, address" +{ + "user": "JohnDoe", + "password": "p@ssw0rd", + "email": "johndoe@example.com", + "address": { + "street": "123 Main St", + "city": "Anytown", + "state": "CA", + "zip": "12345" + } +} +``` + ### Request cache Control cache using `--cache` (default) and `--no-cache` options. This caching applies for all `sgpt` requests to OpenAI API: ```shell @@ -264,32 +284,42 @@ REQUEST_TIMEOUT=60 DEFAULT_MODEL=gpt-3.5-turbo # Default color for OpenAI completions. DEFAULT_COLOR=magenta +# Force the use of system role messages (not recommended). +SYSTEM_ROLES=false ``` Possible options for `DEFAULT_COLOR`: black, red, green, yellow, blue, magenta, cyan, white, bright_black, bright_red, bright_green, bright_yellow, bright_blue, bright_magenta, bright_cyan, bright_white. +Enable `SYSTEM_ROLES` to force the use of [system roles](https://help.openai.com/en/articles/7042661-chatgpt-api-transition-guide) messages. This is not recommended, since it doesn't perform well with current GPT models. 
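The role storage described above is plain one-file-per-role JSON. A minimal sketch of the format, assuming the field names used by `SystemRole` in `sgpt/role.py` (`name`, `role`, `expecting`, `variables`) and using a temporary directory in place of `~/.config/shell_gpt/roles` — not sgpt's actual implementation:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

# Illustrative sketch only: one JSON file per role, named after the role.
# The temp dir stands in for ~/.config/shell_gpt/roles.
with TemporaryDirectory() as tmp:
    storage = Path(tmp) / "roles"
    storage.mkdir(parents=True, exist_ok=True)

    role = {
        "name": "json",
        "role": "You are a JSON generator, provide only valid JSON as response.",
        "expecting": "json",
        "variables": None,
    }
    (storage / f"{role['name']}.json").write_text(json.dumps(role))

    # Roughly what `sgpt --show-role json` reads back before printing.
    loaded = json.loads((storage / "json.json").read_text())
    print(loaded["expecting"])
```

Editing such a file by hand (including the default **shell**, **code**, and **default** roles) changes what `sgpt --role <name>` sends as the system role.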
+ ### Full list of arguments ```text -╭─ Arguments ────────────────────────────────────────────────────────────────────────────────────────────────╮ -│ prompt [PROMPT] The prompt to generate completions for. │ -╰────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ -╭─ Options ──────────────────────────────────────────────────────────────────────────────────────────────────╮ -│ --model [gpt-3.5-turbo|gpt-4|gpt-4-32k] OpenAI GPT model to use. [default: gpt-3.5-turbo] │ -│ --temperature FLOAT RANGE [0.0<=x<=1.0] Randomness of generated output. [default: 0.1] │ -│ --top-probability FLOAT RANGE [0.1<=x<=1.0] Limits highest probable tokens (words). [default: 1.0] │ -│ --editor Open $EDITOR to provide a prompt. [default: no-editor] │ -│ --cache Cache completion results. [default: cache] │ -│ --help Show this message and exit. │ -╰────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ -╭─ Assistance Options ───────────────────────────────────────────────────────────────────────────────────────╮ -│ --shell -s Generate and execute shell commands. │ -│ --code --no-code Generate only code. [default: no-code] │ -╰────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ -╭─ Chat Options ─────────────────────────────────────────────────────────────────────────────────────────────╮ -│ --chat TEXT Follow conversation with id, use "temp" for quick session. [default: None] │ -│ --repl TEXT Start a REPL (Read–eval–print loop) session. [default: None] │ -│ --show-chat TEXT Show all messages from provided chat id. [default: None] │ -│ --list-chat List all existing chat ids. 
[default: no-list-chat] │ -╰────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ +╭─ Arguments ─────────────────────────────────────────────────────────────────────────────────────────────────╮ +│ prompt [PROMPT] The prompt to generate completions for. │ +╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ +╭─ Options ───────────────────────────────────────────────────────────────────────────────────────────────────╮ +│ --model [gpt-3.5-turbo|gpt-4|gpt-4-32k] OpenAI GPT model to use. [default: gpt-3.5-turbo] │ +│ --temperature FLOAT RANGE [0.0<=x<=1.0] Randomness of generated output. [default: 0.1] │ +│ --top-probability FLOAT RANGE [0.1<=x<=1.0] Limits highest probable tokens (words). [default: 1.0] │ +│ --editor Open $EDITOR to provide a prompt. [default: no-editor] │ +│ --cache Cache completion results. [default: cache] │ +│ --help Show this message and exit. │ +╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ +╭─ Assistance Options ────────────────────────────────────────────────────────────────────────────────────────╮ +│ --shell -s Generate and execute shell commands. │ +│ --code --no-code Generate only code. [default: no-code] │ +╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ +╭─ Chat Options ──────────────────────────────────────────────────────────────────────────────────────────────╮ +│ --chat TEXT Follow conversation with id, use "temp" for quick session. [default: None] │ +│ --repl TEXT Start a REPL (Read–eval–print loop) session. [default: None] │ +│ --show-chat TEXT Show all messages from provided chat id. [default: None] │ +│ --list-chats List all existing chat ids. 
[default: no-list-chats] │ +╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ +╭─ Role Options ──────────────────────────────────────────────────────────────────────────────────────────────╮ +│ --role TEXT System role for GPT model. [default: None] │ +│ --create-role TEXT Create role. [default: None] │ +│ --show-role TEXT Show role. [default: None] │ +│ --list-roles List roles. [default: no-list-roles] │ +╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ ``` ## Docker diff --git a/sgpt/__init__.py b/sgpt/__init__.py index 3b35d9b2..514e2c53 100644 --- a/sgpt/__init__.py +++ b/sgpt/__init__.py @@ -1,12 +1,4 @@ -from .config import cfg as cfg -from .cache import Cache as Cache -from .client import OpenAIClient as OpenAIClient -from .handlers.chat_handler import ChatHandler as ChatHandler -from .handlers.default_handler import DefaultHandler as DefaultHandler -from .handlers.repl_handler import ReplHandler as ReplHandler -from . import utils as utils from .app import main as main from .app import entry_point as cli # noqa: F401 -from . import make_prompt as make_prompt -__version__ = "0.8.9" +__version__ = "0.9.0" diff --git a/sgpt/app.py b/sgpt/app.py index a367103c..ef226a7e 100644 --- a/sgpt/app.py +++ b/sgpt/app.py @@ -1,24 +1,23 @@ """ -shell-gpt: An interface to OpenAI's ChatGPT (GPT-3.5) API - -This module provides a simple interface for OpenAI's ChatGPT API using Typer +This module provides a simple interface for OpenAI API using Typer as the command line interface. It supports different modes of output including shell commands and code, and allows users to specify the desired OpenAI model and length and other options of the output. Additionally, it supports executing shell commands directly from the interface. - -API Key is stored locally for easy use in future runs. """ # To allow users to use arrow keys in the REPL. 
import readline # noqa: F401 import sys import typer - -# Click is part of typer. from click import BadArgumentUsage, MissingParameter -from sgpt import ChatHandler, DefaultHandler, OpenAIClient, ReplHandler, cfg +from sgpt.client import OpenAIClient +from sgpt.config import cfg +from sgpt.handlers.chat_handler import ChatHandler +from sgpt.handlers.default_handler import DefaultHandler +from sgpt.handlers.repl_handler import ReplHandler +from sgpt.role import DefaultRoles, SystemRole from sgpt.utils import ModelOptions, get_edited_prompt, run_command @@ -80,17 +79,40 @@ def main( callback=ChatHandler.show_messages_callback, rich_help_panel="Chat Options", ), - list_chat: bool = typer.Option( + list_chats: bool = typer.Option( False, help="List all existing chat ids.", callback=ChatHandler.list_ids, rich_help_panel="Chat Options", ), + role: str = typer.Option( + None, + help="System role for GPT model.", + rich_help_panel="Role Options", + ), + create_role: str = typer.Option( + None, + help="Create role.", + callback=SystemRole.create, + rich_help_panel="Role Options", + ), + show_role: str = typer.Option( + None, + help="Show role.", + callback=SystemRole.show, + rich_help_panel="Role Options", + ), + list_roles: bool = typer.Option( + False, + help="List roles.", + callback=SystemRole.list, + rich_help_panel="Role Options", + ), ) -> None: stdin_passed = not sys.stdin.isatty() if stdin_passed and not repl: - prompt = sys.stdin.read() + (prompt or "") + prompt = f"{sys.stdin.read()}\n\n{prompt or ''}" if not prompt and not editor and not repl: raise MissingParameter(param_hint="PROMPT", param_type="string") @@ -111,9 +133,11 @@ def main( api_key = cfg.get("OPENAI_API_KEY") client = OpenAIClient(api_host, api_key) + role_class = DefaultRoles.get(shell, code) if not role else SystemRole.get(role) + if repl: # Will be in infinite loop here until user exits with Ctrl+C. 
- ReplHandler(client, repl, shell, code).handle( + ReplHandler(client, repl, role_class).handle( prompt, model=model.value, temperature=temperature, @@ -123,7 +147,7 @@ def main( ) if chat: - full_completion = ChatHandler(client, chat, shell, code).handle( + full_completion = ChatHandler(client, chat, role_class).handle( prompt, model=model.value, temperature=temperature, @@ -132,7 +156,7 @@ def main( caching=cache, ) else: - full_completion = DefaultHandler(client, shell, code).handle( + full_completion = DefaultHandler(client, role_class).handle( prompt, model=model.value, temperature=temperature, diff --git a/sgpt/client.py b/sgpt/client.py index 90220f5e..fd779529 100644 --- a/sgpt/client.py +++ b/sgpt/client.py @@ -4,7 +4,8 @@ import requests -from sgpt import Cache, cfg +from .cache import Cache +from .config import cfg CACHE_LENGTH = int(cfg.get("CACHE_LENGTH")) CACHE_PATH = Path(cfg.get("CACHE_PATH")) @@ -27,7 +28,7 @@ def _request( top_probability: float = 1, ) -> Generator[str, None, None]: """ - Make request to OpenAI ChatGPT API, read more: + Make request to OpenAI API, read more: https://platform.openai.com/docs/api-reference/chat :param messages: List of messages {"role": user or assistant, "content": message_string} diff --git a/sgpt/config.py b/sgpt/config.py index 0fb85560..46367ad3 100644 --- a/sgpt/config.py +++ b/sgpt/config.py @@ -6,26 +6,28 @@ from click import UsageError -from sgpt.utils import ModelOptions +from .utils import ModelOptions CONFIG_FOLDER = os.path.expanduser("~/.config") -CONFIG_PATH = Path(CONFIG_FOLDER) / "shell_gpt" / ".sgptrc" +SHELL_GPT_CONFIG_FOLDER = Path(CONFIG_FOLDER) / "shell_gpt" +SHELL_GPT_CONFIG_PATH = SHELL_GPT_CONFIG_FOLDER / ".sgptrc" +ROLE_STORAGE_PATH = SHELL_GPT_CONFIG_FOLDER / "roles" +CHAT_CACHE_PATH = Path(gettempdir()) / "chat_cache" +CACHE_PATH = Path(gettempdir()) / "cache" # TODO: Refactor ENV variables with SGPT_ prefix. DEFAULT_CONFIG = { # TODO: Refactor it to CHAT_STORAGE_PATH. 
- "CHAT_CACHE_PATH": os.getenv( - "CHAT_CACHE_PATH", str(Path(gettempdir()) / "shell_gpt" / "chat_cache") - ), - "CACHE_PATH": os.getenv( - "CACHE_PATH", str(Path(gettempdir()) / "shell_gpt" / "cache") - ), + "CHAT_CACHE_PATH": os.getenv("CHAT_CACHE_PATH", str(CHAT_CACHE_PATH)), + "CACHE_PATH": os.getenv("CACHE_PATH", str(CACHE_PATH)), "CHAT_CACHE_LENGTH": int(os.getenv("CHAT_CACHE_LENGTH", "100")), "CACHE_LENGTH": int(os.getenv("CHAT_CACHE_LENGTH", "100")), "REQUEST_TIMEOUT": int(os.getenv("REQUEST_TIMEOUT", "60")), "DEFAULT_MODEL": os.getenv("DEFAULT_MODEL", ModelOptions.GPT3.value), "OPENAI_API_HOST": os.getenv("OPENAI_API_HOST", "https://api.openai.com"), "DEFAULT_COLOR": os.getenv("DEFAULT_COLOR", "magenta"), + "ROLE_STORAGE_PATH": os.getenv("ROLE_STORAGE_PATH", str(ROLE_STORAGE_PATH)), + "SYSTEM_ROLES": os.getenv("SYSTEM_ROLES", "false"), # New features might add their own config variables here. } @@ -77,4 +79,4 @@ def get(self, key: str) -> str: # type: ignore return value -cfg = Config(CONFIG_PATH, **DEFAULT_CONFIG) +cfg = Config(SHELL_GPT_CONFIG_PATH, **DEFAULT_CONFIG) diff --git a/sgpt/handlers/chat_handler.py b/sgpt/handlers/chat_handler.py index fe99efe9..6956427a 100644 --- a/sgpt/handlers/chat_handler.py +++ b/sgpt/handlers/chat_handler.py @@ -5,9 +5,10 @@ import typer from click import BadArgumentUsage -from sgpt import OpenAIClient, cfg, make_prompt -from sgpt.handlers.handler import Handler -from sgpt.utils import CompletionModes +from ..client import OpenAIClient +from ..config import cfg +from ..role import SystemRole +from .handler import Handler CHAT_CACHE_LENGTH = int(cfg.get("CHAT_CACHE_LENGTH")) CHAT_CACHE_PATH = Path(cfg.get("CHAT_CACHE_PATH")) @@ -94,15 +95,12 @@ def __init__( self, client: OpenAIClient, chat_id: str, - shell: bool = False, - code: bool = False, - model: str = "gpt-3.5-turbo", + role: SystemRole, ) -> None: - super().__init__(client) + super().__init__(client, role) self.chat_id = chat_id self.client = client - self.mode = 
CompletionModes.get_mode(shell, code) - self.model = model + self.role = role if chat_id == "temp": # If the chat id is "temp", we don't want to save the chat session. @@ -124,20 +122,15 @@ def initiated(self) -> bool: return self.chat_session.exists(self.chat_id) @property - def is_shell_chat(self) -> bool: - # TODO: Should be optimized for REPL mode. - chat_history = self.chat_session.get_messages(self.chat_id) - return bool(chat_history and chat_history[0].endswith("###\nCommand:")) - - @property - def is_code_chat(self) -> bool: + def initial_message(self) -> str: chat_history = self.chat_session.get_messages(self.chat_id) - return bool(chat_history and chat_history[0].endswith("###\nCode:")) + index = 1 if cfg.get("SYSTEM_ROLES") == "true" else 0 + return chat_history[index] if chat_history else "" @property - def is_default_chat(self) -> bool: - chat_history = self.chat_session.get_messages(self.chat_id) - return bool(chat_history and chat_history[0].endswith("###")) + def is_same_role(self) -> bool: + # TODO: Should be optimized for REPL mode. + return self.role.same_role(self.initial_message) @classmethod def show_messages_callback(cls, chat_id: str) -> None: @@ -150,47 +143,40 @@ def show_messages_callback(cls, chat_id: str) -> None: def show_messages(cls, chat_id: str) -> None: # Prints all messages from a specified chat ID to the console. for index, message in enumerate(cls.chat_session.get_messages(chat_id)): - message = message.replace("\nCommand:", "").replace("\nCode:", "") - color = "cyan" if index % 2 == 0 else "green" + # Remove output type from the message, e.g. 
"text\nCommand:" -> "text" + if message.startswith("user:"): + message = "\n".join(message.splitlines()[:-1]) + color = "magenta" if index % 2 == 0 else "green" typer.secho(message, fg=color) def validate(self) -> None: if self.initiated: - if self.is_shell_chat and self.mode == CompletionModes.CODE: + # print("initial message:", self.initial_message) + chat_role_name = self.role.get_role_name(self.initial_message) + if not chat_role_name: raise BadArgumentUsage( - f'Chat session "{self.chat_id}" was initiated as shell assistant, ' - "and can be used with --shell only" + f'Could not determine chat role of "{self.chat_id}"' ) - if self.is_code_chat and self.mode == CompletionModes.SHELL: - raise BadArgumentUsage( - f'Chat "{self.chat_id}" was initiated as code assistant, ' - "and can be used with --code only" - ) - if self.is_default_chat and self.mode != CompletionModes.NORMAL: - raise BadArgumentUsage( - f'Chat "{self.chat_id}" was initiated as default assistant, ' - "and can't be used with --shell or --code" - ) - # If user didn't pass chat mode, we will use the one that was used to initiate the chat. - if self.mode == CompletionModes.NORMAL: - if self.is_shell_chat: - self.mode = CompletionModes.SHELL - elif self.is_code_chat: - self.mode = CompletionModes.CODE + if self.role.name == "default": + # If user didn't pass chat mode, we will use the one that was used to initiate the chat. + self.role = SystemRole.get(chat_role_name) + else: + if not self.is_same_role: + raise BadArgumentUsage( + f'Cant change chat role to "{self.role.name}" ' + f'since it was initiated as "{chat_role_name}" chat.' 
+ ) def make_prompt(self, prompt: str) -> str: prompt = prompt.strip() - if self.initiated: - if self.is_shell_chat: - prompt += "\nCommand:" - elif self.is_code_chat: - prompt += "\nCode:" - return prompt - return make_prompt.initial( - prompt, - self.mode == CompletionModes.SHELL, - self.mode == CompletionModes.CODE, - ) + return self.role.make_prompt(prompt, not self.initiated) + + def make_messages(self, prompt: str) -> List[Dict[str, str]]: + messages = [] + if not self.initiated and cfg.get("SYSTEM_ROLES") == "true": + messages.append({"role": "system", "content": self.role.role}) + messages.append({"role": "user", "content": prompt}) + return messages @chat_session def get_completion( diff --git a/sgpt/handlers/default_handler.py b/sgpt/handlers/default_handler.py index cb0b7f88..7afa5644 100644 --- a/sgpt/handlers/default_handler.py +++ b/sgpt/handlers/default_handler.py @@ -1,8 +1,9 @@ from pathlib import Path +from typing import Dict, List -from sgpt import OpenAIClient, cfg, make_prompt -from sgpt.utils import CompletionModes - +from ..client import OpenAIClient +from ..config import cfg +from ..role import SystemRole from .handler import Handler CHAT_CACHE_LENGTH = int(cfg.get("CHAT_CACHE_LENGTH")) @@ -13,19 +14,19 @@ class DefaultHandler(Handler): def __init__( self, client: OpenAIClient, - shell: bool = False, - code: bool = False, - model: str = "gpt-3.5-turbo", + role: SystemRole, ) -> None: - super().__init__(client) + super().__init__(client, role) self.client = client - self.mode = CompletionModes.get_mode(shell, code) - self.model = model + self.role = role def make_prompt(self, prompt: str) -> str: prompt = prompt.strip() - return make_prompt.initial( - prompt, - self.mode == CompletionModes.SHELL, - self.mode == CompletionModes.CODE, - ) + return self.role.make_prompt(prompt, initial=True) + + def make_messages(self, prompt: str) -> List[Dict[str, str]]: + messages = [] + if cfg.get("SYSTEM_ROLES") == "true": + messages.append({"role": 
"system", "content": self.role.role}) + messages.append({"role": "user", "content": prompt}) + return messages diff --git a/sgpt/handlers/handler.py b/sgpt/handlers/handler.py index 259cd847..c436df1f 100644 --- a/sgpt/handlers/handler.py +++ b/sgpt/handlers/handler.py @@ -2,36 +2,28 @@ import typer -from sgpt import OpenAIClient, cfg +from ..client import OpenAIClient +from ..config import cfg +from ..role import SystemRole class Handler: - def __init__(self, client: OpenAIClient) -> None: + def __init__(self, client: OpenAIClient, role: SystemRole) -> None: self.client = client + self.role = role self.color = cfg.get("DEFAULT_COLOR") def make_prompt(self, prompt: str) -> str: raise NotImplementedError - def get_completion( - self, - messages: List[Dict[str, str]], - model: str = "gpt-3.5-turbo", - temperature: float = 1, - top_probability: float = 1, - caching: bool = True, - ) -> Generator[str, None, None]: - yield from self.client.get_completion( - messages, - model, - temperature, - top_probability, - caching=caching, - ) + def make_messages(self, prompt: str) -> List[Dict[str, str]]: + raise NotImplementedError + + def get_completion(self, **kwargs: Any) -> Generator[str, None, None]: + yield from self.client.get_completion(**kwargs) def handle(self, prompt: str, **kwargs: Any) -> str: - prompt = self.make_prompt(prompt) - messages = [{"role": "user", "content": prompt}] + messages = self.make_messages(self.make_prompt(prompt)) full_completion = "" for word in self.get_completion(messages=messages, **kwargs): typer.secho(word, fg=self.color, bold=True, nl=False) diff --git a/sgpt/handlers/repl_handler.py b/sgpt/handlers/repl_handler.py index f6c7ce30..a17ee783 100644 --- a/sgpt/handlers/repl_handler.py +++ b/sgpt/handlers/repl_handler.py @@ -4,21 +4,15 @@ from rich import print as rich_print from rich.rule import Rule -from sgpt.client import OpenAIClient -from sgpt.handlers.chat_handler import ChatHandler -from sgpt.utils import CompletionModes, run_command 
+from ..client import OpenAIClient +from ..role import DefaultRoles, SystemRole +from ..utils import run_command +from .chat_handler import ChatHandler class ReplHandler(ChatHandler): - def __init__( - self, - client: OpenAIClient, - chat_id: str, - shell: bool = False, - code: bool = False, - model: str = "gpt-3.5-turbo", - ): - super().__init__(client, chat_id, shell, code, model) + def __init__(self, client: OpenAIClient, chat_id: str, role: SystemRole) -> None: + super().__init__(client, chat_id, role) def handle(self, prompt: str, **kwargs: Any) -> None: # type: ignore if self.initiated: @@ -28,7 +22,7 @@ def handle(self, prompt: str, **kwargs: Any) -> None: # type: ignore info_message = ( "Entering REPL mode, press Ctrl+C to exit." - if not self.mode == CompletionModes.SHELL + if not self.role.name == DefaultRoles.SHELL.value else "Entering shell REPL mode, type [e] to execute commands or press Ctrl+C to exit." ) typer.secho(info_message, fg="yellow") @@ -44,7 +38,7 @@ def handle(self, prompt: str, **kwargs: Any) -> None: # type: ignore if prompt == "exit()": # This is also useful during tests. raise typer.Exit() - if self.mode == CompletionModes.SHELL: + if self.role.name == DefaultRoles.SHELL.value: if prompt == "e": typer.echo() run_command(full_completion) diff --git a/sgpt/role.py b/sgpt/role.py new file mode 100644 index 00000000..679671b3 --- /dev/null +++ b/sgpt/role.py @@ -0,0 +1,197 @@ +import json +import platform +from enum import Enum +from os import getenv, pathsep +from os.path import basename +from pathlib import Path +from typing import Dict, Optional + +import typer +from click import BadArgumentUsage +from distro import name as distro_name + +from .config import cfg +from .utils import option_callback + +SHELL_ROLE = """Provide only {shell} commands for {os} without any description. +If there is a lack of details, provide most logical solution. +Ensure the output is a valid shell command. 
+If multiple steps required try to combine them together.""" + +CODE_ROLE = """Provide only code as output without any description. +IMPORTANT: Provide only plain text without Markdown formatting. +IMPORTANT: Do not include markdown formatting such as ```. +If there is a lack of details, provide most logical solution. +You are not allowed to ask for more details. +Ignore any potential risk of errors or confusion.""" + +DEFAULT_ROLE = """You are Command Line App ShellGPT, a programming and system administration assistant. +You are managing {os} operating system with {shell} shell. +Provide only plain text without Markdown formatting. +Do not show any warnings or information regarding your capabilities. +If you need to store any data, assume it will be stored in the chat.""" + + +PROMPT_TEMPLATE = """### +Role name: {name} +{role} + +Request: {request} +### +{expecting}:""" + + +class SystemRole: + storage: Path = Path(cfg.get("ROLE_STORAGE_PATH")) + + def __init__( + self, + name: str, + role: str, + expecting: str, + variables: Optional[Dict[str, str]] = None, + ) -> None: + self.storage.mkdir(parents=True, exist_ok=True) + self.name = name + self.expecting = expecting + self.variables = variables + if variables: + # Variables are for internal use only. 
+ role = role.format(**variables) + self.role = role + + @classmethod + def create_defaults(cls) -> None: + cls.storage.parent.mkdir(parents=True, exist_ok=True) + variables = {"shell": cls.shell_name(), "os": cls.os_name()} + for default_role in ( + SystemRole("default", DEFAULT_ROLE, "Answer", variables), + SystemRole("shell", SHELL_ROLE, "Command", variables), + SystemRole("code", CODE_ROLE, "Code"), + ): + if not default_role.exists: + default_role.save() + + @classmethod + def os_name(cls) -> str: + current_platform = platform.system() + if current_platform == "Linux": + return "Linux/" + distro_name(pretty=True) + if current_platform == "Windows": + return "Windows " + platform.release() + if current_platform == "Darwin": + return "Darwin/MacOS " + platform.mac_ver()[0] + return current_platform + + @classmethod + def shell_name(cls) -> str: + current_platform = platform.system() + if current_platform in ("Windows", "nt"): + is_powershell = len(getenv("PSModulePath", "").split(pathsep)) >= 3 + return "powershell.exe" if is_powershell else "cmd.exe" + return basename(getenv("SHELL", "/bin/sh")) + + @classmethod + def get_role_name(cls, initial_message: str) -> Optional[str]: + if not initial_message: + return None + message_lines = initial_message.splitlines() + if "###" in message_lines[0]: + return message_lines[1].split("Role name: ")[1].strip() + return None + + @classmethod + def get(cls, name: str) -> "SystemRole": + file_path = cls.storage / f"{name}.json" + if not file_path.exists(): + raise BadArgumentUsage(f'Role "{name}" not found.') + return cls(**json.loads(file_path.read_text())) + + @classmethod + @option_callback + def create(cls, name: str) -> None: + role = typer.prompt("Enter role description") + expecting = typer.prompt( + "Enter expecting result, e.g. answer, code, shell command, etc." 
+ ) + role = cls(name, role, expecting) + role.save() + + @classmethod + @option_callback + def list(cls, _value: str) -> None: + if not cls.storage.exists(): + return + # Get all files in the folder. + files = cls.storage.glob("*") + # Sort files by last modification time in ascending order. + for path in sorted(files, key=lambda f: f.stat().st_mtime): + typer.echo(path) + + @classmethod + @option_callback + def show(cls, name: str) -> None: + typer.echo(cls.get(name).role) + + @property + def exists(self) -> bool: + return self.file_path.exists() + + @property + def system_message(self) -> Dict[str, str]: + return {"role": "system", "content": self.role} + + @property + def file_path(self) -> Path: + return self.storage / f"{self.name}.json" + + def save(self) -> None: + if self.exists: + typer.confirm( + f'Role "{self.name}" already exists, overwrite it?', + abort=True, + ) + self.file_path.write_text(json.dumps(self.__dict__), encoding="utf-8") + + def delete(self) -> None: + if self.exists: + typer.confirm( + f'Role "{self.name}" exist, delete it?', + abort=True, + ) + self.file_path.unlink() + + def make_prompt(self, request: str, initial: bool) -> str: + if initial: + prompt = PROMPT_TEMPLATE.format( + name=self.name, + role=self.role, + request=request, + expecting=self.expecting, + ) + else: + prompt = f"{request}\n{self.expecting}:" + + return prompt + + def same_role(self, initial_message: str) -> bool: + if not initial_message: + return False + return True if f"Role name: {self.name}" in initial_message else False + + +class DefaultRoles(Enum): + DEFAULT = "default" + SHELL = "shell" + CODE = "code" + + @classmethod + def get(cls, shell: bool, code: bool) -> SystemRole: + if shell: + return SystemRole.get(DefaultRoles.SHELL.value) + if code: + return SystemRole.get(DefaultRoles.CODE.value) + return SystemRole.get(DefaultRoles.DEFAULT.value) + + +SystemRole.create_defaults() diff --git a/sgpt/utils.py b/sgpt/utils.py index 02a5a7c3..89ebf3e8 100644 --- 
a/sgpt/utils.py +++ b/sgpt/utils.py @@ -3,7 +3,9 @@ import shlex from enum import Enum from tempfile import NamedTemporaryFile +from typing import Any, Callable +import typer from click import BadParameter @@ -13,20 +15,6 @@ class ModelOptions(str, Enum): GPT4_32K = "gpt-4-32k" -class CompletionModes(Enum): - NORMAL = "normal" - SHELL = "shell" - CODE = "code" - - @classmethod - def get_mode(cls, shell: bool, code: bool) -> "CompletionModes": - if shell: - return CompletionModes.SHELL - if code: - return CompletionModes.CODE - return CompletionModes.NORMAL - - def get_edited_prompt() -> str: """ Opens the user's default editor to let them @@ -67,3 +55,13 @@ def run_command(command: str) -> None: full_command = f"{shell} -c {shlex.quote(command)}" os.system(full_command) + + +def option_callback(func: Callable) -> Callable: # type: ignore + def wrapper(cls: Any, value: str) -> None: + if not value: + return + func(cls, value) + raise typer.Exit() + + return wrapper diff --git a/tests/test_integration.py b/tests/test_integration.py index a270010a..f468de2d 100644 --- a/tests/test_integration.py +++ b/tests/test_integration.py @@ -19,8 +19,11 @@ import typer from typer.testing import CliRunner -from sgpt import OpenAIClient, cfg, main +from sgpt.app import main +from sgpt.client import OpenAIClient +from sgpt.config import cfg from sgpt.handlers.handler import Handler +from sgpt.role import SystemRole runner = CliRunner() app = typer.Typer() @@ -62,7 +65,7 @@ def test_shell(self): def test_code(self): """ - This test will request from ChatGPT a python code to make CLI app, + This test will request from OpenAI API a python code to make CLI app, which will be written to a temp file, and then it will be executed in shell with two positional int arguments. As the output we are expecting the result of multiplying them. 
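The new `option_callback` decorator in `sgpt/utils.py` above is what lets `--create-role`, `--show-role`, and `--list-roles` act as standalone subcommand-like options: the wrapped callback runs only when a value is passed, then stops further CLI processing. A stdlib-only sketch of the pattern, with `SystemExit` standing in for the `typer.Exit` the real code raises:

```python
from typing import Any, Callable

def option_callback(func: Callable) -> Callable:
    # Turn a classmethod body into a CLI option callback: do nothing when
    # the option was not passed, otherwise run it and stop the program.
    def wrapper(cls: Any, value: str) -> None:
        if not value:
            return  # Option absent; let the main command proceed.
        func(cls, value)
        raise SystemExit()  # typer.Exit() in the real code.
    return wrapper

class Roles:
    @classmethod
    @option_callback
    def show(cls, name: str) -> None:
        print(f"showing role {name}")

# No value: a no-op. A value: runs the body, then exits.
Roles.show("")
try:
    Roles.show("json")
except SystemExit:
    print("exited")
```

This keeps `main()` free of per-option early-return logic, since Typer invokes the callbacks during argument parsing.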
@@ -164,7 +167,7 @@ def test_chat_code(self): assert result.exit_code == 2 def test_list_chat(self): - result = runner.invoke(app, ["--list-chat"]) + result = runner.invoke(app, ["--list-chats"]) assert result.exit_code == 0 assert "test_" in result.stdout @@ -307,16 +310,21 @@ def test_model_option(self, mocked_get_completion): } result = runner.invoke(app, self.get_arguments(**dict_arguments)) mocked_get_completion.assert_called_once_with( - ANY, "gpt-4", 0.1, 1.0, caching=False + messages=ANY, + model="gpt-4", + temperature=0.1, + top_probability=1.0, + caching=False, ) assert result.exit_code == 0 def test_color_output(self): color = cfg.get("DEFAULT_COLOR") - handler = Handler(OpenAIClient("test", "test")) + role = SystemRole.get("default") + handler = Handler(OpenAIClient("test", "test"), role=role) assert handler.color == color os.environ["DEFAULT_COLOR"] = "red" - handler = Handler(OpenAIClient("test", "test")) + handler = Handler(OpenAIClient("test", "test"), role=role) assert handler.color == "red" def test_simple_stdin(self): @@ -331,3 +339,56 @@ def test_shell_stdin_with_prompt(self): stdin = "What is in current folder\n" result = runner.invoke(app, self.get_arguments(**dict_arguments), input=stdin) assert result.stdout == "ls | sort\n" + + def test_role(self): + test_role = Path(cfg.get("ROLE_STORAGE_PATH")) / "test_json.json" + test_role.unlink(missing_ok=True) + dict_arguments = { + "prompt": "test", + "--create-role": "test_json", + } + input = "You are a JSON generator, return only JSON as response.\n" "json\n" + result = runner.invoke(app, self.get_arguments(**dict_arguments), input=input) + assert result.exit_code == 0 + + dict_arguments = { + "prompt": "test", + "--list-roles": True, + } + result = runner.invoke(app, self.get_arguments(**dict_arguments)) + assert result.exit_code == 0 + assert "test_json" in result.stdout + + dict_arguments = { + "prompt": "test", + "--show-role": "test_json", + } + result = runner.invoke(app, 
self.get_arguments(**dict_arguments)) + assert result.exit_code == 0 + assert "You are a JSON generator" in result.stdout + + # Test with command line argument prompt. + dict_arguments = { + "prompt": "random username, password, email", + "--role": "test_json", + } + result = runner.invoke(app, self.get_arguments(**dict_arguments)) + assert result.exit_code == 0 + generated_json = json.loads(result.stdout) + assert "username" in generated_json + assert "password" in generated_json + assert "email" in generated_json + + # Test with stdin prompt. + dict_arguments = { + "prompt": "", + "--role": "test_json", + } + stdin = "random username, password, email" + result = runner.invoke(app, self.get_arguments(**dict_arguments), input=stdin) + assert result.exit_code == 0 + generated_json = json.loads(result.stdout) + assert "username" in generated_json + assert "password" in generated_json + assert "email" in generated_json + test_role.unlink(missing_ok=True) diff --git a/tests/test_unit.py b/tests/test_unit.py index 462acc59..888305c0 100644 --- a/tests/test_unit.py +++ b/tests/test_unit.py @@ -4,7 +4,7 @@ import requests import requests_mock -from sgpt import OpenAIClient +from sgpt.client import OpenAIClient class TestMain(unittest.TestCase):
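Taken together, the handler refactor in this diff replaces `CompletionModes` with `SystemRole` objects and adds a `make_messages` step. A simplified sketch of that step (condensed from the `make_messages` methods in `sgpt/handlers`; the real code also folds the role text into the prompt via `make_prompt` when `SYSTEM_ROLES` is disabled):

```python
from typing import Dict, List

def make_messages(role_text: str, prompt: str, system_roles: bool) -> List[Dict[str, str]]:
    # With SYSTEM_ROLES enabled, the role travels as a separate "system"
    # message; otherwise only the user message is sent.
    messages: List[Dict[str, str]] = []
    if system_roles:
        messages.append({"role": "system", "content": role_text})
    messages.append({"role": "user", "content": prompt})
    return messages

single = make_messages("You are ShellGPT.", "list files", system_roles=False)
pair = make_messages("You are ShellGPT.", "list files", system_roles=True)
print(len(single), len(pair))  # prints "1 2"
```

This is also why `ChatHandler.initial_message` skips index 0 when `SYSTEM_ROLES` is `"true"`: the first stored message is then the system message, not the user's.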