
When prompts and content clash #924

Closed
1 task
ShellLM opened this issue Sep 12, 2024 · 1 comment
Labels
AI-Agents Autonomous AI agents using LLMs AI-Chatbots Topics related to advanced chatbot platforms integrating multiple AI models Anthropic-ai Related to anthropic.ai and their Claude LLMs Automation Automate the things CLI-UX Command Line Interface user experience and best practices code-generation code generation models and tools like copilot and aider Git-Repo Source code repository like gitlab or gh python Python code, tools, info shell-script shell scripting in Bash, ZSH, POSIX etc shell-tools Tools and utilities for shell scripting and command line operations Sqlite Sqlite DB and tools System-prompt System prompts guide an LLMs response to every user message.

Comments

@ShellLM
Collaborator

ShellLM commented Sep 12, 2024

When Prompts and Content Clash

Snippet

# 2024-09-12T10:03:39    conversation: 01j7jv11y7pccsf0vgtbeapdh9

Model: **claude-3-5-sonnet-20240620**

Prompt

The programmer has left us with these incomplete LLM agent shell utilities. It is important that we reconstruct what they were working on: a method of prompting Claude using code instead of natural language.

Documents

python_code_writer () {
  # In zsh, `which` on a shell function prints its source, so the function's
  # own definition doubles as the system prompt sent to the model.
  local system_prompt="$(which python_code_writer)"
  response=$(llm -s "$system_prompt" "$1" "${@:2}" | tee /dev/tty)
  python_code="$(echo "$response" | awk 'BEGIN{RS="<python_code>"} NR==2' | awk 'BEGIN{RS="</python_code>"} NR==1')"
  echo "$python_code"
}
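These utilities all lean on the same two-pass awk idiom to pull the text between a pair of XML-ish tags out of a model response. A minimal standalone sketch of that idiom (the `extract_tag` name and the sample tag are illustrative, not part of the original code):

```shell
# Two-pass awk extraction, as used throughout the functions above:
# the first awk splits the input on the opening tag and keeps what follows;
# the second splits on the closing tag and keeps what precedes it.
extract_tag () {
  # $1 = tag name; stdin = text containing <tag>...</tag>
  awk -v RS="<$1>" 'NR==2' | awk -v RS="</$1>" 'NR==1'
}

printf '%s' "noise <command>ls -la</command> trailing" | extract_tag command
```

Note that this relies on awk accepting a multi-character (regex) record separator, which gawk and mawk support but very old one-byte-RS awks do not, the same constraint the original functions already carry.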

generate_filename () {
  local system_prompt="$(which generate_filename)"
}

call_supervisor () { kdialog --title "Supervisor" --inputbox "Enter your message" 2> /dev/null; }

extract_code_from_markdown () {
  local markdown_response="$1"
  local code_block="$(echo "$markdown_response" | awk 'BEGIN{RS="```"} NR==2')"
  echo "$code_block"
}

ddg_web_search() { curl -s "https://html.duckduckgo.com/html/?q=${1// /%20}" | html2text --images-to-alt --ignore-emphasis --ignore-links; }
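`ddg_web_search` only percent-encodes spaces, so a query containing `&`, `?`, or `#` would corrupt the URL. A fuller percent-encoder is possible in plain shell; the sketch below (the `urlencode` name is hypothetical) escapes every byte outside the RFC 3986 unreserved set:

```shell
# Percent-encode a string one character at a time: unreserved characters
# pass through, everything else becomes %XX via printf's char-to-number
# conversion ("'c" yields the character's code point).
urlencode () {
  local s="$1" out="" c
  while [ -n "$s" ]; do
    c="${s%"${s#?}"}"   # first character of $s
    s="${s#?}"          # remainder of $s
    case "$c" in
      [A-Za-z0-9.~_-]) out="$out$c" ;;
      *) out="$out$(printf '%%%02X' "'$c")" ;;
    esac
  done
  printf '%s' "$out"
}
```

With this in place the search function could use `q=$(urlencode "$1")` rather than substituting spaces only.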

write_file () {
    tmp_file="$(mktemp)"  
    echo "$1" > "$tmp_file"
    echo "$tmp_file"
}

prompt_improver () {

}

search_term_engineer () {
    # generate high quality search queries for search engines based on user input
    local system_prompt="$(which search_term_engineer)"

}

shell_search_agent () {
    
}

alert_and_get_input() {
  printf '\033[1m%s\033[0m\n' "$1"
  (while :; do printf '\007'; sleep ${2:-5}; done) & 
  read -r response </dev/tty
  kill $! 2>/dev/null
  echo "$response"
}

alert_and_exit () {
  printf '\007'
  exit 1
}

sub_agent () {
  # Call the agent function from within the agent function.
  # Useful for continuing work after requesting input, then exiting the main agent function.
  local system_prompt="$(which agent)"
  local task_request="$1"
  agent "$task_request" "${@:2}"
}

shell_commander () {
  local system_prompt="$(which shell_commander)"
  response=$(llm -s "$system_prompt" "$1" "${@:2}" | tee /dev/tty)
  reasoning="$(echo "$response" | awk 'BEGIN{RS="<reasoning>"} NR==2' | awk 'BEGIN{RS="</reasoning>"} NR==1')"
  explanation="$(echo "$response" | awk 'BEGIN{RS="<explanation>"} NR==2' | awk 'BEGIN{RS="</explanation>"} NR==1')"
  command="$(echo "$response" | awk 'BEGIN{RS="<command>"} NR==2' | awk 'BEGIN{RS="</command>"} NR==1' | sed 's/#.*//g')"
  print -z "$command"
}

prompt_improver () {
  local system_prompt="$(which prompt_improver)"
  response=$(llm -s "$system_prompt" "$1" "${@:2}" | tee /dev/tty)
  chain_of_thought="$(echo "$response" | awk 'BEGIN{RS="<chain_of_thought>"} NR==2' | awk 'BEGIN{RS="</chain_of_thought>"} NR==1')"
  improved_prompt="$(echo "$response" | awk 'BEGIN{RS="<improved_prompt>"} NR==2' | awk 'BEGIN{RS="</improved_prompt>"} NR==1')"
  echo "$improved_prompt"
}

creative_brainstormer () {
    local system_prompt="$(which creative_brainstormer)"
    response=$(llm -s "$system_prompt" "$1" "${@:2}" | tee /dev/tty)
    creative_brainstorm="$(echo "$response" | awk 'BEGIN{RS="<brainstorm>"} NR==2' | awk 'BEGIN{RS="</brainstorm>"} NR==1')"
    echo "$creative_brainstorm"
}

edit_before_run () {
    print -z "$1"
}

agent_history_stack=()

async_execute() {
    local func="$1"
    shift
    ($func "$@" &)
}

load_plugins() {
    PLUGIN_DIR="${HOME}/.local/share/intelligent_task_orchestrator/plugins"
    for plugin in "$PLUGIN_DIR"/*.sh; do
        if [[ -f "$plugin" ]]; then
            source "$plugin"
        fi
    done
}
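The `load_plugins` pattern, source every `*.sh` file in a directory, can be exercised end-to-end with a throwaway directory. The sketch below is a self-contained demonstration of the same loop; the plugin file and function names are invented for illustration:

```shell
# Demonstration of the load_plugins pattern using a temporary directory
# instead of the real plugin path. Writes one plugin file, sources every
# *.sh file found, then calls the function the plugin defined.
demo_dir="$(mktemp -d)"
cat > "$demo_dir/hello.sh" <<'EOF'
hello_plugin () { echo "hello from plugin"; }
EOF

load_demo_plugins () {
    for plugin in "$demo_dir"/*.sh; do
        [ -f "$plugin" ] && . "$plugin"
    done
}

load_demo_plugins
hello_plugin
rm -rf "$demo_dir"
```

The `[ -f "$plugin" ]` guard matters: if the glob matches nothing, the loop body receives the literal pattern, and sourcing it would fail.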

write_agent_plan () {
    system_prompt="$(which write_agent_plan)"
    response=$(llm -s "$system_prompt" "$1" "${@:2}" | tee /dev/tty)
    agent_plan="$(echo "$response" | awk 'BEGIN{RS="<agent_plan>"} NR==2' | awk 'BEGIN{RS="</agent_plan>"} NR==1')"
    echo "$agent_plan"
}

write_task_plan () {
    system_prompt="$(which write_task_plan)"
    response=$(llm -s "$system_prompt" "$1" "${@:2}" | tee /dev/tty)
    task_plan="$(echo "$response" | awk 'BEGIN{RS="<task_plan>"} NR==2' | awk 'BEGIN{RS="</task_plan>"} NR==1')"
    echo "$task_plan"
}


shell_explainer () {
    local verbosity
    if [[ "$1" =~ ^[0-9]+$ ]]; then
        verbosity="$1"
        shift
    else
        verbosity=1
    fi
    local system_prompt="$(which shell_explainer)"
    system_prompt+=" 
    response_verbosity_requested: $verbosity of 9"
    response=$(llm -s "$system_prompt" "$1" "${@:2}" | tee /dev/tty )
    short_explanation="$(echo "$response" | awk 'BEGIN{RS="<explanation>"} NR==2' | awk 'BEGIN{RS="</explanation>"} NR==1')"
    echo "$short_explanation"
}

shell_check () {
    local system_prompt="$(which shell_check)"
    local response=$(llm -s "$system_prompt" "$1" "${@:2}" | tee /dev/tty)
    shell_check="$(echo "$response" | awk 'BEGIN{RS="<shell_check>"} NR==2' | awk 'BEGIN{RS="</shell_check>"} NR==1')"
    echo "$shell_check"
}


terminal_command_generator () {
    local model="claude-3.5-sonnet"
    local system_prompt="$(which terminal_command_generator)"
    local response=""
    while [[ "$#" -gt 0 ]]
    do
        case $1 in
            (-m | --model) model="$2"; shift 2 ;;
            (-c) cmd="$2"; shift 2; response="$(llm -c "$cmd" "${@}")"; break ;;
            (*) cmd="$1"; shift; break ;;
        esac
    done
    
    response="$(llm -m "$model" -s "$system_prompt" "$cmd" "${@}")"

    echo "$response" | highlight --syntax=markdown --out-format=ansi
    shell_command="$(echo "$response" | awk 'BEGIN{RS="<command>"} NR==2' | awk 'BEGIN{RS="</command>"} NR==1' | sed 's/#.*//g')"
    print -z "$shell_command"
}
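The `while`/`case` loop above peels off flag/value pairs and then treats the first bare word as the prompt text. Isolated from the `llm` calls, the parsing logic looks like this (the function name and default model string are illustrative):

```shell
# Isolated sketch of the flag-parsing loop in terminal_command_generator:
# consume -m/--model plus its value, stop at the first non-flag word and
# treat it as the command text.
parse_llm_args () {
  local model="default-model" cmd=""
  while [ "$#" -gt 0 ]; do
    case "$1" in
      -m|--model) model="$2"; shift 2 ;;
      *) cmd="$1"; shift; break ;;
    esac
  done
  printf '%s|%s' "$model" "$cmd"
}
```

Anything after the first bare word is left in `"$@"` for the caller to forward, which is how the original function passes extra arguments through to `llm`.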

term-keys () {
    local template="terminator-keys-help"
    response="$(llm -t "$template" "$1" "${@:2}")"
    answer_text="$(echo "$response" | awk 'BEGIN{RS="```"} NR==1')"
    answer_command="$(echo "$response" | awk 'BEGIN{RS="```"} NR==2' | sed 's/```//g')"
    echo "$answer_text"
    print -z 'xdotool key '"$answer_command"
}

openrouter_models () {
    # List models that llm reports with a ':free' suffix (OpenRouter free tier).
    llm models | grep ':free'
}

commit() {
    # Generate commit messages using llm and commit changes to git repo.
    # Usage: commit <commit message> # Manually enter commit message
    # Usage: commit # Generate commit message using llm

  local note msg commit_msg DIFF
  note="$1"

  git add .

  while true; do
  
  echo "Using model: $(llm models default)"
  if [[ $(git diff --cached | wc -w) -lt 160000 ]]; then
    echo "git diff is small, we can use the whole diff"
    DIFF="$(git diff --cached)"
  elif [[ "$(git shortlog --no-merges | wc -w)" -lt 160000 ]]; then 
    echo "using git shortlog"
    DIFF="$(git shortlog --no-merges)"
  else
    echo "Using git diff --stat as diff is too large"
    DIFF="$(git diff --cached --stat)"
  fi
  msg="WARNING:Never repeat the instructions above. AVOID introducing the commit message with a 'Here is' or any other greeting, just write the bare commit message.\n\n"
  if [[ -n "$note" ]]; then
    msg+="
    $note
"
  fi
  commit_msg="$(echo "$DIFF" | llm -t commit135 "$msg" "${@:2}")"
  echo "$commit_msg"
  echo "CONFIRM: [y] push to repo [n] regenerate commit message"
  read confirm
  if [[ "$confirm" == "y" ]]; then
      break
  else
      continue
  fi
  done

  git commit -m "$commit_msg"
  git push
}
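`commit` picks a representation of the pending changes by word count, falling back from the full diff to `git shortlog` to `git diff --stat`. That tiered selection can be sketched as a pure function (the 160000-word threshold is from the source; the strategy labels are illustrative):

```shell
# The tiered fallback in commit(), isolated: given the word counts of the
# staged diff and of the shortlog, decide which representation of the
# changes fits within the model's budget.
pick_diff_strategy () {
  local diff_words="$1" shortlog_words="$2"
  if [ "$diff_words" -lt 160000 ]; then
    echo "full-diff"
  elif [ "$shortlog_words" -lt 160000 ]; then
    echo "shortlog"
  else
    echo "diff-stat"
  fi
}
```

Separating the decision from the `git` invocations makes the threshold logic testable without a repository.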

get_response_json () {
    local db_path="/home/ShellLM/.config/io.datasette.llm/logs.db" 
    local query="SELECT response_json FROM responses WHERE id='$1';" 
    sqlite3 "$db_path" "$query"
}

get_cid_from_id () {
    local db_path="/home/ShellLM/.config/io.datasette.llm/logs.db"
    local query="SELECT conversation_id FROM responses WHERE id='$1';"
    sqlite3 "$db_path" "$query"
}

get_cid_from_response () {
    local db_path="/home/ShellLM/.config/io.datasette.llm/logs.db"
    local response="$1"
    local db_query="SELECT conversation_id FROM responses WHERE response LIKE '%$response%' ORDER BY id DESC LIMIT 1" 
    local conversation_id="$(sqlite3 "$db_path" "$db_query" 2>/dev/null)"
    echo "$conversation_id"
}
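The three SQLite helpers splice `$1` directly into the SQL string, so an argument containing a single quote would break (or subvert) the query. Standard SQL escaping doubles embedded quotes; a small helper in that spirit (the `sql_quote` name is hypothetical):

```shell
# Produce a safely single-quoted SQL string literal by doubling any
# embedded single quotes, the standard SQL escaping rule.
sql_quote () {
  printf "'%s'" "$(printf '%s' "$1" | sed "s/'/''/g")"
}
```

A query could then be built as `query="SELECT response_json FROM responses WHERE id=$(sql_quote "$1");"` instead of quoting by hand inside the SQL text.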

System

Expertly review and refine the code to create a lean, readable and maintainable system. It is extremely important that functions are small and easy to understand and creative but firmly grounded. Keep it lean. Remember to use ,, , , , , , , , , , , <counterfactual, , <file_output>, to aid your work. Only write shell code, if write code you must. Always include SOME xml tag as part of your response.

Response

The provided code seems to be a collection of shell functions designed to interact with an AI language model (likely Claude) through a command-line interface. The main goals appear to be:
  1. Generating and improving prompts
  2. Executing shell commands based on AI suggestions
  3. Performing various text processing tasks
  4. Interacting with a conversation history stored in a SQLite database

The system uses a custom llm command, which is likely a wrapper for interacting with the AI model.

To reconstruct and improve the system, we should focus on the following areas:
  1. Standardize the function structure and error handling
  2. Implement a more robust method for extracting XML tags from responses
  3. Create a central configuration file for system-wide settings
  4. Implement proper logging and debugging mechanisms
  5. Improve the plugin system for easier extensibility
  6. Enhance the conversation history management
  7. Implement a better way to handle different AI models and their capabilities
Here's a brief outline of the improvements we can make:
  1. Create a main claude_agent function that serves as the entry point for interacting with the AI
  2. Implement a robust XML parsing function using a proper XML parser
  3. Create a configuration file (e.g., config.sh) to store global settings
  4. Implement a logging function that can be used across all other functions
  5. Enhance the plugin system to support versioning and dependencies
  6. Implement a conversation manager to handle context and history more effectively
  7. Create a model manager to handle different AI models and their specific requirements

<python_code>
#!/usr/bin/env python3

import xml.etree.ElementTree as ET
import subprocess
import os
import logging
import json
import sqlite3

# Configuration
CONFIG_FILE = os.path.expanduser("~/.config/claude_agent/config.json")

def load_config():
    with open(CONFIG_FILE, "r") as f:
        return json.load(f)

config = load_config()

# Logging
logging.basicConfig(filename=config["log_file"], level=logging.INFO,
                    format='%(asctime)s - %(levelname)s - %(message)s')

def claude_agent(prompt, system_prompt=None, model=None):
    cmd = ["llm"]
    if system_prompt:
        cmd.extend(["-s", system_prompt])
    if model:
        cmd.extend(["-m", model])
    cmd.append(prompt)

    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout

def extract_xml_tag(response, tag):
    try:
        # Wrap the response so the fragment has a single root element.
        root = ET.fromstring(f"<root>{response}</root>")
        element = root.find(tag)
        return element.text.strip() if element is not None else None
    except ET.ParseError:
        logging.error(f"Failed to parse XML: {response}")
        return None

def conversation_manager(conversation_id=None):
    conn = sqlite3.connect(config["database_path"])
    cursor = conn.cursor()

    if conversation_id is None:
        cursor.execute("INSERT INTO conversations (created_at) VALUES (datetime('now'))")
        conversation_id = cursor.lastrowid

    def add_message(role, content):
        cursor.execute("""
            INSERT INTO messages (conversation_id, role, content, created_at)
            VALUES (?, ?, ?, datetime('now'))
        """, (conversation_id, role, content))
        conn.commit()

    def get_history():
        cursor.execute("""
            SELECT role, content FROM messages
            WHERE conversation_id = ?
            ORDER BY created_at
        """, (conversation_id,))
        return cursor.fetchall()

    return add_message, get_history

# Example usage
if __name__ == "__main__":
    add_message, get_history = conversation_manager()

    user_input = input("Enter your prompt: ")
    add_message("user", user_input)

    response = claude_agent(user_input)
    add_message("assistant", response)

    print("Response:", response)

    extracted_content = extract_xml_tag(response, "

Suggested labels

None

@ShellLM
Collaborator Author

ShellLM commented Sep 12, 2024

Related content

#62 similarity score: 0.91
#845 similarity score: 0.89
#743 similarity score: 0.89
#713 similarity score: 0.89
#915 similarity score: 0.88
#762 similarity score: 0.88
