Enhanced Task Processing #333
Integration of Decision Waypoints for Enhanced Task Evaluation and User Interaction

To further enhance the system's capability to manage complex and high-stakes tasks, decision waypoints are integrated into the task processing workflow. These waypoints let the system assess the complexity and importance of each sub-action item based on the specific use case and relevant best practices. When a sub-action item involves critical design choices with significant consequences, or when instructions are ambiguous, the system proactively engages the user to determine the appropriate course of action. This keeps the system both autonomous and aligned with user intentions, especially in scenarios that require informed decision-making.

Purpose and Advantages
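All of the snippets in this issue call a `run_ollama_prompt` helper that is never defined here. As a minimal sketch, it could wrap Ollama's local HTTP `/api/generate` endpoint; the model name `llama3` and the endpoint URL below are assumptions, not part of the original proposal:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint
MODEL_NAME = "llama3"  # assumed model; substitute whatever model is pulled locally

def build_payload(prompt, model=MODEL_NAME):
    """Construct the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def run_ollama_prompt(prompt):
    """Send a prompt to a local Ollama server and return the completion text."""
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(OLLAMA_URL, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

With `stream=False`, the server returns a single JSON object whose `response` field holds the full completion, which keeps the helper trivial.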
Implementation Framework

Default Querying of Best Practices

By default, the system queries the LLM for industry best practices relevant to each sub-action item. This ensures that task execution adheres to established standards, promoting consistency and excellence.

Implementation:

```python
def get_best_practices(description):
    prompt = f"""
    Identify and provide industry best practices for performing the following task effectively and efficiently.

    Task:
    {description}

    Best Practices:
    """
    best_practices = run_ollama_prompt(prompt)
    return best_practices
```

Complexity and Importance Evaluation Based on Best Practices

The system evaluates each sub-action item's complexity and importance by analyzing both the task description and the associated best practices. This comprehensive assessment ensures a thorough understanding of each sub-action item's significance and potential impact.

Implementation:

```python
COMPLEXITY_THRESHOLD = 7  # Scale of 1 to 10
IMPORTANCE_THRESHOLD = 7  # Scale of 1 to 10

def evaluate_complexity_and_importance(description, best_practices):
    prompt = f"""
    Based on the following task and its best practices, evaluate the task's complexity and importance on a scale of 1 to 10.

    Task:
    {description}

    Best Practices:
    {best_practices}

    Response:
    - Complexity (1-10):
    - Importance (1-10):
    """
    response = run_ollama_prompt(prompt)
    complexity = extract_value(response, "Complexity")
    importance = extract_value(response, "Importance")
    return complexity, importance
```
```python
import re  # required by extract_value

def extract_value(response, label):
    match = re.search(f"{label} \\(1-10\\):\\s*(\\d+)", response)
    if match:
        return int(match.group(1))
    else:
        return 0  # Default value if not found
```

Decision Waypoints and User Consultation

When a sub-action item's complexity or importance exceeds predefined thresholds, the system assesses whether to proceed autonomously or seek user input. In cases where best practices indicate potential risks or critical design choices, the system prompts the user to decide the course of action, ensuring informed and deliberate outcomes.

Implementation:

```python
def should_consider_user_input(complexity, importance, best_practices):
    risk_factors = analyze_risks(best_practices)
    return (complexity >= COMPLEXITY_THRESHOLD
            or importance >= IMPORTANCE_THRESHOLD
            or risk_factors)
```
```python
def analyze_risks(best_practices):
    prompt = f"""
    Analyze the following best practices and identify any potential risks or critical design choices that may have significant consequences if not properly addressed.

    Best Practices:
    {best_practices}

    Risks and Critical Design Choices:
    """
    risks = run_ollama_prompt(prompt)
    return bool(risks.strip())  # Returns True if risks are identified
```
```python
def seek_user_decision(description, best_practices):
    prompt = f"""
    The task below has been evaluated for complexity and importance based on the provided best practices. It may involve significant risks or critical design choices with substantial consequences.

    Task:
    {description}

    Best Practices:
    {best_practices}

    Please choose how to proceed:
    1. Provide additional instructions or preferences to guide the task execution.
    2. Allow the system to proceed based on the current best practices.
    3. Abort the task due to identified risks.

    Your Decision:
    """
    user_decision = get_user_input(prompt)
    return user_decision

def get_user_input(prompt):
    # Placeholder for user interaction mechanism
    print(prompt)
    decision = input("Enter your choice (1/2/3): ").strip()
    return decision
```

Handling User Decisions

Based on the user's input at decision waypoints, the system adapts its processing strategy to align with the user's preferences and the sub-action item's requirements.

Implementation:

```python
def handle_user_decision(decision, task_node):
    if decision == "1":
        clarification = get_user_clarification(task_node.description)
        task_node.description += "\n" + clarification
        task_node.context = get_required_context(task_node.description)
    elif decision == "2":
        # Proceed with best practices without additional input
        task_node.context = get_best_practices(task_node.description)
    elif decision == "3":
        # Abort the task processing
        task_node.result = "Task aborted by user due to identified risks."
    else:
        # Handle invalid input
        print("Invalid choice. Aborting task for safety.")
        task_node.result = "Task aborted due to invalid user input."
```

Dynamic Picky Level Determination

Instead of using an adjustable picky variable set by the user, the system dynamically determines its sensitivity in seeking user input based on the context and analysis of each task. This adaptive approach leverages the LLM's capabilities to assess when user intervention is most beneficial, ensuring a balance between system autonomy and necessary oversight.

Implementation:

```python
def determine_picky_level(complexity, importance, best_practices):
    prompt = f"""
    Given the task's complexity of {complexity} and importance of {importance}, along with the following best practices, determine the appropriate picky level on a scale of 1 to 10. A higher picky level means the system is more inclined to seek user intervention.

    Task Complexity: {complexity}
    Task Importance: {importance}

    Best Practices:
    {best_practices}

    Determine the Picky Level (1-10):
    """
    picky_level = run_ollama_prompt(prompt).strip()
    try:
        picky_level = int(picky_level)
        picky_level = max(1, min(picky_level, 10))  # Ensure level is between 1 and 10
    except ValueError:
        picky_level = 5  # Default value if parsing fails
    return picky_level
```
```python
def should_seek_user_intervention(complexity, importance, best_practices):
    base_condition = should_consider_user_input(complexity, importance, best_practices)
    if not base_condition:
        return False
    picky_level = determine_picky_level(complexity, importance, best_practices)
    # Query the LLM to determine the necessity based on picky level
    if 4 <= picky_level < 7:
        prompt = f"""
        Given the task's complexity of {complexity} and importance of {importance}, and the determined picky level of {picky_level}, should the system seek user intervention? Respond with "Yes" or "No".

        Task Complexity: {complexity}
        Task Importance: {importance}
        Picky Level: {picky_level}

        Response:
        """
        decision = run_ollama_prompt(prompt).strip().lower()
        return decision == "yes"
    elif picky_level >= 7:
        return True
    elif picky_level < 4:
        return False
    return False
```

Integration of Decision Waypoints into Task Processing

The core process_task function is extended so that every sub-action item passes through a decision waypoint before it is decomposed or executed.

Implementation:

```python
def process_task(task_node):
    if task_node.depth == 0:
        # For the main task, retrieve action items without evaluating complexity and importance
        task_node.context = get_required_context(task_node.description)
        action_items = get_action_items(task_node.description, task_node.context)
    else:
        # For sub-action items, retrieve best practices and evaluate complexity and importance
        best_practices = get_best_practices(task_node.description)
        complexity, importance = evaluate_complexity_and_importance(task_node.description, best_practices)
        if should_seek_user_intervention(complexity, importance, best_practices):
            decision = seek_user_decision(task_node.description, best_practices)
            handle_user_decision(decision, task_node)
            if task_node.result.startswith("Task aborted"):
                return  # Abort further processing for this sub-action item
        else:
            task_node.context = get_best_practices(task_node.description)
    if task_node.result == "":
        if task_node.depth >= MAX_DEPTH:
            task_node.result = execute_task(task_node.description, task_node.context)
            return
        action_items = get_action_items(task_node.description, task_node.context)
        if action_items:
            interconnections = highlight_interconnections(action_items, task_node.context)
            task_node.interconnections = interconnections
            for action_item_desc in action_items:
                sub_task_node = TaskNode(action_item_desc, depth=task_node.depth + 1)
                task_node.sub_tasks.append(sub_task_node)
                process_task(sub_task_node)
            task_node.result = integrate_action_items(task_node)
        else:
            task_node.result = execute_task(task_node.description, task_node.context)
```

Workflow Summary
Example Scenario with Enhanced Decision Waypoints

User Input:

```python
user_input = """
Develop a scalable e-commerce platform with integrated payment and inventory management systems.
"""
```

Processing Steps: the main task is decomposed into sub-action items; each sub-action item is scored for complexity and importance against its best practices, and the user is consulted at any waypoint that crosses the thresholds or surfaces risks.

Conclusion: By allowing the LLM to dynamically determine the picky level, the system enhances its adaptability and responsiveness to varying task complexities and importance levels. This approach ensures that user intervention is sought judiciously, maintaining an optimal balance between system autonomy and necessary oversight. The integration of dynamically determined decision waypoints facilitates more intelligent and context-aware task processing, aligning execution strategies with both best practices and user-specific requirements.
Action Item Processing System Using Ollama
Self-thinking Thesis with Code Implementation
Introduction
Optimizing task processing is crucial in artificial intelligence and task automation for achieving efficiency and accuracy. This code example combines self-querying, context optimization, action item integration, adjustable thinking depth, and summarization that preserves essential details. Built on LLMs served through Ollama, the system dynamically enhances task management by intelligently decomposing tasks into action items, optimizing context, and integrating action items to produce coherent and concise outputs.
The system also highlights interconnections between action items and integrates auto-debugging and error-response back-checking, further improving efficiency and reliability.
System Components and Implementation
1. Self-Querying for Task Decomposition into Action Items
The system intelligently decomposes complex tasks into manageable action items by self-querying the language model.
Implementation:
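The implementation code for this section was not included above. As a hedged sketch, a self-querying decomposition helper might ask the model for a numbered list and parse it; the function name `get_action_items` matches the one used by `process_task`, but the prompt wording and the parsing regex below are assumptions:

```python
import re

def parse_numbered_list(text):
    """Extract items from a numbered list like '1. Do X' in the LLM's reply."""
    return [m.group(1).strip()
            for m in re.finditer(r"^\s*\d+[.)]\s*(.+)$", text, re.MULTILINE)]

def get_action_items(description, context):
    """Ask the LLM to decompose a task into a numbered list of action items.

    Assumes the run_ollama_prompt helper used throughout this document.
    """
    prompt = f"""
    Break the following task into a numbered list of concrete action items.

    Task:
    {description}

    Context:
    {context}

    Action Items:
    """
    return parse_numbered_list(run_ollama_prompt(prompt))
```

Returning an empty list for unparseable replies lets `process_task` fall through to direct execution rather than crashing on free-form model output.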
2. Context Optimization and Cleaning
By focusing solely on the essential information required for each action item, the system ensures that context is optimized and cleansed of irrelevant data.
Implementation:
3. Highlighting Interconnections Between Action Items
The system identifies dependencies or relationships between action items, ensuring their integration leads to coherent and reasonable overall solutions. Interconnections are highlighted to emphasize their importance.
Implementation:
4. Adjustable Thinking Depth with Automatic Simplification
Users can set the depth of recursive processing to balance detail and efficiency. The system automatically stops decomposing tasks if they are simple enough.
Implementation:
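The depth-control code was not included above. One possible sketch: `MAX_DEPTH` matches the constant used by `process_task`, while the word-count heuristic for "simple enough" is purely an assumption (a real implementation might instead ask the LLM whether decomposition is warranted):

```python
MAX_DEPTH = 3  # user-adjustable thinking depth (assumed default)

def is_simple_enough(description, word_limit=6):
    """Heuristic pre-check: very short task descriptions are executed
    directly instead of being decomposed further."""
    return len(description.split()) <= word_limit

def should_decompose(description, depth):
    """Decompose only while under the depth limit and the task is non-trivial."""
    return depth < MAX_DEPTH and not is_simple_enough(description)
```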
5. Comprehensive Final Responses with Summarization
The system summarizes final outputs to be concise while retaining all important information, ensuring comprehensive responses without losing essential details.
Implementation:
6. Task Representation with the TaskNode Class

The TaskNode class represents each task or action item, holding relevant data for processing.

Implementation:
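The class body was not shown above; a minimal sketch consistent with the fields `process_task` actually touches (`description`, `depth`, `context`, `result`, `interconnections`, `sub_tasks`), with the attribute defaults being assumptions:

```python
class TaskNode:
    """A node in the task tree; fields mirror those used in process_task."""

    def __init__(self, description, depth=0):
        self.description = description
        self.depth = depth              # recursion depth of this node
        self.context = ""               # optimized context for this task
        self.result = ""                # empty until executed or aborted
        self.interconnections = ""      # highlighted dependencies, if any
        self.sub_tasks = []             # child TaskNode objects
```

An empty-string `result` default matters: `process_task` uses `task_node.result == ""` to detect that a node still needs execution.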
7. Auto Debugging and Error-Response Backchecking
To enhance reliability, the system integrates auto-debugging by capturing errors that occur during the execution of code suggested by the LLM. When a task produces errors, the system captures the debug output and feeds it back into the LLM for analysis and suggestions on how to correct the issue. This iterative process provides automatic error-handling, feedback, and correction.
Implementation:
Capturing Code Execution Errors: The system runs the LLM-suggested code (e.g., Python code) and captures any runtime or compilation errors.
Feeding Errors Back to the LLM: When an error occurs, the system constructs a new prompt that provides the LLM with the code and the associated error message, asking for suggestions to fix the issue.
Iterative Debugging: After receiving corrections from the LLM, the system attempts to rerun the code with the suggested fixes. This creates an automated feedback loop that continuously refines the code until it executes successfully or reaches a terminal state.
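The three-step loop above can be sketched as follows. Here `fix_code` stands in for the LLM round-trip (constructing the code-plus-error prompt and returning corrected code), and `MAX_ATTEMPTS` is an assumed cap, not a value from the original proposal:

```python
import subprocess
import sys

MAX_ATTEMPTS = 3  # assumed cap on debugging iterations

def run_python(code):
    """Execute code in a subprocess, returning (ok, combined output)."""
    proc = subprocess.run([sys.executable, "-c", code],
                          capture_output=True, text=True, timeout=30)
    return proc.returncode == 0, proc.stdout + proc.stderr

def auto_debug(code, fix_code):
    """Re-run the code, feeding each error back through fix_code (the LLM
    call), until it succeeds or the attempt budget is exhausted."""
    for _ in range(MAX_ATTEMPTS):
        ok, output = run_python(code)
        if ok:
            return code, output
        code = fix_code(code, output)  # e.g. prompt the LLM with code + error
    return None, output  # terminal state: still failing
```

Capping the iterations keeps the feedback loop from cycling forever when the model repeatedly proposes non-working fixes.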
Considerations: the LLM-suggested code is executed through a single subprocess.run call, so runtime errors can be captured from its output without crashing the main process.

Example Usage
To illustrate the use case of this system and demonstrate how it operates in practice, let's consider a comprehensive coding example.
Scenario:
A software engineer wants to develop a secure web application that includes user authentication, data encryption, and a RESTful API for database interactions. The engineer inputs the following task description:
Example Usage Code:
Expected Output:
Explanation:
Action Items Decomposition: The system has broken down the complex primary task into manageable action items, such as implementing user authentication, data encryption, and developing a RESTful API.
Context Optimization: For each action item, the system has retrieved the necessary context to ensure relevant information is used during processing.
Interconnections Highlighted: The system has identified and highlighted interconnections between action items, emphasizing how they are related and dependent on each other.
Sub-Task Processing: Each action item is processed recursively, respecting the adjustable thinking depth, and further broken down if necessary.
Integration and Summarization: The results of the action items are integrated and summarized, providing concise yet comprehensive outputs.
Assessment: The system assesses the coherence and reasonability of the action items, ensuring that the final plan is consistent and covers all important aspects.
Auto Debugging and Error Handling: The implementation includes error checking during the execution of code, with the capability to automatically capture errors, feed them back to the LLM for debugging suggestions, and attempt corrections to resolve any issues that arise.
Demonstrated Use Case:
This example showcases how the system can handle complex coding tasks typical in software development. By automatically decomposing tasks into action items, optimizing context, highlighting interconnections, and integrating results, the system assists developers in generating detailed implementation plans that are both actionable and aligned with best practices. The inclusion of auto-debugging and error-response back-checking enhances reliability, reducing the manual input required to reach the same optimized coding result.
Appendix: Full Code Listing