
feat(backend): add custom prompt #885

Merged
merged 1 commit on Aug 7, 2023

Conversation

mamadoudicko (Contributor)

Description

Activate custom prompt feature (backend)

@mamadoudicko temporarily deployed to preview on August 7, 2023 14:30 with GitHub Actions (inactive)

vercel bot commented Aug 7, 2023

The latest updates on your projects:

Name      Status       Updated (UTC)
docs      🔄 Building   Aug 7, 2023 2:30pm
quivrapp  🔄 Building   Aug 7, 2023 2:30pm


github-actions bot commented Aug 7, 2023

Risk Level 2 - backend/core/llm/qa_base.py

  1. The get_prompt method should handle the case where brain.prompt_id is set but get_prompt_by_id(brain.prompt_id) returns None; as written, accessing brain_prompt_object.content would then raise an AttributeError. Add a check that brain_prompt_object is not None before reading its content attribute:
if brain and brain.prompt_id:
    brain_prompt_object = get_prompt_by_id(brain.prompt_id)
    # get_prompt_by_id may return None even when prompt_id is set
    # (e.g. the prompt was deleted), so guard before reading .content
    if brain_prompt_object:
        brain_prompt = brain_prompt_object.content
  2. The generate_stream method is long and complex. Consider breaking it into smaller, more focused functions to improve the readability and maintainability of the code; one possible decomposition is sketched below.
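
A minimal sketch of such a split (every helper name below is hypothetical, not taken from the PR):

# Hypothetical decomposition: each stage of the monolithic method
# becomes one small, testable unit. All names are illustrative.
class QABaseSketch:
    def _build_transformed_history(self, chat_id):
        """Fetch and format prior messages for the model."""
        ...

    def _create_streamed_entry(self, chat_id, question):
        """Persist a placeholder chat entry to stream into."""
        ...

    async def _stream_llm_response(self, history, question):
        """Yield tokens from the model as they arrive."""
        ...
        yield  # placeholder so this is an async generator

    def _serialize_chunk(self, entry, token):
        """Turn one token into the JSON chunk sent to the client."""
        ...

    def _finalize_streamed_entry(self, entry):
        """Store the fully assembled answer."""
        ...

    async def generate_stream(self, chat_id, question):
        history = self._build_transformed_history(chat_id)
        entry = self._create_streamed_entry(chat_id, question)
        async for token in self._stream_llm_response(history, question):
            yield self._serialize_chunk(entry, token)
        self._finalize_streamed_entry(entry)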

  3. The generate_stream method uses asyncio.create_task to start a background task, but it does not appear to cancel that task when an error occurs or when the parent task is itself cancelled. Add error handling and cancellation logic so the task is always cleaned up; see the sketch below.
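
One common cleanup pattern, as a sketch (stream_with_background_task and its arguments are illustrative names, not the PR's API):

import asyncio

async def stream_with_background_task(background_coro, chunk_source):
    # Sketch: tie the lifetime of a create_task() background task to this
    # generator, so the task is cancelled and awaited on any exit path.
    task = asyncio.create_task(background_coro)
    try:
        async for chunk in chunk_source():
            yield chunk
    finally:
        if not task.done():
            task.cancel()          # an error or parent cancellation lands here
        try:
            await task             # reap the task and surface its errors
        except asyncio.CancelledError:
            pass                   # expected after cancel()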

  4. The generate_stream method uses json.dumps to serialize streamed_chat_history.to_dict() to a JSON formatted string, but it does not handle non-serializable values in that dict. Either ensure to_dict() returns only JSON-serializable types or add a fallback, as sketched below.
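
A minimal fallback, assuming to_dict() may contain values such as datetime or UUID objects:

import json

def safe_dumps(payload: dict) -> str:
    # Sketch: coerce values json cannot encode natively (datetime, UUID, ...)
    # to strings, instead of letting TypeError escape mid-stream.
    try:
        return json.dumps(payload)
    except TypeError:
        return json.dumps(payload, default=str)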


🐛💡🔧


Powered by Code Review GPT

StanGirard pushed a commit that referenced this pull request Sep 12, 2023