
Feature/add streaming #18

Open · wants to merge 4 commits into base: main
Conversation

shouryashashank (Contributor) commented Nov 27, 2024

Summary by CodeRabbit

  • New Features

    • Introduced optional attributes for message roles and content, enhancing flexibility in message handling.
    • Added a streaming capability for chat completions, allowing real-time responses.
  • Bug Fixes

    • Adjusted endpoint logic to handle streaming requests properly.
  • Documentation

    • Updated method signatures to reflect new functionality and parameters.
  • Chores

    • Updated the version requirement for the predacons package in dependencies.

coderabbitai bot commented Nov 27, 2024

Walkthrough

The changes in this pull request involve modifications to the Message, Choice, and Conversation classes in chat_class.py, making certain attributes optional and adding dynamic initialization. In main.py, the chat_completions endpoint is updated to handle streaming responses based on a request parameter. Additionally, a new asynchronous function completions_stream is introduced in service/chat.py to facilitate streaming chat completions. These updates enhance the flexibility and functionality of the chat service.

Changes

  • chat_class.py: Changed role and content in Message from mandatory to optional str. Changed message in Choice from mandatory to optional Message and added a new optional delta attribute. Added a custom __init__ method in Conversation for dynamic initialization.
  • main.py: Updated the chat_completions endpoint to handle streaming responses based on a "stream" key in the request body.
  • service/chat.py: Added an asynchronous function completions_stream for streaming chat completions, using predacons.chat_generate with streaming enabled.
  • requirements.txt: Updated the predacons package version requirement from >=0.0.126 to >=0.0.128.

A minimal client-side sketch of the new streaming flow follows the list.
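The sketch below is illustrative only: the host, deployment name, api-key header, and api-version value are placeholders rather than anything defined in this PR; only the stream flag and the data:/[DONE] SSE framing come from the changes above.

# Illustrative streaming client; URL, key, and header names are placeholders.
import json
import requests

url = "http://localhost:8000/openai/deployments/my-model/chat/completions"
body = {
    "stream": True,  # the new flag the updated endpoint checks
    "messages": [{"role": "user", "content": "Hello!"}],
}

with requests.post(
    url,
    json=body,
    params={"api-version": "2024-02-01"},  # placeholder version string
    headers={"api-key": "<placeholder>"},
    stream=True,
) as resp:
    for line in resp.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data: "):
            continue  # skip blank SSE keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel emitted by completions_stream
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        print(delta.get("content") or "", end="", flush=True)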

Possibly related PRs

  • Feature/upgrade #16: its changes in main.py to the chat completions endpoint and the new embeddings endpoint relate to the overall structure of the chat service, but they do not touch the same classes or attributes as this PR's changes in chat_class.py.

Poem

🐇 In the world of chat, where words take flight,
Roles and contents now shine so bright.
Streaming responses, a river of thought,
Conversations flow, with new ideas caught.
With choices and deltas, we dance and we play,
A rabbit's delight in the chat's new ballet! 🌟


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed between fa3ae7e and 776892a.

📒 Files selected for processing (1)
  • requirements.txt (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • requirements.txt


coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Outside diff range and nitpick comments (7)
service/chat.py (4)

92-92: Use capitalized environment variable for SYSTEM_FINGERPRINT.

Environment variable names are conventionally capitalized. Consider modifying the variable name to enhance readability and maintain consistency.

Apply this diff:

-system_fingerprint = os.getenv('system_fingerprint')
+system_fingerprint = os.getenv('SYSTEM_FINGERPRINT')


100-101: Remove unused variables fast_gen and draft_model.

The variables fast_gen and draft_model are assigned but never used, which may lead to confusion. Removing unused variables helps keep the code clean and maintainable.

Apply this diff:

-    fast_gen = model_dict.use_fast_generation
-    draft_model = model_dict.draft_model_name


89-150: Consider adding error handling in the streaming loop.

In the completions_stream function, adding error handling within the streaming loop can help manage potential exceptions that may occur during streaming, such as network errors or unexpected interruptions.

Apply this diff to include error handling:

     thread.start()
 
-    for response in stream:
+    try:
+        for response in stream:
             filter_results = ContentFilterResults(
                 hate=FilterCategory(filtered=False, severity="safe"),
                 self_harm=FilterCategory(filtered=False, severity="safe"),
                 sexual=FilterCategory(filtered=False, severity="safe"),
                 violence=FilterCategory(filtered=False, severity="safe")
             )
             prompt_filter_results = PromptFilterResults(
                 prompt_index=0,
                 content_filter_results=filter_results
             )
             choice = Choice(
                 content_filter_results=filter_results,
                 finish_reason=None,
                 index=0,
                 logprobs=None,
                 delta=Message(role="assistant", content=response)
             )
 
             chat_response = ChatResponse(
                 choices=[choice],
                 created=int(time.time()),
                 id=await generate_cmpl_id(),
                 model=model_dict.model_name,
                 object="chat.completion.chunk",
                 prompt_filter_results=[prompt_filter_results],
                 system_fingerprint=system_fingerprint,
                 usage=Usage(completion_tokens=1, prompt_tokens=1, total_tokens=1)
             )
 
             chat_response_json = json.dumps(chat_response, default=lambda o: o.__dict__)
             yield f"data: {chat_response_json}\n\n"
+    except Exception as e:
+        # Handle exceptions appropriately
+        error_response = {"error": str(e)}
+        yield f"data: {json.dumps(error_response)}\n\n"
+        thread.join()
+        return

(Note: the loop body shown as unchanged context above must also be indented one level so that it sits inside the new try block.)


89-150: Ensure proper cleanup of the thread after streaming completes.

In the completions_stream function, after the streaming is done, it's good practice to ensure that the started thread is properly terminated or joined to prevent any resource leaks.

Apply this diff:

     yield "data: [DONE]\n\n"
+    thread.join()

chat_class.py (2)

21-26: Consider a more type-safe initialization approach.

While the current implementation provides flexibility, it bypasses dataclass's built-in type checking. Consider using dataclasses.replace() or explicit field initialization to maintain type safety.

Here's a safer alternative. Note that dataclasses.replace() cannot be used inside __init__, because rebinding self there has no effect, so the sketch below uses explicit field initialization instead (it requires import dataclasses at the top of the module):

    def __init__(self, **kwargs):
        valid_keys = {field.name for field in self.__dataclass_fields__.values()}
        filtered_kwargs = {k: v for k, v in kwargs.items() if k in valid_keys}
-        for key, value in filtered_kwargs.items():
-            setattr(self, key, value)
+        # Initialize every declared field, falling back to its default (or
+        # default factory) so no attribute is ever left undefined.
+        for field in self.__dataclass_fields__.values():
+            if field.name in filtered_kwargs:
+                setattr(self, field.name, filtered_kwargs[field.name])
+            elif field.default is not dataclasses.MISSING:
+                setattr(self, field.name, field.default)
+            elif field.default_factory is not dataclasses.MISSING:
+                setattr(self, field.name, field.default_factory())
+            else:
+                setattr(self, field.name, None)
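For illustration, a hypothetical call under this approach; the id and messages field names (and the stray temperature key) are illustrative, not taken from this PR:

payload = {"id": "conv-1", "messages": [], "temperature": 0.7}
conv = Conversation(**payload)  # keys not declared on Conversation are dropped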

45-46: LGTM! Well-structured for streaming support.

The addition of delta and making message optional properly supports both streaming and non-streaming scenarios. This follows common patterns in streaming APIs where updates are delivered incrementally.

Consider documenting, e.g. in a short docstring as sketched after this list, that:

  • In streaming mode, delta contains incremental updates
  • In non-streaming mode, message contains the complete response
  • Both fields should not be populated simultaneously
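A minimal docstring sketch along these lines; it is trimmed to the two fields under discussion, and Choice's remaining attributes are omitted:

from dataclasses import dataclass
from typing import Optional

@dataclass
class Message:
    role: Optional[str] = None
    content: Optional[str] = None

@dataclass
class Choice:
    """One completion choice.

    Exactly one payload field is expected to be set:
      * streaming responses populate ``delta`` with an incremental update;
      * non-streaming responses populate ``message`` with the full reply.
    The two fields should never be populated simultaneously.
    """
    message: Optional[Message] = None  # complete response (non-streaming)
    delta: Optional[Message] = None    # incremental update (streaming chunks)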
main.py (1)

71-72: Document the streaming capability in OpenAPI specification

The streaming functionality represents a significant API change that should be documented.

Add FastAPI documentation decorators:

@app.post(
    "/openai/deployments/{model}/chat/completions",
    dependencies=[Depends(get_api_key)],
    responses={
        200: {
            "description": "Successful response",
            "content": {
                "application/json": {"description": "Regular response"},
                "text/event-stream": {"description": "Streaming response"},
            },
        }
    },
)
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed between 408b39f and fa3ae7e.

📒 Files selected for processing (3)
  • chat_class.py (3 hunks)
  • main.py (2 hunks)
  • service/chat.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.8.0)
service/chat.py

  • 82-82: Redefinition of unused generate_cmpl_id from line 16 (F811)
  • 92-92: Use capitalized environment variable SYSTEM_FINGERPRINT instead of system_fingerprint (SIM112)
  • 100-100: Local variable fast_gen is assigned to but never used (F841)
  • 101-101: Local variable draft_model is assigned to but never used (F841)

🔇 Additional comments (4)
service/chat.py (1)

89-150: Verify that the delta attribute in Choice is correctly used.

Ensure that the Choice class supports the delta attribute and that it aligns with the expected schema. This is important for maintaining consistency in the response format.

Run the following script to check if delta is a valid attribute of Choice:

✅ Verification successful

The delta attribute in Choice is correctly used

Based on the code inspection, the Choice class in chat_class.py explicitly defines delta as an optional Message attribute: delta: Optional[Message] = None. This matches the usage in the review code where delta is being set with a Message instance. The implementation is consistent with the expected schema.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the `delta` attribute in the `Choice` class definition.

# Test: Search for the `Choice` class definition and check for the `delta` attribute.

ast-grep --pattern '
class Choice {
    $$$
    delta: $_
    $$$
}' --python

Length of output: 265


Script:

#!/bin/bash
# Let's try to find the Choice class definition using ripgrep
rg "class Choice" -A 10

Length of output: 443


Script:

#!/bin/bash
# Let's also check for any imports or usage of Choice class
rg "Choice" -A 2

Length of output: 995


Script:

#!/bin/bash
# Let's try ast-grep without the --python flag
ast-grep --pattern 'class Choice {
  $$$
}'

Length of output: 45


chat_class.py (1)

6-7: LGTM! Optional fields align with streaming requirements.

The change to make role and content optional is appropriate for streaming support where messages might be constructed incrementally.

Let's verify that these optional fields are handled properly in the codebase:

✅ Verification successful

Optional Message fields are properly implemented and safely used

Based on the verification results:

  • The Message class is correctly defined with Optional fields
  • The class is used safely in service/chat.py with explicit values for both role and content
  • The Message type is properly handled as optional in parent classes (e.g., delta: Optional[Message] = None)
  • No direct access to role/content without initialization was found, reducing null-reference risks
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check for proper null handling of Message fields
rg -A 3 "Message\(" 
rg "message\.(role|content)"

Length of output: 908


Script:

#!/bin/bash
# Let's check for None/null handling patterns around Message class
rg -B 2 -A 2 "None.*Message|Message.*None"

# Let's also check the Message class implementation and its usage patterns
ast-grep --pattern 'class Message'

Length of output: 294


Script:

#!/bin/bash
# Let's examine the full Message class implementation and its usage context
rg -B 5 -A 10 "class Message"

# Let's also check for any validation or error handling around Message fields
rg -B 2 -A 2 "if.*message\.(role|content)"

Length of output: 618

main.py (2)

9-9: LGTM: Import statement is correctly placed

The StreamingResponse import is appropriately added and necessary for the new streaming functionality.


71-72: Verify streaming implementation compatibility

Please ensure that the streaming implementation is compatible with OpenAI's streaming format for client compatibility.

✅ Verification successful

Let me gather more information about the streaming implementation.


Let me check the complete response format and structure.


Streaming implementation follows OpenAI's format

The implementation correctly follows OpenAI's streaming format (see the SDK-based sketch after this list) with:

  • Server-sent events (SSE) with text/event-stream media type
  • JSON responses prefixed with data:
  • Proper message structure with delta field containing role and content
  • Final [DONE] event
  • Correct object type chat.completion.chunk for streaming
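Because the endpoint path mirrors Azure OpenAI's deployment routing, one plausible end-to-end check is to point the official openai Python SDK (v1+) at this server. This is a sketch under assumptions: the local address, key, api_version, and deployment name are placeholders, and none of these values are defined by the PR itself.

# Illustrative compatibility check with the openai SDK; all values are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="http://localhost:8000",  # assumed local server address
    api_key="placeholder",
    api_version="2024-02-01",
)

stream = client.chat.completions.create(
    model="my-deployment",  # maps to the {model} path parameter
    messages=[{"role": "user", "content": "Say hi"}],
    stream=True,
)
for chunk in stream:
    # each event is an object="chat.completion.chunk" carrying a delta payload
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)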
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check if the streaming implementation follows OpenAI's format
rg -A 10 "completions_stream" service/chat.py

Length of output: 469


Script:

#!/bin/bash
# Search for streaming response format in the chat service
rg -A 20 "completions_stream" service/chat.py

# Look for yield statements in the streaming implementation
ast-grep --pattern 'yield $_'

# Check for any OpenAI compatibility related code
rg -i "openai|completion" service/chat.py

Length of output: 1747


Script:

#!/bin/bash
# Check the ChatResponse and related models
rg -A 10 "class ChatResponse" service/chat.py

# Look for the streaming response structure
rg -B 5 -A 10 "chat_response_json" service/chat.py

# Check for any delta implementation in the response
rg -B 5 -A 5 "delta" service/chat.py

Length of output: 1116

Comment on lines +82 to +87
async def generate_cmpl_id():
    prefix = 'cmpl-'
    id_length = 22
    id_chars = string.ascii_letters + string.digits
    random_id = ''.join(random.choice(id_chars) for _ in range(id_length))
    return prefix + random_id

⚠️ Potential issue

Duplicate definition of generate_cmpl_id function.

The function generate_cmpl_id is redefined at lines 82-87, which duplicates the earlier definition at line 16. This can cause confusion and potential errors. Consider removing the duplicate function or consolidating any differences into a single definition.

Apply this diff to remove the duplicate definition:

-async def generate_cmpl_id():
-    prefix = 'cmpl-'
-    id_length = 22
-    id_chars = string.ascii_letters + string.digits
-    random_id = ''.join(random.choice(id_chars) for _ in range(id_length))
-    return prefix + random_id

main.py (outdated)
Comment on lines +71 to +72

    if body.get("stream"):
        return StreamingResponse(ChatService.completions_stream(body, predacons_models[model], api_version), media_type="text/event-stream")

🛠️ Refactor suggestion

Consider adding request body validation

The request body is passed directly to the service without validation. Consider using a Pydantic model to validate the request structure.

Example implementation:

from fastapi import Query
from pydantic import BaseModel, Field

class ChatCompletionsRequest(BaseModel):
    stream: bool = Field(default=False, description="Whether to stream the response")
    # Add other required fields based on your API contract

async def chat_completions(
    request: ChatCompletionsRequest,
    model: str,
    api_version: str | None = Query(default=None, alias="api-version"),
):
    ...  # Your implementation here

⚠️ Potential issue

Add validation and error handling for the stream parameter

The stream parameter is accessed directly without type checking or validation. This could lead to runtime errors if the parameter is present but not a boolean.

Consider adding validation like this:

-    if body.get("stream"):
+    stream = body.get("stream", False)
+    if not isinstance(stream, bool):
+        raise HTTPException(
+            status_code=status.HTTP_400_BAD_REQUEST,
+            detail="The 'stream' parameter must be a boolean"
+        )
+    if stream:

Committable suggestion skipped: line range outside the PR's diff.


⚠️ Potential issue

Add error handling for streaming response

The streaming response should be wrapped in a try-catch block to handle potential errors during streaming.

Consider adding error handling:

     if body.get("stream"):
-        return StreamingResponse(ChatService.completions_stream(body, predacons_models[model], api_version), media_type="text/event-stream")
+        try:
+            stream_generator = ChatService.completions_stream(body, predacons_models[model], api_version)
+            return StreamingResponse(
+                stream_generator,
+                media_type="text/event-stream",
+                headers={"Cache-Control": "no-cache", "Connection": "keep-alive"}
+            )
+        except Exception as e:
+            raise HTTPException(
+                status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
+                detail=f"Streaming error: {str(e)}"
+            )
