
Make full Request object available in chat #1213

Open
dokterbob opened this issue Aug 14, 2024 · 11 comments
Labels
architecture Overall project architecture. backend Pertains to the Python backend. enhancement New feature or request evaluate-with-priority What's needed to address this one?

Comments

@dokterbob
Collaborator

dokterbob commented Aug 14, 2024

Is your feature request related to a problem?
This would address, in one go:
#393
#144
#964
#1002
#1103
#1140

It might also work towards solving:
#1056

  • It would allow devs to put whatever request data they need in user_session.
  • It allows using query parameters, cookies and arbitrary HTTP headers.
  • We might be able to deprecate things like languages and other headers that are currently copied in verbatim.

Describe the solution you'd like
Accept request: [starlette.requests.Request](https://www.starlette.io/requests/) as an optional parameter to on_chat_start().
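For illustration, a minimal sketch of what a handler could do once it has the request. Note that the request parameter on on_chat_start() is the proposal here, not an existing Chainlit API; a Starlette Request wraps an ASGI scope, and the stdlib-only helper below shows the kind of data implementers could copy into user_session from it (the scope values are made up for the example):

```python
from urllib.parse import parse_qs

def session_data_from_scope(scope: dict) -> dict:
    """Extract the request data devs typically want in user_session."""
    # Per the ASGI spec, headers are a list of (name, value) byte pairs.
    headers = {k.decode("latin-1"): v.decode("latin-1")
               for k, v in scope.get("headers", [])}
    # query_string is raw bytes; parse it into a dict of lists.
    query = parse_qs(scope.get("query_string", b"").decode("latin-1"))
    return {
        "http_path": scope.get("path", ""),
        "http_query": query,
        "cookie": headers.get("cookie", ""),
        "headers": headers,
    }

# Example ASGI scope, as a starlette Request would carry it:
scope = {
    "path": "/chat",
    "query_string": b"agent_id=42&lang=en",
    "headers": [(b"cookie", b"session=abc"), (b"accept-language", b"en-GB")],
}
data = session_data_from_scope(scope)
print(data["http_path"])               # /chat
print(data["http_query"]["agent_id"])  # ['42']
```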

Describe alternatives you've considered
See the issues/PRs above. The proposed solution is more generic and requires less maintenance (e.g. no specific support for individual use cases is required).

Open question: are there any reasons (security or otherwise) not to do this?

@dokterbob dokterbob added enhancement New feature or request needs-triage evaluate-with-priority What's needed to address this one? backend Pertains to the Python backend. labels Aug 14, 2024
@dokterbob
Collaborator Author

@willydouhard Would love your ideas on this one, should be really easy to implement.

Are there any security or other implications?

@cta2106

cta2106 commented Aug 14, 2024

Much needed feature imo. It's the only thing stopping me from deploying Chainlit for a number of clients at the enterprise level.

@comediansd

I need this very urgently, to send users links for special training bots.

@nethi

nethi commented Aug 26, 2024

@willydouhard would this PR #1260 help? For HTTP hooks, I am using a ContextVar like this in server.py:

# server.py
from contextvars import ContextVar

from starlette.middleware.base import BaseHTTPMiddleware
from starlette.requests import Request

# Create a context variable to store the current request
request_context_var: ContextVar[Request] = ContextVar("request_context_var")

def get_current_request() -> Request:
    return request_context_var.get()

class RequestContextMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request: Request, call_next):
        # Store the request in the context variable for this task
        request_context_var.set(request)
        # Process the request and return the response
        return await call_next(request)

# Add the request contextvar middleware to the FastAPI application
app.add_middleware(RequestContextMiddleware)
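For anyone unfamiliar with contextvars: a value set inside dispatch() is scoped to the current asyncio task, so concurrent requests don't clobber each other's entries. A stdlib-only sketch of that isolation behaviour (no Starlette involved; the names here are illustrative):

```python
import asyncio
from contextvars import ContextVar

request_var: ContextVar[str] = ContextVar("request_var")

async def handle(request_id: str) -> str:
    # Equivalent to the middleware's dispatch(): set, then do async work.
    request_var.set(request_id)
    await asyncio.sleep(0)  # yield control to the other task
    # Each task still sees its own value after the context switch,
    # because tasks run in a copy of the context they were created in.
    return request_var.get()

async def main():
    return await asyncio.gather(handle("req-1"), handle("req-2"))

print(asyncio.run(main()))  # ['req-1', 'req-2']
```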

@dokterbob
Collaborator Author

> @willydouhard would this PR #1260 help ? For HTTP hooks, I am using ContextVar like this in server.py […]

I like this approach and hadn't thought of it!

@nethi I'd love to hear your thoughts about the pros/cons of using context variables here versus adding an optional request: Request parameter to on_chat_start().

My rationale for the optional parameter (only in on_chat_start()) is that on_chat_start() actually coincides with an HTTP request (the start of the WebSocket), whereas having it available elsewhere might lead to a confusing/counter-intuitive developer experience (e.g. the browser URL might diverge from the request path). In addition, having it as a parameter in on_chat_start() might lead to more readable and testable code (by being functional rather than relying on context).

Love to hear your feedback on this; it's a feature many users ask for, and from where I stand either the functional approach or the context var approach is acceptable.

Curious what @willydouhard's view is on this.

@bfinamore

+1 for passing the Request object in on_chat_start(). It seems more flexible and explicit than the context approach for this specific goal (i.e. outside of the ASGI server goals that #1260 addresses).

That said, reliable path/query access is a pretty important feature that a LOT of people seem to be waiting on, so it would be awesome to get Request/path/query param exposure added ASAP one way or another.

@dokterbob dokterbob added architecture Overall project architecture. and removed needs-triage labels Aug 27, 2024
@nethi

nethi commented Aug 29, 2024

> I'd love to hear your thoughts about the pros/cons of using context variables here versus adding an optional request: Request parameter to on_chat_start(). […]

@dokterbob For my use case (serving multiple no-code agents from a single chainlit agent), I had two different issues to deal with:

a) Getting access to agent_id in HTTP-related callbacks (such as the set_chat_profiles decorator)
b) Getting access to agent_id in WebSocket-related callbacks (such as the on_chat_start decorator)

For (b), I would agree that a request parameter in on_chat_start would be ideal, but that would mean a contract change today.

For (a), an explicit request parameter works too, but it again means a change to the existing contract.

Btw, in my PR, the ContextVar approach is used only for (a). For (b), it exposes two session variables:
http_path = cl.user_session.get("http_path")
http_query = cl.user_session.get("http_query")
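For illustration only: assuming http_query as exposed there is the raw query string (the variable names above come from the PR; the values below are made up), consumers would still parse it themselves, e.g. with the stdlib:

```python
from urllib.parse import parse_qs

# Hypothetical values, as the PR's session variables might return them:
http_path = "/agents/sales-bot"
http_query = "agent_id=42&debug=1"

# parse_qs returns a dict of lists, since query keys can repeat
params = parse_qs(http_query)
agent_id = params["agent_id"][0]
print(agent_id)  # 42
```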

I am not very opinionated about either approach at this point, as I understand the backward-compatibility angle; I can rework the PR based on comments/feedback accordingly.

One question though: the on_chat_start callback runs within the WebSocket context, which is handled by socketio.ASGIApp. Will it have access to the starlette.requests.Request object?

@dokterbob dokterbob changed the title Make Request object available in on_chat_start() Make full Request object available in chat Sep 3, 2024
@dokterbob
Collaborator Author

Having had an initial discussion with Willy, my tentative view (anticipating more reflection) is: if we ensure there's a single hook (e.g. independent of auth) where we have access to the Request of the index view as well as the user session, implementers can do whatever they like in terms of copying data from the request to the session -- without us having to provide explicit support for it.

Alternatively, we might always make the entire (initial) request available on the session, but as the WebSocket (corresponding with on_chat_start()) is a different request from both auth and index, this might confuse developers.

Happy to hear your comments and feedback. I will probably sleep on it a bit and take care to read extensively and (in the process) clean up the code, to make sure we really do this well in the first go.

@nethi

nethi commented Nov 29, 2024

@dokterbob I would like to see some closure on this requirement. Given that socketio & awsgi don't deal with starlette requests, I am wondering if this is even possible today.

How about making the entire WSGI environ dict available in on_chat_start instead (the WSGI environ is roughly equivalent to the request object)?
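To make that alternative concrete, here is a stdlib-only sketch of reading a WSGI-style environ dict, in case on_chat_start were handed one under this proposal. The environ values are illustrative, not Chainlit's actual API; the key names follow the WSGI convention (PATH_INFO, QUERY_STRING, HTTP_* for headers):

```python
from urllib.parse import parse_qs

# A minimal WSGI-style environ, as a handler might receive it:
environ = {
    "PATH_INFO": "/chat",
    "QUERY_STRING": "agent_id=42",
    "HTTP_ACCEPT_LANGUAGE": "en-GB",
    "HTTP_COOKIE": "session=abc",
}

path = environ["PATH_INFO"]
params = parse_qs(environ["QUERY_STRING"])  # {'agent_id': ['42']}
language = environ.get("HTTP_ACCEPT_LANGUAGE", "")

print(path, params["agent_id"][0], language)  # /chat 42 en-GB
```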

@nethi

nethi commented Nov 29, 2024

> How about making entire WSGI environ dict available in on_chat_start instead (as WSGI environ is sort of equivalent to request object)? […]

I figured out how to make the FastAPI request available in the socketio connect() event handler. It seems to work fine (changes shown below).

@sio.on("connect")
async def connect(sid, environ, auth=None):
    ...  ...   ...

    from fastapi import Request

    # Recover the ASGI scope that python-socketio stored for this sid
    scope = sio.get_environ(sid)["asgi.scope"]
    request = Request(scope=scope)

    print(f"Request URL: {request.url}")
    print(f"Headers: {request.headers}")
Passing the request object into the on_chat_start() callback may mean a change in contract? Do we want to do it that way, or store the request object in the session?

@ahmadmayahi

This PR is the only one I need to start shipping my chainlit products.
