Just as a session is created in the pipeline's `get()` method, it should be "closed" as well.
This is especially beneficial when there are external or other objects that should only live as long as the pipeline itself.
The main issue is that the session is continuously serialized between the server's Python instance and the redis cache, so local (or other) objects tied to the pipeline's lifetime cannot be trusted to attach to the lifetime of a Python session object.
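A minimal sketch of what an explicit close could look like, assuming a context-manager shape for `get()`. The `Session`, `resources`, and `_run` names below are illustrative, not taken from the actual codebase:

```python
import contextlib


class Session:
    """Hypothetical session object. In the real service the session is
    continuously serialized between the server's Python instance and the
    redis cache, so Python-side finalizers (__del__, weakrefs) on any
    local copy say nothing about the session's logical lifetime."""

    def __init__(self):
        # External objects whose lifetime should match the pipeline's.
        self.resources = []

    def close(self):
        # Explicitly release everything attached during the run.
        for resource in self.resources:
            resource.close()
        self.resources.clear()


class Pipeline:
    def get(self):
        # Create and close the session within the same call: the session
        # lives exactly as long as the pipeline run itself.
        with contextlib.closing(Session()) as session:
            return self._run(session)

    def _run(self, session):
        ...  # assumed pipeline body; would attach resources to the session
```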
I don't think this makes sense (anymore - or even then). The session is created and "closed" in the time it takes to run the pipeline's `get()` method.
The only option I can see is a callback to the service saying "the pipeline has finished, you can clear the redis cache of this session object now". If the same pipeline's `get()` method is then called again, the whole thing starts over - OR one could cache the `get()` method's return value in this repository for some time? An `lru_cache` or something? Or maybe not.
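A rough sketch of those two ideas combined, assuming a `clear_session` callback on the service and a plain `functools.lru_cache` on `get()`. All names here are made up for illustration, not the actual API:

```python
from functools import lru_cache


class Pipeline:
    """Illustrative only: the service callback, its name, and the cache
    size are assumptions, not the real interface."""

    def __init__(self, service, session_id):
        self._service = service
        self._session_id = session_id

    @lru_cache(maxsize=16)  # repeat calls reuse the cached return value;
    # note that lru_cache on a method also keeps `self` alive in the cache
    def get(self):
        result = self._run()
        # Callback: "the pipeline has finished, you can clear the redis
        # cache of this session object now".
        self._service.clear_session(self._session_id)
        return result

    def _run(self):
        ...  # assumed pipeline body
```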