Regression: RedisStorage in v0.6.0 #466
Hmm, we might not need to use async_channel. We could just compare the worker's task count with the buffer size; that would be a quick fix too. If you can open a quick PR, that would be nice.
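A minimal sketch of that quick fix, assuming a hypothetical tasks_in_flight counter on the worker and the storage's configured buffer size (the names are illustrative, not the actual apalis API):

    use std::sync::atomic::{AtomicUsize, Ordering};

    // Hypothetical gate: only poll Redis when the worker has spare capacity.
    // tasks_in_flight and buffer_size are illustrative names, not apalis API.
    fn can_fetch(tasks_in_flight: &AtomicUsize, buffer_size: usize) -> bool {
        tasks_in_flight.load(Ordering::Relaxed) < buffer_size
    }

The fetch loop would consult can_fetch before each poll of the Redis stream and skip the round trip whenever the buffer is already full.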
I'll try that ;)
TBH I don't know how to connect the two. I can set a condition like this:
OK, thanks for your help!!!
The thing I don't clearly understand is the definition of buffer_size. EDIT: IMO, buffer_size should not be specific to Redis.
Meanwhile, could you check something for me?
I should have said it's backend-specific, since some backends like message queues already handle this themselves.
Same behavior with tower::CallAll. All tasks are fetched from Redis right away.
OK, I already have something in mind. Hopefully it's a single change :). Give me a few hrs.
Hey, I've pushed a branch that adds a small layer to the worker which checks whether the worker is ready.
Hello, still not working, but I think you forgot to push some changes? On this branch I only see the layer, but not how the AtomicBool is updated. Edit: my bad, I see it!!
I need you to do something like:
in the Redis storage. Are you sure it's not working?
It's getting better, with:

    is_ready: Arc::new(AtomicBool::new(true)),

and

    if worker.is_ready() {
        fetch_next().await
    }

Fetch occurs only when the worker is not busy.
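For reference, a minimal self-contained sketch of the readiness flag being wired up here (the field and method names follow this thread; the real apalis layer may differ):

    use std::sync::Arc;
    use std::sync::atomic::{AtomicBool, Ordering};

    // Sketch of the readiness gate: the layer flips the flag to false while
    // the worker is busy and back to true once it has capacity again, and
    // the storage only calls fetch_next() while the flag is true.
    struct Worker {
        is_ready: Arc<AtomicBool>,
    }

    impl Worker {
        fn new() -> Self {
            Self { is_ready: Arc::new(AtomicBool::new(true)) }
        }
        fn is_ready(&self) -> bool {
            self.is_ready.load(Ordering::Acquire)
        }
        fn set_ready(&self, ready: bool) {
            self.is_ready.store(ready, Ordering::Release);
        }
    }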
Good! I might need to use …
Another weird behaviour, but maybe I'll open another dedicated issue: when I send a SIGTERM/SIGINT while using this code:

    ...
    monitor.run_with_signal(shutdown_signal()).await?;

    // sigterm() and sigint() are assumed to be thin wrappers around
    // tokio::signal::unix::signal(SignalKind::terminate() / SignalKind::interrupt())
    pub(crate) async fn shutdown_signal() -> Result<(), io::Error> {
        let mut sigterm = sigterm();
        let mut sigint = sigint();
        select! {
            biased;
            _ = sigterm.recv() => info!("SIGTERM signal. Exit now !!"),
            _ = sigint.recv() => info!("SIGINT signal. Exit now !!"),
        }
        Ok(())
    }

the worker then finishes handling all tasks before exiting, including the tasks that are still in the Redis active queue. (Following my previous comment, all 3 tasks are done.) Note: this code works with apalis v0.5.5.
Aah, it's pretty related. I guess we should stop calling next on the stream when shutdown is called. I will provide the fix tomorrow.
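A rough sketch of that idea, assuming the polling loop shares a shutdown flag with the signal handler (Task and handle are placeholders, not apalis types):

    use std::sync::Arc;
    use std::sync::atomic::{AtomicBool, Ordering};
    use futures::StreamExt;

    struct Task;
    async fn handle(_task: Task) { /* process one task */ }

    // Once the shutdown flag is set, stop pulling new tasks from the stream
    // and let the in-flight ones drain, instead of also consuming whatever
    // is left in the Redis active queue.
    async fn poll_tasks<S>(mut stream: S, shutdown: Arc<AtomicBool>)
    where
        S: futures::Stream<Item = Task> + Unpin,
    {
        while !shutdown.load(Ordering::Acquire) {
            match stream.next().await {
                Some(task) => handle(task).await,
                None => break,
            }
        }
    }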
Hey, I think this has been resolved with the current push.
Also, I have created tower-rs/tower#801, which also relates to this.
Hello,
Lol, look at what I did:
Start sets it to false 😢
haha thanks, I'll test against the fix ;)
Everything works as expected!!! My test:
Nothing to say except good job!!!! Thanks for your help!!
Hello @geofmureithi, sorry to bother you, I just wanted to know when you're going to release a patch version? Thank you.
@AzHicham a new version has been released
Hello,
I started testing the latest release, 0.6.0, with Redis and found a regression.
I noticed that all tasks present in the Redis queue NAMESPACE:active are fetched right away by a worker, even though I'm using the following config. I think the regression is coming from this line:
apalis/packages/apalis-redis/src/storage.rs, line 482 (commit 094938d)
IMO, we should have a safety check here so that we don't fetch from Redis when the buffer is full. WDYT?
EDIT: To do that, I think we need a Sender/Receiver that provides methods like is_full, or at least len and size. A good candidate could be async_channel?
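For illustration, async_channel's bounded channel does expose those introspection methods (a standalone example of the crate's API, not the apalis integration):

    // async_channel's bounded Sender exposes is_full(), len() and capacity(),
    // which a fetcher could consult before issuing another Redis round trip.
    fn main() {
        futures::executor::block_on(async {
            let (tx, rx) = async_channel::bounded::<u32>(2);
            tx.send(1).await.unwrap();
            tx.send(2).await.unwrap();
            assert!(tx.is_full());              // buffer at capacity: skip fetching
            assert_eq!(tx.len(), 2);            // number of queued items
            assert_eq!(tx.capacity(), Some(2)); // configured bound
            rx.recv().await.unwrap();           // consumer frees a slot
            assert!(!tx.is_full());             // fetching can resume
        });
    }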