Running schedules on specific cluster #610
Comments
There is no cluster name parameter for async_task. You may need to start multiple APIs with different queue name settings so that they enqueue to separate clusters, or use Celery.
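For context, a minimal sketch of how tasks are normally enqueued with django-q; note there is no cluster argument in the call, so where the task lands is decided by the Q_CLUSTER settings of the process enqueueing it (the task path below is hypothetical):

```python
from django_q.tasks import async_task

# Enqueue a task; it goes to whatever queue the current Q_CLUSTER configuration
# points at -- there is no per-call cluster/queue selector in this signature.
async_task("myapp.tasks.send_report", 42)
```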
Hmm, I don't see a queue name setting for the cluster. Did you mean having multiple brokers?
@Ikszad I meant the "name" key in the Q_CLUSTER configuration dictionary. You can have two different settings (using os.getenv("YOURPROJECT_CLUSTER_NAME"), or splitting into separate files selected via DJANGO_SETTINGS_MODULE) and then run the project with different environment variables so that Q_CLUSTER["name"] differs per instance. From my experience, the project just wasn't meant for multiple queues/clusters, which is why I suggested Celery as a better fit for your need.
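A minimal sketch of that idea, assuming a hypothetical MYPROJECT_CLUSTER_NAME environment variable; each instance would export a different value before starting its qcluster process:

```python
# settings.py (sketch): take the cluster/queue name from the environment so the same
# codebase can run as either the "default" cluster or a dedicated "heavy" cluster.
import os

Q_CLUSTER = {
    "name": os.getenv("MYPROJECT_CLUSTER_NAME", "default"),  # hypothetical env var
    "workers": 4,
    "orm": "default",  # Django ORM broker, as in the setup described in this issue
}
```

Each container would then run `python manage.py qcluster` with its own value of that variable.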
@nurettin Yeah, you are right. It was not expected at the beginning, and indeed I am wondering about moving to Celery. Regarding your idea, I thought about the same solution, but the docs state here that I shouldn't use different cluster names.
I guess I would need to set up a separate queue for that.
@Ikszad Sure, but that comment is about having them work on the same cluster/queue. We aren't trying to have them work on the same queue; the point is to separate the queues so your longer-running tasks don't block the rest. When I traced the behavior, it looks like the cluster name is used for naming the queue. I also monitored its behavior on Redis and RabbitMQ, and it works as I've described.
I have a new issue in 1.3.6 that seems related. My code:
if Schedule.objects.filter(args=device.id):
If I put it into a variable, I get the error:
OperationalError: no such column: django_q_schedule.cluster
It's unclear what I am doing wrong.
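A hedged guess at the cause, not confirmed in this thread: the cluster column on django_q_schedule comes from a django-q migration, so this error usually means pending migrations haven't been applied after upgrading to 1.3.6. One way to apply them from a Django shell (equivalent to running `python manage.py migrate django_q`):

```python
from django.core.management import call_command

# Apply any pending django_q migrations; this should add the missing
# django_q_schedule.cluster column if the upgrade introduced it.
call_command("migrate", "django_q")
```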
Did you find a better solution to your problem? Or did you use Celery? |
Hello!
I have a setup with four AWS EC2 instances, each running two containers: Django and qcluster.
Each uses the same broker (Django ORM), secret key, and cluster name.
Now, I have a number of long-running tasks that I would like to schedule at once without blocking all the clusters (e.g. running on two out of four clusters), so that other short async tasks can still be performed.
My idea was to pass a cluster argument when creating a schedule, but the docs say here that the cluster name needs to be the same in a multiple-cluster setup. Is there a way I can achieve my goal?
Thanks!
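For what it's worth, the OperationalError quoted in an earlier comment (no such column: django_q_schedule.cluster) suggests that recent django-q versions do carry a cluster field on the Schedule model. A minimal sketch of the idea from the question, assuming such a version and using hypothetical task and cluster names; the docs' warning about mismatched cluster names still applies, so treat this as experimental rather than a confirmed approach:

```python
# Sketch: pin a long-running schedule to a specific cluster name, assuming the
# installed django-q version exposes a `cluster` field on the Schedule model.
from django_q.models import Schedule

Schedule.objects.create(
    func="myapp.tasks.long_running_job",  # hypothetical task path
    schedule_type=Schedule.ONCE,
    cluster="heavy",  # hypothetical cluster name; only clusters named "heavy" would pick it up
)
```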