"status" command results in error with too many jobs #234
What do you suggest? Kinda hard to solve this...
Can we break the request into batches of (say) 50 jobs per call?

On Mon, Apr 2, 2012 at 6:00 PM, Markus Binsteiner wrote:
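The batching idea suggested here could look roughly like the sketch below (Python for illustration only; Gricli/Grisu are Java-based, and `fetch_batch` is a hypothetical stand-in for whatever service call actually retrieves job statuses):

```python
def chunked(items, size=50):
    """Yield successive batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def fetch_all_statuses(job_names, fetch_batch, batch_size=50):
    """Query job statuses in batches instead of one huge request.

    `fetch_batch` is a hypothetical callable that takes a list of job
    names and returns a dict {job_name: status} for that batch.
    """
    statuses = {}
    for batch in chunked(job_names, batch_size):
        statuses.update(fetch_batch(batch))
    return statuses
```

The client makes several small calls rather than one request that grows with the number of jobs, at the cost of more round-trips.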
Not easily at all. I'd say this should be considered if we decide to (re-)add batch support to gricli/grisu, but on its own I think it would be too big a change for only a limited number of users. Those users could, for example, use a local backend, which would speed things up for them anyway, since a local backend is quicker than a WS-based one...
I remember getting that one quite a few times when running large jobs. I'd vote for a proper fix... when Gricli was managing a large number of jobs, things were falling apart (random errors). But I haven't tried running a large batch recently...
Like I said: we'd need to implement proper, stable batch support. At the moment we use loads of single jobs created by outside scripts to deal with batches of jobs. It's just not possible to cater for that in a viable way. If we had batch support in Grisu we could "hide" them in the list of jobs and only list the "parent" job, and get more details on it if necessary.
Aha, OK, I agree proper batch support would be the real solution. I'm just not sure we should settle for saying "Grisu doesn't support large numbers of jobs" - that just makes our infrastructure flaky...
We did a lot of work to make the backend stable. It should support 10,000 jobs.

On Thu, Apr 19, 2012 at 4:31 PM, vladimir-mencl-eresearch wrote:
Ah, right. I see. Sorry, misunderstood. Totally forgot about this command :-) Yes, I think that should be possible. Will do. |
Hm. Actually, thinking about it, it's not all that easy - it will require some changes to the serviceinterface. What about having a status command in the API? I guess that would be useful, and it could be processed on the backend itself. Might have to play with how to implement it (whether to use cached job statuses and such), but that would be easier...
But that command would still take a lot of time when a user has lots of jobs.

On Thu, Apr 19, 2012 at 4:55 PM, Markus Binsteiner wrote:
Not sure I understand what you mean. You are saying, whenever another call is made (or every 5 minutes), all job statuses should be updated, and when the status call is made, only a current snapshot of all jobs (with partly cached/outdated statuses) is used? |
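The snapshot-with-cache idea described here could be sketched as follows (Python for illustration; the class and the `poll_backend` callable are hypothetical names, not part of Grisu's actual API). Statuses are refreshed at most once per TTL window, and event notifications update individual entries in between:

```python
import time

class StatusCache:
    """Server-side snapshot of job statuses, refreshed at most every
    `ttl` seconds, with event notifications patching single entries.

    `poll_backend` is a hypothetical callable returning a dict
    {job_name: status} for all of the user's jobs.
    """
    def __init__(self, poll_backend, ttl=300, clock=time.monotonic):
        self._poll = poll_backend
        self._ttl = ttl
        self._clock = clock
        self._snapshot = {}
        self._last_refresh = None

    def refresh(self):
        """Full (expensive) poll of the backend."""
        self._snapshot = dict(self._poll())
        self._last_refresh = self._clock()

    def notify_event(self, job_name, status):
        """An event notification updates one entry without a full poll."""
        self._snapshot[job_name] = status

    def statuses(self):
        """The status command reads the (possibly slightly stale)
        snapshot instead of querying every job individually."""
        now = self._clock()
        if self._last_refresh is None or now - self._last_refresh > self._ttl:
            self.refresh()
        return dict(self._snapshot)
```

The trade-off is exactly the one raised in the thread: the command becomes fast regardless of job count, but some statuses may be slightly outdated between refreshes.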
Yeah, maybe not every 5 minutes, but on event notifications (we do have ...

On Fri, Apr 20, 2012 at 8:23 AM, Markus Binsteiner wrote:
On 20/04/12 08:36, yhal003 wrote:
Our old friend "413 Entity Too Large" happens because the method runs for too long.
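One defensive workaround for this kind of oversized-request failure, until proper batch support exists, would be to halve the batch and retry whenever a call fails. This is a sketch under assumptions: `EntityTooLarge` and `fetch_batch` are hypothetical stand-ins, not Grisu API:

```python
class EntityTooLarge(Exception):
    """Stand-in for the HTTP 413 style error described above."""

def fetch_with_split(job_names, fetch_batch):
    """Fetch statuses, recursively halving the batch whenever the
    call fails with a 413-style error, down to single-job requests.

    `fetch_batch` is a hypothetical callable mapping a list of job
    names to a dict {job_name: status}.
    """
    try:
        return fetch_batch(job_names)
    except EntityTooLarge:
        if len(job_names) <= 1:
            raise  # even a single job fails; nothing left to split
        mid = len(job_names) // 2
        result = fetch_with_split(job_names[:mid], fetch_batch)
        result.update(fetch_with_split(job_names[mid:], fetch_batch))
        return result
```

This adapts automatically to whatever batch size the server happens to tolerate, instead of hard-coding a limit like 50.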