I have been training my classifier and then batch processing several images in the bucket. The issue is that I often miscalculate how much time I should allocate to batch processing, and wrongly assume I have enough time left in my session. One possible solution would be to submit batch processing jobs as separate HPC jobs. The benefit is that they would not need to be interactive, so they could be submitted with very large time allocations, since they will be killed anyway once they finish processing. Would this be viable?
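For illustration, a minimal sketch of what such a non-interactive submission could look like on a Slurm-based cluster. The script name, partition, and the `batch_process.py` entry point are hypothetical placeholders, not part of any existing tooling:

```shell
#!/bin/bash
# Hypothetical sketch: submit batch processing as its own non-interactive job.
# Partition name, resources, and the processing command are assumptions.
#SBATCH --job-name=batch-classify
#SBATCH --time=48:00:00        # generous limit; job ends when processing finishes
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G

# Run the (hypothetical) batch processing step over images in the bucket.
python batch_process.py --input-bucket my-bucket --model classifier.pkl
```

Submitted with `sbatch script.sh`, this would run independently of the interactive session, so a miscalculated session length would no longer cut the processing short.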
That makes a whole lot of sense, and I think it's doable. I'd have to think about how it would fit the architecture, and it might take some time to get it ready. I'll keep this in mind but give it low priority unless you find it to be essential to your workflow.
In the meantime, I could also optimize how the jobs use the compute workers. Right now I think they are trying so hard not to get in the way of the interactive tasks (the live training) that they might end up running for far longer than they should.