When submitting a job to a partition such as Harvard's gpu_requeue, the job sometimes gets killed and requeued. It would be desirable for careless to continue where it left off in this case rather than starting over! E.g., a flag could be added to the careless call that means, "before starting, inspect the contents of the output directory for a partial run, and if you find one, continue from there."
I have no idea how easy or hard this would be to implement (or if it exists already?). If it does exist, amazing, and if not, I figured I would mention it. I was kind of assuming that this would be the default behavior, and I was a little bummed when my job was killed and started over!
i have often thought that i should implement model checkpointing. for a variety of reasons, this has historically been challenging to do. however, as of version 0.2.3, it is possible to save and load structure factors and scale parameters. it would not be overly painful to implement a flag that writes these parameters to disk every so often (something like every 1,000 training steps seems an okay default). one could then resume a job with the --scale-file and --structure-factor-file flags. i will note that some optimizer state will be lost. i have no idea if that is a material concern.
definitely a good suggestion. i need to think about it more.
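Just to make the proposed workflow concrete, a resumed submission might look something like the sketch below. This is only an illustration: the metadata keys, mtz file, output prefixes, and saved-parameter file names are placeholders, and the exact option spellings should be checked against the careless command-line help for the installed version.

```bash
# First attempt, killed and requeued by the scheduler partway through training.
careless mono "dHKL,image_id" unmerged.mtz merge/run1

# Hypothetical resume: warm-start a new run from the scale and structure
# factor parameters saved by the partial run (file names are placeholders).
careless mono \
    --scale-file merge/run1_scales \
    --structure-factor-file merge/run1_structure_factors \
    "dHKL,image_id" unmerged.mtz merge/run1_resumed
```

As noted above, optimizer state would not carry over, so the resumed run would start the optimizer from scratch.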
This would require a lot of work to do in a satisfying way, but the process is pretty much what I've been going through over on the abismal serialization branch. Essentially every layer and model needs to implement the following 3 methods:
it can be tricky to get this stuff right, but a few pointers:

- for very simple layers you can just set `self.built=True` in the constructor like i did here.
- for `to_config` you can use the keras serializer to handle objects that you have implemented with the above methods. see this serialization example and corresponding deserialization example. a rough sketch of the overall pattern is below.
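As a hedged illustration of that pattern, assuming the stock Keras serialization hooks (`get_config`, `from_config`, and `build` for shape-dependent weights; the `to_config` mentioned above may be a project-specific name), a minimal serializable layer could look like the following. `ToyScale` and its `prior_scale` argument are made up for illustration and are not part of careless or abismal.

```python
import tensorflow as tf

# ToyScale is a made-up illustration, not a layer from careless or abismal.
@tf.keras.utils.register_keras_serializable(package="example")
class ToyScale(tf.keras.layers.Layer):
    def __init__(self, prior_scale=1.0, **kwargs):
        super().__init__(**kwargs)
        self.prior_scale = prior_scale
        # No weights depend on the input shape, so mark the layer as built in
        # the constructor (the shortcut mentioned above for very simple layers).
        # A layer with shape-dependent weights would implement build(input_shape).
        self.built = True

    def get_config(self):
        # Record constructor arguments. Nested Keras objects can be passed
        # through tf.keras.utils.serialize_keras_object(...) here.
        config = super().get_config()
        config.update({"prior_scale": self.prior_scale})
        return config

    @classmethod
    def from_config(cls, config):
        # Rebuild the layer from its config. Nested objects would be restored
        # with tf.keras.utils.deserialize_keras_object(...).
        return cls(**config)

    def call(self, inputs):
        return self.prior_scale * inputs


# Round trip: config out, layer back in.
layer = ToyScale(prior_scale=2.0)
restored = ToyScale.from_config(layer.get_config())
```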