Thanks for this effort. I think the roadmap looks pretty good.
`LightningModule`: Ideally, all we need to do is let `create_model` return a `LightningModule` (first PR). As far as I can tell, this shouldn't break anything. We can then start adding `training_step`, `validation_step`, `test_step`, and `configure_optimizers` to this newly created `LightningModule` (second PR).
As far as I can tell, `load_ckpt` and `save_ckpt` could be dropped entirely in favor of PL's built-in checkpointing. Correct me if I am wrong, but I think this has to happen after the `LightningModule` and its training pipeline have been properly integrated.
🚀 The feature, motivation and pitch
PyTorch Lightning Integration
The GraphGym training experience can be improved in terms of scalability, mixed-precision support, logging, and checkpointing by integrating PyTorch Lightning.
- `LightningModule`
- Replace `load_ckpt` and `save_ckpt` with the PL checkpoint save and load methods
- `Trainer` and the `LightningModule` implementation
- `LightningDataset`, `LightningNodeData` and `LightningLinkData` modules