Small fix painn #298
Closed
Conversation
update load_existing_model
add unscale_features_by_num_nodes_config
Add scaled by num_nodes option for feature prediction in variable graph size
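The commits above add an option to scale predicted features by the number of nodes, so graphs of varying size contribute comparable targets. A minimal sketch of that idea (function names and signatures are illustrative assumptions, not HydraGNN's actual API):

```python
def scale_by_num_nodes(feature, num_nodes):
    # Normalize a graph-level target by graph size so graphs of
    # different sizes contribute comparable training targets.
    return feature / num_nodes

def unscale_by_num_nodes(scaled_feature, num_nodes):
    # Invert the scaling to recover the original per-graph value
    # when reporting predictions.
    return scaled_feature * num_nodes
```

The round trip is exact, which is what the "fixup unscale assert" commit below checks for.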
- fix error name bugs
- rm tracking error of node sum
- reserved length for error lists
- Treat warnings as errors; ignore deprecation warnings from tpl
- fixup unscale assert
- Split device and device_name functions
- Move data to device in loading rather than training (does NOT affect performance)
- frequency relaxed
Co-authored-by: Massimiliano Lupo Pasini <7ml@ornl.gov>
- mapping of degree tensor to GPU
- formatting fixed
Co-authored-by: Massimiliano Lupo Pasini <7ml@ornl.gov>
data.batch is remapped to the same device as data.x
- added dashed red diagonal and use of empty dots in scatterplot
- formatting fixed
Co-authored-by: Massimiliano Lupo Pasini <7ml@ornl.gov>
- save train/val/test to pkls when total provided and reorganize data loading
- remove raw in config
- fix an indexing bug in denormalization
- add min/max loading from pkl
- wip: init profile
- wip: add profile routine
- wip: add profile routine
- format fixed
- Create profiler class
- minor fix
- Minor changes for merging
- Move profile block to an upper level
Co-authored-by: Massimiliano Lupo Pasini <7ml@ornl.gov>
Co-authored-by: Massimiliano Lupo Pasini <massimiliano.lupo.pasini@gmail.com>
Co-authored-by: User <user@localadmins-Air.homenet.telecomitalia.it>
optimize get_head_indices with tensor operations
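The commit above replaces Python-level iteration with tensor operations. A hypothetical sketch of that pattern in NumPy (the loop/vectorized pair below is illustrative, not the actual `get_head_indices` implementation): computing per-graph starting offsets in a batched node array with a shifted cumulative sum instead of a loop.

```python
import numpy as np

def head_offsets_loop(nodes_per_graph):
    # Loop version: index of the first node of each graph in a batch.
    offsets, total = [], 0
    for n in nodes_per_graph:
        offsets.append(total)
        total += n
    return offsets

def head_offsets_vectorized(nodes_per_graph):
    # Vectorized version: a cumulative sum shifted by one position
    # replaces the per-graph Python loop with one tensor operation.
    c = np.cumsum(nodes_per_graph)
    return np.concatenate(([0], c[:-1])).astype(int).tolist()
```

Both return the same offsets; the vectorized form avoids per-element Python overhead, which is the point of the optimization.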
Remove num_nodes_list
Adding HYDRAGNN_MASTER_ADDR env to set custom DDP port.
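The commit above reads a custom DDP endpoint from the environment. A hedged sketch of how such a lookup might work (`HYDRAGNN_MASTER_ADDR` is the variable named in the commit; the helper name and default value are assumptions):

```python
import os

def get_master_addr(default_addr="127.0.0.1"):
    # Allow users to override the DDP master address via an
    # environment variable; fall back to a local default otherwise.
    return os.environ.get("HYDRAGNN_MASTER_ADDR", default_addr)
```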
Co-authored-by: Zhifan Ye <zhifanye@mail.ustc.edu.cn>
…NL#264)
- init commit for enabling deepspeed
- black formatting
- optional deepspeed availability
- if-else for model initialization
- init commit
- enable enable_deepspeed_ci ci
- deepspeed required
- mark pytest stages
- try tests.xxx import
- fixed: deepspeed_test
- try test_examples only
- try test_deepspeed only
- clean up printing and ready to merge
- disable deepspeed stage 3, maybe incompatible with CI machine
- test network occupation
- mark mpi for deepspeed
- CI ready to deploy
- fix dependency
- minor format fix
- double check merge
- deepspeed out of optional
- flush CI cache without deepspeed
- remove auto CI for enable_deepspeed_ci branch
- fix hash error & more elegant deepspeed-zero unit test
Co-authored-by: Zhifan Ye <zhifanye@mail.ustc.edu.cn>
…ing (ORNL#268)
- init commit, tested work on frontier
- update black formatting
- amend template
Co-authored-by: Zhifan Ye <zye327@login07.frontier.olcf.ornl.gov>
- add energy linear regression
- remove pdb
- remove var_conf
- fix for energy per atom
- remove debug
- fix energy per atom
- fix for new energy calc
- add npz output
- save energy mean and linear regression term
- black
- fix adios write
- remove emean
- Update deephyper runs: capture all errors with a try-except block.
- Update gfm_deephyper_multi_perlmutter.py
- Update distributed.py: use the "SLURM_STEP_NODELIST" env variable, which is needed for HPO.
Adding PNAPlus Stack
- force tests, which required model arg, and some typo fixing in Lennard Jones
- Add PNAPlus since it uses positions as well
- formatting
- utils renamed and black formatting applied
- bug fixes for tests
- black formatting fixed
- examples corrected
- test_model_loadpred.py fixed
- black formatting fixed
- test_loss_and_activation_functions.py fixed
- black formatting fixed
- reverting inadvertent automated refactoring of dataset folder into datasets
- reverting inadvertent automated refactoring of dataset folder into datasets
- reverting inadvertent automated refactoring of dataset folder into datasets
- reverting inadvertent automated refactoring of dataset folder into datasets
- reverting inadvertent automated refactoring of hydragnn into hhydragnn package
- reverting inadvertent automated refactoring of dataset folder into datasets
- reverting inadvertent automated refactoring of dataset folder into datasets
- reverting inadvertent automated refactoring of dataset folder into datasets
- git formatting fixed
- Adagrad converted to Adamax
- Additional changes to fix bugs and suggestions from erdem
- imports fixed for LennardJones example
- formatting fixed
- imports in LJ_data.py fixed
- import of graph utils fixed in LJ_data.py
- import of setup.ddp() fixed in LennardJones
- setup_log call fixed
- get_summary_writer call fixed
- additional calls fixed
- black formatting fixed
Small fix here: without it, hanging gradients would have caused errors when there is only one convolutional layer.
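A sketch of the failure mode this fix targets, under the assumption that it resembles the usual DDP "unused parameters" problem: blocks that only operate *between* consecutive layers are created unconditionally, so with a single convolutional layer they never participate in the forward pass and their parameters receive no gradient. Names below are illustrative, not PaiNN's actual module names:

```python
class PaiNNStackSketch:
    """Illustrative stand-in for a message-passing stack."""

    def __init__(self, num_conv_layers):
        # One message/update block per convolutional layer.
        self.conv_blocks = [f"conv_{i}" for i in range(num_conv_layers)]
        # Inter-layer mixing blocks are only needed between two
        # consecutive layers, so only num_conv_layers - 1 of them are
        # built. Creating one unconditionally would leave parameters
        # with no gradient when num_conv_layers == 1, which DDP
        # reports as an error.
        self.mixing_blocks = [
            f"mix_{i}" for i in range(num_conv_layers - 1)
        ]

    def forward_trace(self):
        # Every constructed block participates in the forward pass,
        # so no parameter is left hanging without a gradient.
        trace = []
        for i, conv in enumerate(self.conv_blocks):
            trace.append(conv)
            if i < len(self.mixing_blocks):
                trace.append(self.mixing_blocks[i])
        return trace
```

With one layer the mixing list is empty and the forward trace touches every block that was built, so nothing is left unused.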