update shortest_path to TF2 #119
base: master
Conversation
Thanks for your pull request. It looks like this may be your first contribution to a Google open source project (if not, look below for help). Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please visit https://cla.developers.google.com/ to sign. Once you've signed (or fixed any issues), please reply here with "@googlebot I signed it!" and we'll verify it. What to do if you already signed the CLA: see the instructions for individual signers and corporate signers. ℹ️ Googlers: Go here for more info.
@googlebot I signed it!
CLAs look good, thanks! ℹ️ Googlers: Go here for more info.
Thanks for the PR and the initiative @maguileracanon! To make the review easier, do you think it would be possible to split it into two commits: a first commit that just copies the original notebook to its new TF2 file name, and a second commit that contains only the TF2 changes?
Thanks in advance :)
Thanks, do you think it would be possible to rewrite the commit history to have just those two CLs (similarly to how it was done here)? Currently, the diff of the "update to tf2" commit does not pick up on the specific changes that were made (ideally the changes would highlight the TF2 differences specifically). If possible it would look more similar to this diff. I know GitHub support for diffs in Colab notebooks is not great, and maybe the changes are large enough that the diff cannot figure out the shared parts. But otherwise, it is hard to see what actually changed.
Hi! Yes, sorry, the revert commit was because, as you mention, I noticed that the diff for the first 2 commits wasn't picking up the difference between them, so I tried to revert it and see if I could find another solution. I still don't really understand why it treats the diff as if I had deleted everything and placed it again. I'll post a comment when I solve it.
Ok, the problem was that I was deleting the output at the end, and that was essentially rewriting the entire notebook. If you look at the last commit e82bd5b, the diff should be clear for review. I hope this is what you need, but let me know if there is anything else.
That looks much better! It would be great if you could get rid of the earlier commits so it shows a single first commit with the full copy (e.g. a large diff), and then a second commit with a diff similar to what you have now in the last commit. Otherwise, it is hard to track all of the changes in the earlier commits. One option could be to squash all of the commits (except the last one); this should result in a single commit that corresponds purely to the file move. Alternatively, it may be easier to start again from a clean branch with just those two commits.
Force-pushed from e82bd5b to 09e6b69.
ok. The commit history looks a little bit tidier now :)
Thanks for your PR, great work! Please find some comments below.
I think we should actually be able to achieve the same with relatively few line changes, to keep it closer to the tf1 version and to the other tf2 version.
Something else that I noticed is that when I run this demo against the old tf2 version in Colaboratory, I got quite different learning curves in the middle plot. It may be worth investigating, although maybe it is just different default initialization settings or something like that.
@@ -45,7 +45,7 @@
},
{
"cell_type": "code",
- "execution_count": 0,
+ "execution_count": null,
"metadata": {
"cellView": "form",
"colab": {},
(Can't comment below, but these two should be changed):
!pip install graph_nets "dm-sonnet<2" "tensorflow_probability<0.9"
to
!pip install "graph_nets>=1.1" "dm-sonnet>=2.0.0b0"
and
pip install graph_nets matplotlib scipy "tensorflow>=1.15,<2" "dm-sonnet<2" "tensorflow_probability<0.9"
to
pip install "graph_nets>=1.1" matplotlib scipy "tensorflow>=2.1.0-rc1" "dm-sonnet>=2.0.0b0" tensorflow_probability
@@ -2,7 +2,7 @@
"cells": [
Please make sure this runs in Google Colaboratory, and add the relevant TF2 version to the README (analogous to how it was done for the sort demo).
" input_graphs = networkxs_to_tf_graphs_tuple(inputs)\n", | ||
" target_graphs = networkxs_to_tf_graphs_tuple(targets)\n", | ||
" \n", | ||
" return input_graphs,target_graphs, raw_graphs\n", |
Leave a space after commas. It would be great if you could run a linter on this (but make sure it keeps the two-space indentation).
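For example, the return line above would then read (only the spacing changes; this is just the corrected snippet, not a full cell):

  return input_graphs, target_graphs, raw_graphs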
"\n", | ||
"\n", | ||
"def create_feed_dict(rand, batch_size, num_nodes_min_max, theta, input_ph,\n", | ||
" target_ph):\n", | ||
"def create_tf_GraphTouple(rand, batch_size, num_nodes_min_max, theta):\n", |
def create_batch(
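A rough sketch of what the renamed function could look like (the body is an assumption, reusing the notebook's existing generate_networkx_graphs and the networkxs_to_tf_graphs_tuple helper shown above):

def create_batch(rand, batch_size, num_nodes_min_max, theta):
  """Creates a batch of input/target GraphsTuples plus the raw networkx graphs."""
  inputs, targets, raw_graphs = generate_networkx_graphs(
      rand, batch_size, num_nodes_min_max, theta)
  input_graphs = networkxs_to_tf_graphs_tuple(inputs)
  target_graphs = networkxs_to_tf_graphs_tuple(targets)
  return input_graphs, target_graphs, raw_graphs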
"def create_placeholders(rand, batch_size, num_nodes_min_max, theta):\n", | ||
" \"\"\"Creates placeholders for the model training and evaluation.\n", | ||
"def networkxs_to_tf_graphs_tuple(graph_nxs,\n", | ||
" node_shape_hint=None,\n", |
Make sure arguments are aligned, and leave exactly two empty lines between method definitions :)
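For instance (hypothetical signatures, just illustrating arguments aligned with the opening parenthesis and two empty lines between definitions):

def networkxs_to_tf_graphs_tuple(graph_nxs,
                                 node_shape_hint=None,
                                 edge_shape_hint=None):
  ...


def create_batch(rand, batch_size, num_nodes_min_max, theta):
  ...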
" with tf.GradientTape() as tape:\n", | ||
" output_ops_tr = model(inputs_tr, num_processing_steps_tr)\n", | ||
" # Loss.\n", | ||
" loss_tr = create_loss_ops(targets_tr,output_ops_tr)\n", |
Missing space after the comma.
"\n", | ||
"# Get the input signature for that function by obtaining the specs\n", | ||
"input_signature = [\n", | ||
" utils_tf.specs_from_graphs_tuple(input_ph),\n", |
It should be 4 spaces in this case, as this is not a nested context but a line continuation.
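I.e. a 4-space hanging indent, along these lines (the second element is an assumption mirroring the first, for the target graphs):

input_signature = [
    utils_tf.specs_from_graphs_tuple(input_ph),
    utils_tf.specs_from_graphs_tuple(target_ph),
]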
@@ -802,36 +837,23 @@
"\n",
"# Data.\n",
"# Input and target placeholders.\n",
- "input_ph, target_ph = create_placeholders(rand, batch_size_tr,\n",
+ "input_ph, target_ph, _ = create_tf_GraphTouple(rand, batch_size_tr,\n",
Rename these to example_input_data and example_target_data; let's try to avoid "_ph".
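I.e. something along these lines (a sketch; in TF2 there are no placeholders, so this is just an example batch used to derive the tf.function input specs, and any names not in the diff are assumptions):

# One example batch, only used to build the input signature below.
example_input_data, example_target_data, _ = create_batch(
    rand, batch_size_tr, num_nodes_min_max_tr, theta)

input_signature = [
    utils_tf.specs_from_graphs_tuple(example_input_data),
    utils_tf.specs_from_graphs_tuple(example_target_data),
]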
" with tf.GradientTape() as tape:\n", | ||
" output_ops_tr = model(inputs_tr, num_processing_steps_tr)\n", | ||
" # Loss.\n", | ||
" loss_tr = create_loss_ops(targets_tr,output_ops_tr)\n", |
Rename this method to compute_loss
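For context, a minimal sketch of the TF2 training step this comment refers to, with the rename applied (assuming a Sonnet 2 optimizer and that compute_loss returns one loss per message-passing step, as in the other TF2 demo):

def update_step(inputs_tr, targets_tr):
  with tf.GradientTape() as tape:
    output_ops_tr = model(inputs_tr, num_processing_steps_tr)
    # Loss averaged over all message-passing steps.
    loss_tr = compute_loss(targets_tr, output_ops_tr)
    loss_tr = tf.math.reduce_sum(loss_tr) / num_processing_steps_tr
  gradients = tape.gradient(loss_tr, model.trainable_variables)
  optimizer.apply(gradients, model.trainable_variables)
  return output_ops_tr, loss_tr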
@@ -998,12 +1034,12 @@
"node_size = 120\n",
"min_c = 0.3\n",
"num_graphs = len(raw_graphs)\n",
- "targets = utils_np.graphs_tuple_to_data_dicts(test_values[\"target\"])\n",
+ "targets = graphs_touple_to_graph_list(targets_ge)\n",
I have checked, and this one and the one below just work with utils_np.graphs_tuple_to_data_dicts, so you can remove the graphs_touple_to_graph_list method. (You will also need to revert the changes in line 1017 and line 1043.)
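For reference, a minimal usage sketch of the existing helper (targets_ge is the evaluation GraphsTuple from the notebook; everything else here is an assumption):

from graph_nets import utils_np

# Splits a batched GraphsTuple back into one data dict per graph
# ("nodes", "edges", "senders", "receivers", "globals"), so the custom
# graphs_touple_to_graph_list helper is not needed.
targets = utils_np.graphs_tuple_to_data_dicts(targets_ge)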
Hi @maguileracanon, I sent my review a while back; please let me know if you are still planning to get back to this :)
Hello, graph-nets authors!
Following the discussion in #115 I wanted to contribute, so I adapted the shortest_path example to run in TF2, using the Sonnet Adam optimizer. I added a couple of functions to make it runnable; ideally they should probably live inside utils_np or utils_tf, but I kept them here for simplicity.
I hope it is useful!
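For reference, a minimal sketch of the optimizer and compilation pattern described above (assuming Sonnet 2 and a graph_nets model such as models.EncodeProcessDecode; update_step and input_signature refer to the snippets discussed in the review):

import sonnet as snt
import tensorflow as tf

learning_rate = 1e-3
optimizer = snt.optimizers.Adam(learning_rate)

# Compile the training step once, with explicit specs so varying graph
# sizes between batches do not trigger retracing.
compiled_update_step = tf.function(update_step, input_signature=input_signature)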