
update shortest_path to TF2 #119

Open · wants to merge 2 commits into master
Conversation

@maguileracanon commented May 29, 2020

Hello, graph-nets authors!
Following the discussion in #115, I wanted to contribute, so I adapted the shortest_path example to run in TF2, using the Sonnet Adam optimizer. I added a couple of functions to make it runnable. Ideally they should probably live inside utils_np or utils_tf, but I kept them here for simplicity.
I hope it is useful!

@googlebot

Thanks for your pull request. It looks like this may be your first contribution to a Google open source project (if not, look below for help). Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please visit https://cla.developers.google.com/ to sign.

Once you've signed (or fixed any issues), please reply here with @googlebot I signed it! and we'll verify it.


What to do if you already signed the CLA

Individual signers
Corporate signers

ℹ️ Googlers: Go here for more info.

@maguileracanon (Author)

@googlebot I signed it!

@googlebot

CLAs look good, thanks!


@alvarosg (Collaborator) commented Jun 2, 2020

Thanks for the PR and the initiative @maguileracanon !

To make the review easier, do you think it would be possible to:

  • Empty the notebook outputs (we prefer not committing binary data, and the demos are super easy to run in Google Colaboratory anyway).
  • Because this forks an existing TF1 file, ideally the first commit would be a verbatim copy of the TF1 version, and the second commit would show a clean diff of only the new code for review purposes. Do you think you can split the commits that way?

Thanks in advance :)
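The first request, emptying the notebook outputs, can be scripted with the standard library alone. A minimal sketch (the helper name `clear_notebook_outputs` is mine, not part of the repo):

```python
import json

def clear_notebook_outputs(nb_json: str) -> str:
  """Return notebook JSON with code-cell outputs and execution counts cleared."""
  nb = json.loads(nb_json)
  for cell in nb.get("cells", []):
    if cell.get("cell_type") == "code":
      cell["outputs"] = []
      cell["execution_count"] = None  # serialized as `"execution_count": null`
  return json.dumps(nb, indent=1)

# Tiny one-cell notebook to demonstrate on.
demo = json.dumps({
    "cells": [{"cell_type": "code", "execution_count": 3,
               "outputs": [{"output_type": "stream", "text": ["42\n"]}],
               "source": ["print(42)"], "metadata": {}}],
    "metadata": {}, "nbformat": 4, "nbformat_minor": 0,
})
cleared = json.loads(clear_notebook_outputs(demo))
print(cleared["cells"][0]["outputs"])  # → []
```

This is also what "Clear all outputs" in Colab or Jupyter does before saving; either route avoids committing binary plot data.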

@alvarosg (Collaborator) commented Jun 3, 2020

Thanks, do you think it would be possible to rewrite the commit history to have just those two CLs (similarly to how it was done here)?

Currently, the diff of the "update to tf2" does not pick up on the specific changes that were made (ideally the changes would be able to highlight the tf2 difference specifically). If possible it would look more similar to this diff.

I know GitHub support for diffs in Colab notebooks is not great, and maybe the changes are large enough that the diff cannot figure out the shared parts. But otherwise, it is hard to see what the "update to tf2" and "Revert "update to tf2"" commits are doing :)

@maguileracanon (Author)

Hi! Yes, sorry: the revert commit was there because, as you mention, I noticed that the diff for the first two commits wasn't picking up the difference between them, so I reverted it while looking for another solution. I still don't really understand why the diff treats it as if I had deleted everything and added it back. I'll post a comment when I solve it.

@maguileracanon (Author)

OK, the problem was that deleting the output at the end was effectively rewriting the entire notebook. If you look at the last commit e82bd5b, the diff should be clear for review. I hope this is what you need, but let me know if there is anything else.

@alvarosg (Collaborator) commented Jun 3, 2020

That looks much better! It would be great if you could get rid of the earlier commits, so the PR shows a single first commit with the full copy (i.e. a large diff) and then a second commit with a diff similar to your last commit. Otherwise, it is hard to track all of the changes across the earlier commits.

One option could be to squash all of the commits (except the last one). This should result in a single commit that corresponds purely to the file move.

Alternatively, it may be easier to start again, by:

  1. Squash everything and move the resulting commit to a different branch.
  2. Remove all commits from this branch.
  3. Create a single new commit in this branch that just copies the demo file into the tf2 folder.
  4. Cherry-pick the squashed commit back onto this branch (if everything else was correct, this should give you a small diff after the rebase).
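The squash in the first option can be done with `git reset --soft`. A self-contained sketch in a throwaway repository (file name, commit messages, and history are illustrative, not the PR's actual commits):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "Example User"

# Three commits standing in for the messy history.
echo v1 > shortest_path_tf2.ipynb
git add shortest_path_tf2.ipynb
git commit -qm "copy to tf2 folder"
echo v2 > shortest_path_tf2.ipynb
git commit -qam "update to tf2"
echo v3 > shortest_path_tf2.ipynb
git commit -qam "more fixes"

# Squash the last two commits into one; --soft keeps the working tree
# and index, so the combined changes land in a single new commit.
git reset --soft HEAD~2
git commit -qm "update shortest_path to TF2"

git log --oneline   # two commits remain: the copy, then the TF2 update
```

After a history rewrite like this, the branch has to be force-pushed (`git push --force-with-lease`) to update the PR.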

Mara TalbotPC added 2 commits June 3, 2020 18:32
Squashed commit:

[e47daf1] Revert "update to tf2"

This reverts commit a292023.

[a292023] update to tf2

[9c55e7d] copy to tf2 folder

[059d4bb] update shortest_path to TF2
@maguileracanon (Author)

OK, the commit history looks a little bit tidier now :)

@alvarosg (Collaborator) left a comment
Thanks for your PR, great work! Please find some comments below.

I think we should actually be able to achieve the same with only a few line changes, to keep it closer to the TF1 version and to the other TF2 demos.

Something else I noticed is that when I ran this demo against the old version in Colaboratory, I got quite different learning curves in the middle plot:

TF2 version: [learning-curve plot omitted]

TF1 version: [learning-curve plot omitted]

It may be worth investigating, although maybe it is just different default initialization settings or something like that.

@@ -45,7 +45,7 @@
},
{
"cell_type": "code",
"execution_count": 0,
"execution_count": null,
"metadata": {
"cellView": "form",
"colab": {},
@alvarosg (Collaborator):

(can't comment below, but these two should be changed):

!pip install graph_nets "dm-sonnet<2" "tensorflow_probability<0.9"
to
!pip install "graph_nets>=1.1" "dm-sonnet>=2.0.0b0"

and
pip install graph_nets matplotlib scipy "tensorflow>=1.15,<2" "dm-sonnet<2" "tensorflow_probability<0.9"
to
pip install "graph_nets>=1.1" matplotlib scipy "tensorflow>=2.1.0-rc1" "dm-sonnet>=2.0.0b0" tensorflow_probability

@@ -2,7 +2,7 @@
"cells": [
@alvarosg (Collaborator):

Please make sure this runs in Google Colaboratory, and add the relevant TF2 version to the README (analogous to how it was done for the sort demo).

" input_graphs = networkxs_to_tf_graphs_tuple(inputs)\n",
" target_graphs = networkxs_to_tf_graphs_tuple(targets)\n",
" \n",
" return input_graphs,target_graphs, raw_graphs\n",
@alvarosg (Collaborator):

Leave a space after commas; it would be great if you could run a linter on this (but make sure it keeps the two-space indentation).

"\n",
"\n",
"def create_feed_dict(rand, batch_size, num_nodes_min_max, theta, input_ph,\n",
" target_ph):\n",
"def create_tf_GraphTouple(rand, batch_size, num_nodes_min_max, theta):\n",
@alvarosg (Collaborator):

def create_batch(

"def create_placeholders(rand, batch_size, num_nodes_min_max, theta):\n",
" \"\"\"Creates placeholders for the model training and evaluation.\n",
"def networkxs_to_tf_graphs_tuple(graph_nxs,\n",
" node_shape_hint=None,\n",
@alvarosg (Collaborator):

Make sure arguments are aligned, and leave exactly two empty lines between method definitions :)

" with tf.GradientTape() as tape:\n",
" output_ops_tr = model(inputs_tr, num_processing_steps_tr)\n",
" # Loss.\n",
" loss_tr = create_loss_ops(targets_tr,output_ops_tr)\n",
@alvarosg (Collaborator):

Missing space after the comma.

"\n",
"# Get the input signature for that function by obtaining the specs\n",
"input_signature = [\n",
" utils_tf.specs_from_graphs_tuple(input_ph),\n",
@alvarosg (Collaborator):

It should be 4 spaces in this case, as this is not a nested context but a line continuation.

@@ -802,36 +837,23 @@
"\n",
"# Data.\n",
"# Input and target placeholders.\n",
"input_ph, target_ph = create_placeholders(rand, batch_size_tr,\n",
"input_ph, target_ph, _ = create_tf_GraphTouple(rand, batch_size_tr,\n",
@alvarosg (Collaborator):

example_input_data, example_target_data

let's try to avoid "_ph"

" with tf.GradientTape() as tape:\n",
" output_ops_tr = model(inputs_tr, num_processing_steps_tr)\n",
" # Loss.\n",
" loss_tr = create_loss_ops(targets_tr,output_ops_tr)\n",
@alvarosg (Collaborator):

Rename this method to compute_loss

@@ -998,12 +1034,12 @@
"node_size = 120\n",
"min_c = 0.3\n",
"num_graphs = len(raw_graphs)\n",
"targets = utils_np.graphs_tuple_to_data_dicts(test_values[\"target\"])\n",
"targets = graphs_touple_to_graph_list(targets_ge)\n",
@alvarosg (Collaborator):

I have checked, and this one and the one below work directly with utils_np.graphs_tuple_to_data_dicts, so you can remove the graphs_touple_to_graph_list method.

(You will need to also revert the changes in line 1017 and line 1043)

@alvarosg (Collaborator)

Hi @maguileracanon, I sent my review a while back; please let me know if you are still planning to get back to this :)


3 participants