
Framework ops #255

Closed
wants to merge 77 commits into from

Conversation

JimClarke5
Contributor

Added org.tensorflow.framework.op.FrameworkOps to tensorflow-framework.
For now, these are not generated, but hard-coded.

These are higher-level ops that may invoke core ops. A higher-level op may perform its
operation solely within the TensorFlow framework, or preprocess its Operands before
invoking a core-level op.
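
For illustration, here is a minimal sketch of that pattern: an L2 normalization composed purely from core ops. This is a hypothetical standalone method (the class name, method name, and epsilon value are made up for the example), not the code added in this PR:

```java
import org.tensorflow.Operand;
import org.tensorflow.op.Ops;
import org.tensorflow.op.core.ReduceSum;
import org.tensorflow.types.TFloat32;

public class L2NormalizeSketch {
  /** Hypothetical example only: a framework-level op built by composing core ops. */
  public static Operand<TFloat32> l2Normalize(Ops tf, Operand<TFloat32> x, int axis) {
    // Composition/preprocessing happens at the framework level, using core ops.
    Operand<TFloat32> squareSum =
        tf.reduceSum(tf.math.square(x), tf.constant(axis), ReduceSum.keepDims(true));
    // Clamp with a small epsilon so rsqrt never sees zero.
    Operand<TFloat32> invNorm =
        tf.math.rsqrt(tf.math.maximum(squareSum, tf.constant(1e-12f)));
    return tf.math.mul(x, invNorm);
  }
}
```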

As part of this PR, the generated tf.nn.raw package was removed and those ops are now generated directly into org.tensorflow.op.nn under tensorflow-core-api.
org.tensorflow.op.NnOps uses the core (formerly raw) ops for
SoftmaxCrossEntropyWithLogits and SparseSoftmaxCrossEntropyWithLogits.

FrameworkOps now contains the high-level ops sigmoidCrossEntropyWithLogits, softmaxCrossEntropyWithLogits, and sparseSoftmaxCrossEntropyWithLogits.

I also moved SetsOps to org.tensorflow.framework.op and l2Normalize to org.tensorflow.framework.op.MathOps.
There will be more framework ops when the layers are checked in.

The easiest way to use it is, for example:

FrameworkOps fops = FrameworkOps.create(tf);
Operand<T> result = fops.math.l2Normalize(tLabels, axis);
...
Operand<T> diffResult = fops.sets.difference(weightsShape2D, validDims);
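
The nn group would presumably follow the same pattern, e.g. (illustrative call only; the exact parameter lists in the generated NnOps may differ):

```java
// Illustrative: assumes a (labels, logits) signature on the framework nn group.
Operand<TFloat32> perExampleLoss = fops.nn.sigmoidCrossEntropyWithLogits(labels, logits);
```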

Sync with master tensorflow on upstream
Merge main branch to local branch
Update after losses merge
Pull latest tensorflow master
Resync with origin/master
Sync with tensorflow/java master
Moved tf.raw.nn Ops to tf.nn.
Changed generation to generate SoftmaxCrossEntropyWithLogits and SparseSoftmaxCrossEntropyWithLogits into core NnOps (tf.nn).
Added NnOps and SetOps as groups.
Fixed MetricsHelper and Losses to use the new FrameworkOps.
Moved SetsOps to framework.op.
Added NnOps and SetOps as groups.
Fixed MetricsHelper and Losses to use the new FrameworkOps.
Moved SetsOps to framework.op.
@rnett
Contributor

rnett commented Mar 27, 2021

There's a bit of a soft boundary here, I think, because there are ops like booleanMask that are composed of multiple core ops like the ones you moved, but are semantically core ops (and are core-ish in Python). I'd like to keep at least things like booleanMask in core.

I'd prefer to keep these in core for the Kotlin API. Currently it does all of its op handling using Ops and the @Operator classes from the core API. Adapting that for this would require adding a mirror of FrameworkOps in Kotlin and adding a tensorflow-framework-kotlin (although I'd probably make it look like a group using extensions). Would it be possible to keep these in core-api and Ops but put them in their own group, something like Ops.framework.math.l2Normalize?

Regardless, we should codify the "which ops go where" guidelines and add them to the contributors doc.

@JimClarke5
Contributor Author

I believe the goal was to keep the core ops separate from the framework ops. A few months back, I started out using nn.raw and nn, but that was causing some confusion, because the higher-level ops expect different shapes than their core counterparts.
Also, there are some higher-level ops, like tf.tensordot, that are not core APIs at all but are written in Python, leveraging other core ops in tf.math and tf.linalg.
I have a version written in Java that is used in the Dense layer that I haven't pushed yet.

I noticed that TF Python added tf.raw_ops.
Note: tf.raw_ops provides direct/low-level access to all TensorFlow ops. See the RFC for details. Unless you are a library writer, you likely do not need to use these ops directly.

@karllessard @Craigacp do you want to provide input?

@rnett
Contributor

rnett commented Mar 28, 2021

Yeah, I don't particularly care either way about separate or not (although depending on what we define as "core", we may want to make booleanMask and variants framework ops, since they are defined in Python using other ops). But for use with the Kotlin API, it's much easier if this separate API a) lives in core-api and b) is used from Ops. Of course, that means you won't be able to get them separately, but I would think a separate group-like construct would be enough?

Also, there are potentially things like #248's top-level scopes, initScope, and resource management for eager tensors that use Ops, and duplicating them here would be a bit of a pain.

If we do decide to leave them in framework, it would be much easier for those and for the Kotlin API if we generated FrameworkOps as a wrapper around Ops instead of Scope. That way you don't have to deal with any special handling of Ops. You already have a constructor for that, so it would be easy enough to change the generator to use Ops like that. It would also make the op methods much easier to write.
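
For illustration, a rough sketch of what generating FrameworkOps over Ops (rather than Scope) might look like; the constructor wiring and the coreOps() accessor are assumptions for the sake of the example, not the PR's generated code:

```java
import org.tensorflow.op.Ops;

/**
 * Sketch only: a FrameworkOps generated as a thin wrapper around Ops, so each
 * framework group can call core groups (tf.math, tf.linalg, ...) directly.
 */
public final class FrameworkOps {
  public final NnOps nn;
  public final MathOps math;
  public final SetOps sets;
  private final Ops tf;

  private FrameworkOps(Ops tf) {
    this.tf = tf;
    this.nn = new NnOps(tf);     // assumed group constructors taking Ops
    this.math = new MathOps(tf);
    this.sets = new SetOps(tf);
  }

  /** Creates the framework API from an existing core Ops instance. */
  public static FrameworkOps create(Ops tf) {
    return new FrameworkOps(tf);
  }

  /** Assumed accessor: returns the underlying core Ops for mixing core and framework calls. */
  public Ops coreOps() {
    return tf;
  }
}
```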

Unrelated to everything else, but something I thought of while looking at Python's API: having some kind of frameworkOps getter in Ops would help discoverability a lot. Python has all of theirs under the same namespace (i.e. tf.math.abs is just a wrapper around the raw op), and if we separate it out like this, I could see people not finding the ops, or using the core ones unintentionally when the Python op of the same name is a framework op.
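
For example, discoverability could be as small as a hypothetical accessor on Ops (tf.frameworkOps() does not exist; it is only meant to illustrate the idea):

```java
// Hypothetical only: no frameworkOps() getter exists on Ops today.
FrameworkOps fops = tf.frameworkOps();
Operand<TFloat32> normalized = fops.math.l2Normalize(labels, axis);
```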

JimClarke5 mentioned this pull request Apr 18, 2021
Sync with Metrics Phase 2
Moved tf.raw.nn Ops to tf.nn.
Changed generation to generate SoftmaxCrossEntropyWithLogits and SparseSoftmaxCrossEntropyWithLogits into core NnOps (tf.nn).
Added NnOps and SetOps as groups.
Fixed MetricsHelper and Losses to use the new FrameworkOps.
Moved SetsOps to framework.op.
Added NnOps and SetOps as groups.
Fixed MetricsHelper and Losses to use the new FrameworkOps.
Moved SetsOps to framework.op.
Added NnOps and SetOps as groups.
Fixed MetricsHelper and Losses to use the new FrameworkOps.
Moved SetsOps to framework.op.
…Logits and sparseSoftmaxCrossEntropyWithLogits
Moved tf.raw.nn Ops to tf.nn.
Changed generation to generate SoftmaxCrossEntropyWithLogits and SparseSoftmaxCrossEntropyWithLogits into core NnOps (tf.nn).
Added NnOps and SetOps as groups.
Fixed MetricsHelper and Losses to use the new FrameworkOps.
Moved SetsOps to framework.op.
…l on the AssertThats. This change is unrelated to this PR, but the bug showed up here.
Move the functions to separate classes.
…to Framework_Ops

# Conflicts:
#	tensorflow-core/tensorflow-core-api/src/gen/resources/ops.pb
#	tensorflow-framework/src/main/java/org/tensorflow/framework/losses/Losses.java
#	tensorflow-framework/src/main/java/org/tensorflow/framework/metrics/impl/MetricsHelper.java
#	tensorflow-framework/src/main/java/org/tensorflow/framework/op/FrameworkOps.java
#	tensorflow-framework/src/main/java/org/tensorflow/framework/op/LinalgOps.java
#	tensorflow-framework/src/main/java/org/tensorflow/framework/op/MathOps.java
#	tensorflow-framework/src/main/java/org/tensorflow/framework/op/NnOps.java
#	tensorflow-framework/src/main/java/org/tensorflow/framework/op/SetOps.java
#	tensorflow-framework/src/main/java/org/tensorflow/framework/op/nn/SigmoidCrossEntropyWithLogits.java
#	tensorflow-framework/src/main/java/org/tensorflow/framework/op/nn/SoftmaxCrossEntropyWithLogits.java
#	tensorflow-framework/src/main/java/org/tensorflow/framework/op/nn/SparseSoftmaxCrossEntropyWithLogits.java
#	tensorflow-framework/src/main/java/org/tensorflow/framework/op/sets/Sets.java
#	tensorflow-framework/src/test/java/org/tensorflow/framework/op/MathOpsTest.java
#	tensorflow-framework/src/test/java/org/tensorflow/framework/op/SetOpsTest.java
@google-cla

google-cla bot commented Jun 17, 2021

We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for all the commit author(s) or Co-authors. If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google.
In order to pass this check, please resolve this problem and then comment "@googlebot I fixed it." If the bot doesn't comment, it means it doesn't think anything has changed.

ℹ️ Googlers: Go here for more info.

@JimClarke5
Contributor Author

@karllessard I got the "Need a CLA for one or more commit authors" error. How do I find out who that is?

@karllessard
Collaborator

There are a lot of commits in this PR that are not related to the framework ops; your rebase is incorrect. Do it once again interactively (-i), making sure to drop all commits that are unrelated to your changes (you probably started your work from a branch that was not yet merged to master?). Also, make sure your Git repo metadata is in sync with the official one by fetching it remotely before rebasing.

@JimClarke5
Contributor Author

I did all that. It would probably be easier to create a new branch on the latest master.

@JimClarke5
Contributor Author

@karllessard This demonstrates an issue I keep running into. I push a PR, but then it takes so long to review that the PR gets badly out of date with master. Rebasing seems problematic when the delta is so big.

@karllessard
Collaborator

Yeah @JimClarke5, I understand your pain. Making many smaller PRs would definitely help get them merged before that happens, when that's feasible; e.g., we can add one metric at a time.

Otherwise, we can be more relaxed on the framework and merge the large PRs without a deep review, but I would prefer the first approach.

@JimClarke5
Contributor Author

@karllessard Usually the first phase of a PR has a lot of plumbing, which makes it more complex. Subsequent phases can be smaller. I will redo this PR with a new branch.

@JimClarke5
Contributor Author

JimClarke5 commented Jun 20, 2021

This branch is way out of sync with master.
I am closing it and will reopen a new PR based on the new branch Framework_OPS_2.
