Framework ops #255
Conversation
Sync with master tensorflow on upstream
Merge main branch to local branch
Update after losses merge
Fix Javadoc errors (tensorflow#152)
pull type def
Metrics Phase 1 (tensorflow#180)
Pull latest tensorflow master
Merge with latest
Resync with origin/master
Sync with tensorflow/java master
Moved tf.raw.nn Ops to tf.nn. Changed generation to generate SoftmaxCrossEntropyWithLogits and SparseSoftmaxCrossEntropyWithLogits to core NNOps (tf.nn).
Added NnOps and SetOps as groups. Fixed MetricsHelper and Losses to use the new FrameworkOps. Moved SetsOps to framework.op.
There's a bit of a soft boundary here, I think, because there are ops like […] I'd prefer to keep these in […]. Regardless, we should codify the "which ops go where" guidelines and add them to the contributors doc.
I believe the goal was to keep the core ops separate from the framework ops. A few months back, I started out using […]. I notice that TF Python added tf.raw_ops. @karllessard @Craigacp do you want to provide input?
Yeah, I don't particularly care either way about separate or not (although depending on what we define as "core", we may want to make booleanMask and variants a framework op, as it's defined in Python using other ops). But for use with the Kotlin API, it's much easier if this "separate" a) lives in […]. Also, there are potentially things like #248's top-level scopes, initScope, and resource management for eager tensors that use Ops, and duplicating them here would be a bit of a pain. If we do decide to leave them in […]. Unrelated to everything else, but something I thought of while looking at Python's: having some kind of […].
Sync with Metrics Phase 2
Sync with master
Sync with Regularizers
Moved tf.raw.nn Ops to tf.nn. Changed generation to generate SoftmaxCrossEntropyWithLogits and SparseSoftmaxCrossEntropyWithLogits to core NNOps (tf.nn).
Added NnOps and SetOps as groups. Fixed MetricsHelper and Losses to use the new FrameworkOps. Moved SetsOps to framework.op.
…Logits and sparseSoftmaxCrossEntropyWithLogits
Moved tf.raw.nn Ops to tf.nn. Changed generation to generate SoftmaxCrossEntropyWithLogits and SparseSoftmaxCrossEntropyWithLogits to core NNOps (tf.nn).
Added NnOps and SetOps as groups. Fixed MetricsHelper and Losses to use the new FrameworkOps. Moved SetsOps to framework.op.
…l on the AssertThats. This change is unrelated to this PR, but the bug showed up here.
Move the functions to separate classes.
…to Framework_Ops

Conflicts:
- tensorflow-core/tensorflow-core-api/src/gen/resources/ops.pb
- tensorflow-framework/src/main/java/org/tensorflow/framework/losses/Losses.java
- tensorflow-framework/src/main/java/org/tensorflow/framework/metrics/impl/MetricsHelper.java
- tensorflow-framework/src/main/java/org/tensorflow/framework/op/FrameworkOps.java
- tensorflow-framework/src/main/java/org/tensorflow/framework/op/LinalgOps.java
- tensorflow-framework/src/main/java/org/tensorflow/framework/op/MathOps.java
- tensorflow-framework/src/main/java/org/tensorflow/framework/op/NnOps.java
- tensorflow-framework/src/main/java/org/tensorflow/framework/op/SetOps.java
- tensorflow-framework/src/main/java/org/tensorflow/framework/op/nn/SigmoidCrossEntropyWithLogits.java
- tensorflow-framework/src/main/java/org/tensorflow/framework/op/nn/SoftmaxCrossEntropyWithLogits.java
- tensorflow-framework/src/main/java/org/tensorflow/framework/op/nn/SparseSoftmaxCrossEntropyWithLogits.java
- tensorflow-framework/src/main/java/org/tensorflow/framework/op/sets/Sets.java
- tensorflow-framework/src/test/java/org/tensorflow/framework/op/MathOpsTest.java
- tensorflow-framework/src/test/java/org/tensorflow/framework/op/SetOpsTest.java
We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for all the commit author(s) or Co-authors. If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google. ℹ️ Googlers: Go here for more info.
@karllessard I got the […]
There are a lot of commits in this PR which are not related to the framework ops; your rebase is incorrect. Do it once again interactively (…).
I did all that. It would probably be easier to create a new branch on the latest master.
@karllessard The […] demonstrates an issue I keep running into: I push a PR, but then it takes so long to review that the PR gets badly out of date with master.
Yeah @JimClarke5, I understand your pain. Making many smaller PRs would definitely help get them merged before that happens, when that's feasible; e.g. we can add one metric at a time. Otherwise we can be more relaxed on the framework and merge the large PRs without a deep review, but I would prefer the first approach.
@karllessard usually the first phase of a PR has a lot of plumbing, which makes it more complex. Subsequent phases can be smaller. I will redo this PR with a new branch.
This branch is way out of sync with master.
Added `org.tensorflow.framework.op.FrameworkOps` to `tensorflow-framework`. For now, these ops are not generated, but hard coded. These are higher-level ops that may invoke core ops. A higher-level op may perform the operation solely in the TensorFlow framework, or preprocess the operands before invoking a core-level op.
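To illustrate the pattern, here is a minimal sketch of an `l2Normalize`-style framework op (one of the ops this PR moves into `MathOps`), built purely from the core `Ops` API. The method body, the static-helper placement, and the epsilon handling are illustrative assumptions, not this PR's actual code:

```java
import org.tensorflow.Operand;
import org.tensorflow.op.Ops;
import org.tensorflow.op.core.ReduceSum;
import org.tensorflow.types.TFloat32;

public final class L2NormalizeSketch {

  /** Normalizes x along the given axes, using only core ops. */
  public static Operand<TFloat32> l2Normalize(Ops tf, Operand<TFloat32> x, int[] axes) {
    // Preprocess: sum of squares along the axes, keeping dims so the result broadcasts.
    Operand<TFloat32> squareSum =
        tf.reduceSum(tf.math.square(x), tf.constant(axes), ReduceSum.keepDims(true));
    // Clamp with a small epsilon so we never divide by zero.
    Operand<TFloat32> invNorm =
        tf.math.rsqrt(tf.math.maximum(squareSum, tf.constant(1e-12f)));
    // The final core op combines the preprocessed operands.
    return tf.math.mul(x, invNorm);
  }
}
```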
As part of this PR, the `tf.nn.raw` generated package was removed, and those ops are now generated directly into `org.tensorflow.op.nn` under `tensorflow-core-api`. `org.tensorflow.op.NnOps` uses the core (formerly raw) ops for `SoftmaxCrossEntropyWithLogits` and `SparseSoftmaxCrossEntropyWithLogits`. `FrameworkOps` now contains the high-level ops `sigmoidCrossEntropyWithLogits`, `softmaxCrossEntropyWithLogits`, and `sparseSoftmaxCrossEntropyWithLogits`.
Also, I moved `SetsOps` to `org.tensorflow.framework.op` and `l2Normalize` to `org.tensorflow.framework.op.MathOps`. There will be more framework ops when the layers are checked in.
The easiest way to use it, for example, is:
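A minimal sketch of that usage, assuming `FrameworkOps.create(Ops)` wraps an existing core `Ops` instance and that the `nn` group exposes `sigmoidCrossEntropyWithLogits` (the class and group names come from the files in this PR; the exact signatures are assumptions):

```java
import org.tensorflow.Graph;
import org.tensorflow.Operand;
import org.tensorflow.framework.op.FrameworkOps;
import org.tensorflow.op.Ops;
import org.tensorflow.types.TFloat32;

public class FrameworkOpsUsage {
  public static void main(String[] args) {
    try (Graph g = new Graph()) {
      Ops tf = Ops.create(g);                      // core ops
      FrameworkOps fops = FrameworkOps.create(tf); // framework ops over the same scope

      Operand<TFloat32> labels = tf.placeholder(TFloat32.class);
      Operand<TFloat32> logits = tf.placeholder(TFloat32.class);

      // High-level framework op: preprocesses the operands, then calls core ops.
      Operand<TFloat32> loss = fops.nn.sigmoidCrossEntropyWithLogits(labels, logits);
    }
  }
}
```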