[Good First Issue][TF FE] Support Conj operation for TensorFlow models #21462
Comments
.take
Thank you for looking into this issue! Please let us know if you have any questions or require any help.
Hello @rupeshs! Do you have any questions or require any help? Just yesterday our CONTRIBUTING.md was updated with a technical guide - I highly recommend checking it out. :)
Hi @jvr0123, do you have any update on this task? Best regards,
Hi @rkazants, Will have a pull request by the weekend. Was having some issues with the testing framework that I think I have mostly figured out.
I am happy to announce that we have created a channel dedicated to Good First Issues support on our Intel DevHub Discord server! Join it to receive support, engage in discussions, ask questions and talk to OpenVINO developers. |
Hello, extreme apologies for the wait. I have just created a draft pull request. Thanks for linking the other PR, was extremely helpful! |
Context
The OpenVINO component responsible for support of TensorFlow models is called the TensorFlow Frontend (TF FE). TF FE converts a model represented in the TensorFlow opset to a model in the OpenVINO opset.
In order to infer TensorFlow models with the Conj operation in OpenVINO, TF FE needs to be extended with support for this operation.
What needs to be done?
To support the Conj operation, you need to add the corresponding loader to the TF FE op directory and register it in the dictionary of loaders. One loader is responsible for the conversion (or decomposition) of one type of TensorFlow operation. A sketch of such a registration entry is shown right after this paragraph.
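As an illustration only, the sketch below shows what such a registration entry might look like. The file path, the `get_supported_ops()` map and the `CreatorFunction` wrapper are assumptions modeled after how existing loaders such as `Einsum` appear to be registered, and `translate_conj_op` is the hypothetical loader this issue asks you to add; check the current sources for the exact layout.

```cpp
// Assumed excerpt from the TF FE dictionary of loaders
// (e.g. src/frontends/tensorflow/src/op_table.cpp); names and layout are assumptions.
const std::map<std::string, CreatorFunction> get_supported_ops() {
    return {
        // ... existing loaders ...
        {"Einsum", CreatorFunction(translate_einsum_op)},
        // new entry registering the Conj loader by TensorFlow operation name
        {"Conj", CreatorFunction(translate_conj_op)},
        // ... existing loaders ...
    };
}
```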
Here is an example of a loader implementation for the TensorFlow `Einsum` operation (a condensed sketch is reproduced below). In this example, `translate_einsum_op` converts TF `Einsum` into OV `Einsum`. The `NodeContext` object passed into the loader packs all information about the inputs and attributes of the `Einsum` operation. The loader retrieves the equation attribute using the `NodeContext::get_attribute()` method, prepares the input vector, creates an `Einsum` operation from the OV opset and returns a vector of outputs.
A loader's responsibility is to parse operation attributes, prepare inputs and express the TF operation via a sub-graph of OV operations. The `Einsum` example demonstrates the resulting sub-graph with a single operation; in PR #19007 you can see an operation decomposed into a multi-node sub-graph.
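The original issue links to the `Einsum` loader's source; the condensed sketch below is reconstructed from memory of the TF FE sources, so the exact headers, opset version and helper names (`default_op_checks`, `set_node_name`) are assumptions and may differ from the current repository.

```cpp
// Condensed sketch of the Einsum loader (assumed location:
// src/frontends/tensorflow_common/src/op/einsum.cpp); helper names are assumptions.
#include "common_op_table.hpp"
#include "openvino/opsets/opset7.hpp"

using namespace std;
using namespace ov::opset7;

namespace ov {
namespace frontend {
namespace tensorflow {
namespace op {

OutputVector translate_einsum_op(const NodeContext& node) {
    // validate that the node is an Einsum with at least one input
    default_op_checks(node, 1, {"Einsum"});

    // retrieve the "equation" attribute packed into NodeContext
    auto equation = node.get_attribute<string>("equation");

    // collect all inputs of the TF Einsum node
    OutputVector inputs;
    for (size_t input_ind = 0; input_ind < node.get_input_size(); ++input_ind) {
        inputs.push_back(node.get_input(static_cast<int>(input_ind)));
    }

    // create the OV Einsum operation and preserve the original node name
    auto einsum = make_shared<Einsum>(inputs, equation);
    set_node_name(node.get_name(), einsum);
    return {einsum};
}

}  // namespace op
}  // namespace tensorflow
}  // namespace frontend
}  // namespace ov
```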
Hint
OpenVINO does not directly support complex tensors. To support them, we use `ComplexTypeMark`, which the loader for `Conj` should propagate forward by representing a complex tensor as a floating-point tensor with an additional trailing dimension holding the real and imaginary parts. See an example of complex tensor support in the `ComplexAbs` loader from #20860.
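Purely as a sketch of the idea, the code below shows one way a `Conj` loader could propagate `ComplexTypeMark`, assuming the (real, imaginary) packing in the last dimension described above. The extra boolean argument of `default_op_checks`, `get_complex_part_type()` and the `ComplexTypeMark` constructor signature are assumptions modeled after other complex-aware loaders, so verify them against the `ComplexAbs` loader in #20860.

```cpp
// Hypothetical sketch of a Conj loader with ComplexTypeMark propagation;
// file location, helper names and signatures are assumptions.
#include "common_op_table.hpp"
#include "helper_ops/complex_type_mark.hpp"
#include "openvino/op/concat.hpp"
#include "openvino/op/constant.hpp"
#include "openvino/op/gather.hpp"
#include "openvino/op/negative.hpp"

using namespace std;
using namespace ov;
using namespace ov::op;

namespace ov {
namespace frontend {
namespace tensorflow {
namespace op {

OutputVector translate_conj_op(const NodeContext& node) {
    // the last argument is assumed to enable complex type support for this loader
    default_op_checks(node, 1, {"Conj"}, true);
    auto input = node.get_input(0);

    auto complex_type_mark = as_type_ptr<ComplexTypeMark>(input.get_node_shared_ptr());
    if (complex_type_mark) {
        auto complex_part_type = complex_type_mark->get_complex_part_type();
        // the underlying floating-point tensor packs [real, imag] in the last dimension
        auto data = complex_type_mark->input_value(0);

        auto minus_one = v0::Constant::create(element::i32, Shape{1}, {-1});
        auto real_idx = v0::Constant::create(element::i32, Shape{1}, {0});
        auto imag_idx = v0::Constant::create(element::i32, Shape{1}, {1});
        auto real = make_shared<v8::Gather>(data, real_idx, minus_one);
        auto imag = make_shared<v8::Gather>(data, imag_idx, minus_one);

        // conj(a + bi) = a - bi: only the imaginary part is negated
        auto neg_imag = make_shared<v0::Negative>(imag);
        auto conj = make_shared<v0::Concat>(OutputVector{real, neg_imag}, -1);
        set_node_name(node.get_name(), conj);

        // wrap the result back into ComplexTypeMark so downstream loaders
        // still see a complex tensor
        auto complex_conj = make_shared<ComplexTypeMark>(conj, complex_part_type);
        return {complex_conj->output(0)};
    }

    // for a real-valued tensor, conjugation is the identity
    return {input};
}

}  // namespace op
}  // namespace tensorflow
}  // namespace frontend
}  // namespace ov
```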
Example Pull Requests
Resources
Contact points
@openvinotoolkit/openvino-tf-frontend-maintainers