
Speed up the inference of the saved_model(s). Fixes #5847 #5848

Merged 3 commits on Jun 23, 2021

Commits on Jun 17, 2021

  1. Speed up the inference of saved_model(s).

    Signed-off-by: darth-vader-lg <luigi.generale@gmail.com>
    darth-vader-lg committed Jun 17, 2021
    Commit: 48b517f
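The commit message doesn't spell out where the speedup comes from, but read together with the later commit that disposes cached tensors, it points at input tensors being allocated once and reused across inference calls instead of once per row. Below is a minimal hypothetical sketch of that caching pattern; `Tensor`, `CachedRunner`, and `GetOrCreateInput` are illustrative names, not ML.NET or TensorFlow.NET APIs.

```csharp
// Hypothetical sketch only: reuse pre-allocated input tensors across
// inference calls instead of allocating and freeing one per row.
using System;
using System.Collections.Generic;

// Stand-in for a native tensor handle (illustrative, not the ML.NET type).
sealed class Tensor : IDisposable
{
    public readonly float[] Buffer;
    public Tensor(int size) => Buffer = new float[size];
    public void Dispose() { /* release native memory here */ }
}

sealed class CachedRunner : IDisposable
{
    private readonly Dictionary<string, Tensor> _inputTensors = new();

    // Allocate each input tensor once; later calls reuse the same buffer.
    public Tensor GetOrCreateInput(string name, int size)
    {
        if (!_inputTensors.TryGetValue(name, out var tensor))
        {
            tensor = new Tensor(size);
            _inputTensors.Add(name, tensor);
        }
        return tensor;
    }

    public void Run(float[] row)
    {
        var input = GetOrCreateInput("input", row.Length);
        row.CopyTo(input.Buffer, 0); // refill the cached buffer per row
        // session.Run(...) would consume `input` here.
    }

    // The cached tensors are released once, when scoring is finished
    // (the lifetime question the third commit in this PR addresses).
    public void Dispose()
    {
        foreach (var tensor in _inputTensors.Values)
            tensor.Dispose();
        _inputTensors.Clear();
    }
}
```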

Commits on Jun 18, 2021

  1. Fixed TensorFlowTransform fitting problem.

    - Fixed the exception thrown while fitting data with more than one input tensor, by following the OnnxTransformer pattern for creating the data view getters (a hedged sketch of that pattern follows this entry).
    
    Signed-off-by: darth-vader-lg <luigi.generale@gmail.com>
    darth-vader-lg committed Jun 18, 2021
    Commit: 7af106e
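OnnxTransformer builds one typed getter delegate per column. A hypothetical sketch of that per-column getter pattern follows; `GetterFactory`, `readColumn`, and the column handling are assumptions for illustration, and only the `ValueGetter<T>` shape mirrors ML.NET's delegate.

```csharp
// Hypothetical sketch of the per-column getter pattern used by
// OnnxTransformer: build one typed getter delegate per input tensor,
// rather than assuming a single input.
using System;

// Mirrors the shape of ML.NET's ValueGetter<TValue> delegate.
delegate void ValueGetter<T>(ref T value);

static class GetterFactory
{
    // One getter per input column. Reusing a single getter (or the wrong
    // column index) for every input is the kind of bug the commit fixes.
    public static Delegate[] CreateGetters(
        string[] inputColumnNames, Func<string, float[]> readColumn)
    {
        var getters = new Delegate[inputColumnNames.Length];
        for (int i = 0; i < inputColumnNames.Length; i++)
        {
            string name = inputColumnNames[i]; // capture per iteration
            ValueGetter<float[]> getter =
                (ref float[] value) => value = readColumn(name);
            getters[i] = getter;
        }
        return getters;
    }
}
```

Capturing `name` inside the loop body matters here: capturing the loop index directly in the lambda would bind every getter to the last column.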

Commits on Jun 22, 2021

  1. Dispose of the cached tensors in the TensorFlowTransformer.

    - The cached tensors are now disposed at the end of the inference operations (a sketch of this lifetime pattern follows this entry).
    
    Signed-off-by: darth-vader-lg <luigi.generale@gmail.com>
    darth-vader-lg committed Jun 22, 2021
    Commit: e2e5ae6
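The commit says only that the cached tensors are disposed when inference finishes. A minimal sketch of tying their lifetime to the scoring pass is shown below; the `TensorCache` type and its placement are assumptions, not the actual ML.NET change.

```csharp
// Hypothetical sketch: tie the lifetime of the cached tensors to the
// whole inference pass, releasing them exactly once at the end.
using System;
using System.Collections.Generic;

sealed class TensorCache : IDisposable
{
    private readonly List<IDisposable> _cached = new();

    public void Add(IDisposable tensor) => _cached.Add(tensor);

    public void Dispose()
    {
        // Called when the transformer or scoring cursor is done:
        // every cached tensor is freed exactly once.
        foreach (var tensor in _cached)
            tensor.Dispose();
        _cached.Clear();
    }
}
```

A `using` scope around the scoring loop would then guarantee the cached tensors are disposed when inference ends, even if an exception is thrown mid-pass.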