Operate under Inference Mode as well as No Grad #112

Open
jatkinson1000 opened this issue Apr 8, 2024 · 2 comments
Labels: autograd (Tasks towards the online training / automatic differentiation feature), enhancement (New feature or request), hackathon, Hacktoberfest (Issues open for Hacktoberfest contributions)

Comments

@jatkinson1000 (Member)

Originally commented by @ElliottKasoar as part of #103

  • InferenceMode
    No changes are currently included, but it would be good to support InferenceMode too eventually, as it should provide further performance benefits over NoGradMode.
    However, it has stricter requirements, and the mode was only added (as a beta) in PyTorch 1.9, so we would need to be careful if we want to support older versions.

"Looking at the docs we'd maybe want to create an optional argument bool :: inference_mode and, if set to true, enable c10::InferenceMode for the function as a block."

@jatkinson1000 jatkinson1000 added the enhancement New feature or request label Apr 8, 2024
@jatkinson1000 (Member, Author)

Copying @ElliottKasoar's comment from #81

  1. InferenceMode
  • See: inference mode, autograd mechanics and the dev podcast
  • From our benchmarking (see FTorch with InferenceMode and NoGradMode sections), benefits were less clear, but in general it is expected to be at least as fast
    • Tests were carried out by replacing torch::AutoGradMode enable_grad(requires_grad); with c10::InferenceMode guard(requires_grad); in all ctorch.cpp functions, but ideally both options would be presented to users
  • This mode was only added (as a beta) in PyTorch 1.9, so we would need to consider support for older versions
  • The mode is also much stricter than NoGradMode, so cannot be used in all cases

@jatkinson1000 jatkinson1000 added the Hacktoberfest Issues open for Hacktoberfest contributions label Sep 30, 2024
@jwallwork23 jwallwork23 added autograd Tasks towards the online training / automatic differentiation feature hackathon labels Jan 13, 2025
@jatkinson1000 (Member, Author)

@jwallwork23 and I just discussed this.

  • Because we do not have a single C++ program and scope, we can't simply set the value once; we would need to do it function-by-function.
  • Providing it as an optional argument to every function is not ideal, as it opens the opportunity for user error (e.g. passing one value to tensor creation but a different one to the net).
  • The ideal solution would perhaps be to have a boolean in FTorch that is used to set the value of InferenceMode when calling to C++ functions. We would provide a function set_InferenceMode() or similar to set the value of this for a 'block' of Fortran to emulate a scope.

This would be good for a hackathon.
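The library-level flag plus set_InferenceMode() design described above can be emulated in plain C++ without LibTorch; all names here are invented for illustration, with ModeGuard standing in for c10::InferenceMode.

```cpp
// Hypothetical sketch of the proposed design; names are invented and
// ModeGuard stands in for c10::InferenceMode so this compiles without
// LibTorch.

// Library-level state, set once from Fortran via set_inference_mode()
// to cover a 'block' of Fortran code.
static bool inference_mode = false;

extern "C" void set_inference_mode(bool enabled) { inference_mode = enabled; }

// RAII guard emulating c10::InferenceMode: the mode is active only for
// the guard's lifetime, and the previous state is restored on exit.
struct ModeGuard {
  static bool active;
  bool previous;
  explicit ModeGuard(bool enabled) : previous(active) { active = enabled; }
  ~ModeGuard() { active = previous; }
};
bool ModeGuard::active = false;

// Every wrapped function constructs its guard from the shared flag, so
// the caller cannot pass inconsistent values to different calls.
extern "C" bool torch_tensor_op_sketch() {
  ModeGuard guard(inference_mode);
  return ModeGuard::active;  // true iff inference mode is on for this call
}
```

A Fortran caller would bracket a block with set_inference_mode(.true.) and set_inference_mode(.false.), and every C call in between would pick up the same value, avoiding the per-argument user-error problem noted above.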
