make __getattr__ on Module return object, not Any #257

Merged: 2 commits on Feb 6, 2024

Commits on Feb 6, 2024

  1. make __getattr__ on Module return object, not Any

    PyTorch chose to make it `Any` because they expect their users'
    code to be "highly dynamic": pytorch/pytorch#104321

    That is not the case for us: in Refiners, having untyped code
    goes against one of our core principles.

    Note that there is currently an open PR in PyTorch to
    return `Module | Tensor`, but in practice this is not always
    correct either: pytorch/pytorch#115074 (see the typing sketch
    after the commit list).
    
    I also moved Residuals-related code from SD1 to latent_diffusion
    because SDXL should not depend on SD1.
    catwell committed Feb 6, 2024 (968306b)
  2. make LoRA generic

    catwell committed Feb 6, 2024 (b1ad6f3)
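
Not part of the PR text, but a minimal sketch of what the first commit's typing change means in practice, assuming PyTorch's `__getattr__` is still annotated as returning `Any`; the class name `StrictModule` and the buffer name `scale` are made up for illustration.

```python
from typing import cast

import torch
from torch import nn


class StrictModule(nn.Module):
    # Narrow the return annotation from PyTorch's `Any` to `object`, so
    # dynamically resolved attributes are no longer silently untyped.
    def __getattr__(self, name: str) -> object:
        return super().__getattr__(name)


module = StrictModule()
module.register_buffer("scale", torch.ones(1))

# `module.scale` is now typed as `object`: a type checker rejects
# `module.scale * 2` until the value is narrowed or cast explicitly.
scale = cast(torch.Tensor, module.scale)
print(scale * 2)  # tensor([2.])
```

The explicit `cast` is the friction the change deliberately introduces: callers must state what they expect to get back instead of relying on `Any`.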
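
The second commit carries no description, so the following is only a hypothetical sketch of what "making LoRA generic" can look like with `typing.Generic`; `GenericLora`, `target`, and `wrapped` are invented names, not the Refiners API.

```python
from typing import Generic, TypeVar

from torch import nn

T = TypeVar("T", bound=nn.Module)


class GenericLora(nn.Module, Generic[T]):
    """Illustrative generic wrapper; adapter weights and update logic omitted."""

    def __init__(self, target: T) -> None:
        super().__init__()
        self._target = target

    @property
    def wrapped(self) -> T:
        # The static type of `wrapped` matches the concrete layer passed in.
        return self._target


lora = GenericLora(nn.Linear(8, 8))
in_features = lora.wrapped.in_features  # typed as int, no cast needed
print(in_features)  # 8
```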