[Docs] Add docstring and type hint for initialization function of DDPM Scheduler #1556

Merged 1 commit on Dec 26, 2022
38 changes: 25 additions & 13 deletions mmedit/models/editors/ddpm/ddpm_scheduler.py
@@ -1,5 +1,5 @@
 # Copyright (c) OpenMMLab. All rights reserved.
-from typing import Union
+from typing import Optional, Union
 
 import numpy as np
 import torch
@@ -12,11 +12,11 @@
 class DDPMScheduler:
 
     def __init__(self,
-                 num_train_timesteps=1000,
-                 beta_start=0.0001,
-                 beta_end=0.02,
-                 beta_schedule='linear',
-                 trained_betas=None,
+                 num_train_timesteps: int = 1000,
+                 beta_start: float = 0.0001,
+                 beta_end: float = 0.02,
+                 beta_schedule: str = 'linear',
+                 trained_betas: Optional[Union[np.ndarray, list]] = None,
                  variance_type='fixed_small',
                  clip_sample=True):
         """```DDPMScheduler``` supports the diffusion and reverse process
@@ -25,13 +25,25 @@ def __init__(self,
         The code is heavily influenced by https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddpm.py. # noqa
 
         Args:
-            num_train_timesteps (int, optional): _description_. Defaults to 1000.
-            beta_start (float, optional): _description_. Defaults to 0.0001.
-            beta_end (float, optional): _description_. Defaults to 0.02.
-            beta_schedule (str, optional): _description_. Defaults to 'linear'.
-            trained_betas (_type_, optional): _description_. Defaults to None.
-            variance_type (str, optional): _description_. Defaults to 'fixed_small'.
-            clip_sample (bool, optional): _description_. Defaults to True.
+            num_train_timesteps (int, optional): The number of timesteps for
+                the training process. Defaults to 1000.
+            beta_start (float, optional): The beta value at the start. The
+                beta values will be interpolated from beta_start to beta_end.
+                Defaults to 0.0001.
+            beta_end (float, optional): The beta value at the end. The beta
+                values will be interpolated from beta_start to beta_end.
+                Defaults to 0.02.
+            beta_schedule (str, optional): The interpolation schedule for beta
+                values. Supported choices are 'linear', 'scaled_linear', and
+                'squaredcos_cap_v2'. Defaults to 'linear'.
+            trained_betas (list, np.ndarray, optional): Betas passed directly to
+                the constructor to bypass `beta_start`, `beta_end`, etc. Defaults to None.
+            variance_type (str, optional): How the denoising UNet outputs the
+                variance value. Supported choices are 'fixed_small',
+                'fixed_small_log', 'fixed_large', 'fixed_large_log',
+                'learned', and 'learned_range'. Defaults to 'fixed_small'.
+            clip_sample (bool, optional): Whether to clip the predicted
+                original image (x0) to [-1, 1]. Defaults to True.
         """
         self.num_train_timesteps = num_train_timesteps
         if trained_betas is not None:
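
Since the new docstring now fully specifies the constructor's contract, a minimal usage sketch may help readers of this PR. This is not code from the PR itself: the import path is inferred from the file path in the diff, and the linear-beta computation is an assumption based on the docstring's description of `beta_schedule='linear'`.

```python
# A hedged sketch, not code from this PR: the import path is inferred from
# the diff's file path, and the linspace computation below is assumed from
# the docstring's description of the 'linear' beta schedule.
import numpy as np

from mmedit.models.editors.ddpm.ddpm_scheduler import DDPMScheduler

# Instantiate with the documented defaults made explicit.
scheduler = DDPMScheduler(
    num_train_timesteps=1000,
    beta_start=0.0001,
    beta_end=0.02,
    beta_schedule='linear',
    variance_type='fixed_small',
    clip_sample=True)

# With beta_schedule='linear', the betas are interpolated evenly from
# beta_start to beta_end over num_train_timesteps steps (assumed):
betas = np.linspace(0.0001, 0.02, 1000, dtype=np.float64)

# Passing trained_betas bypasses beta_start/beta_end/beta_schedule entirely,
# as the new docstring for trained_betas describes.
scheduler_custom = DDPMScheduler(trained_betas=betas.tolist())
```

If anything in this sketch diverges from the merged implementation, the docstring in the diff above is the authority.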