lr_scheduler does not work when "interval": "step" #20436

Closed
@lucl13

### Bug description

The lr_scheduler does not work when `"interval": "step"` is set. No changes in the learning rate were observed within one epoch; I think it still uses `"interval": "epoch"`.

```python
def configure_optimizers(self):
    # optimizer = torch.optim.Adam(self.parameters(), lr=self.learning_rate)
    optimizer = torch.optim.AdamW(self.parameters(), lr=self.learning_rate, weight_decay=1e-2)

    total_steps = self.trainer.estimated_stepping_batches
    scheduler = torch.optim.lr_scheduler.OneCycleLR(
        optimizer,
        max_lr=1e-2,
        total_steps=total_steps
    )

    return {
        "optimizer": optimizer,
        "lr_scheduler": {
            "scheduler": scheduler,
            "interval": "step",
            "frequency": 1
        },
    }
```
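A minimal sketch of how one could verify whether the scheduler actually steps per batch (the `LitModel` and `compute_loss` names below are placeholders, not from the original report): log the optimizer's current learning rate on every training step, or attach a `LearningRateMonitor` callback with `logging_interval="step"`.

```python
from lightning.pytorch import LightningModule, Trainer
from lightning.pytorch.callbacks import LearningRateMonitor

class LitModel(LightningModule):
    # ... configure_optimizers as above ...

    def training_step(self, batch, batch_idx):
        loss = self.compute_loss(batch)  # placeholder for the module's actual loss
        # With OneCycleLR and "interval": "step", this logged value should change on
        # every batch; if it only changes at epoch boundaries, the scheduler is
        # effectively being stepped per epoch.
        current_lr = self.trainer.optimizers[0].param_groups[0]["lr"]
        self.log("lr", current_lr, on_step=True, prog_bar=True)
        return loss

# Alternatively, let Lightning record the learning rate via a callback:
trainer = Trainer(callbacks=[LearningRateMonitor(logging_interval="step")])
```

Logging the learning rate at step granularity makes it easy to distinguish a per-step schedule (the value changes every batch) from a per-epoch one (the value only changes at epoch boundaries).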

### What version are you seeing the problem on?

v2.4

### How to reproduce the bug

_No response_

### Error messages and logs

_No response_

### Environment

<details>
  <summary>Current environment</summary>

#- PyTorch Lightning Version (e.g., 2.4.0):
#- PyTorch Version (e.g., 2.4):
#- Python version (e.g., 3.12):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning (conda, pip, source):


</details>


### More info

_No response_

Labels: bug (Something isn't working), needs triage (Waiting to be triaged by maintainers), ver: 2.4.x
