
Comments (15)

LarryThermo commented on May 4, 2024

Would you consider providing (or pointing me to) a simple but complete example of using ReduceLROnPlateau? Thanks, Lars
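
For reference, this is the standard plain-PyTorch pattern the rest of the thread refers to; a minimal sketch where MyModel, num_epochs, train_loader, val_loader and validate() are placeholder names, not Lightning API:

import torch
import torch.nn.functional as F

model = MyModel()  # placeholder module
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.1, patience=10)

for epoch in range(num_epochs):
    model.train()
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = F.mse_loss(model(x), y)
        loss.backward()
        optimizer.step()

    # ReduceLROnPlateau is the odd one out among schedulers: it needs
    # the monitored metric, so step() takes the validation loss.
    val_loss = validate(model, val_loader)
    scheduler.step(val_loss)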

Ir1d commented on May 4, 2024

Also, at this moment https://github.com/williamFalcon/pytorch-lightning/blob/master/pytorch_lightning/trainer/trainer.py#L958 calls lr_scheduler.step() at the end of the epoch, while some schedulers (e.g. https://pytorch.org/docs/master/optim.html#torch.optim.lr_scheduler.OneCycleLR) should be stepped at the end of every batch.
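
For contrast, a minimal plain-PyTorch sketch of the per-batch stepping that OneCycleLR expects (model, loss_fn, num_epochs and train_loader are placeholders):

import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
# OneCycleLR schedules the lr over individual optimizer steps,
# so it must be stepped after every batch, not once per epoch.
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=0.1, epochs=num_epochs,
    steps_per_epoch=len(train_loader))

for epoch in range(num_epochs):
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        scheduler.step()  # per batch, unlike ReduceLROnPlateau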

williamFalcon commented on May 4, 2024

@terkelbo good suggestion. can you propose a clean way of doing this? maybe it can be merged with #29

Ir1d commented on May 4, 2024

I think we can directly adjust the lr in the optimizer, the way Keras does, which means we wouldn't need to go through ReduceLROnPlateau and its metric argument at all.

Specifically, perhaps we can consider adding a hook in pytorch_lightning/root_module/root_module.py like optimizer_step, so that this function can be exposed to callbacks?
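
To illustrate what "adjusting the lr directly" could mean, a small sketch (adjust_lr is a made-up helper, not an existing Lightning or Keras function):

def adjust_lr(optimizer, factor=0.1):
    # Overwrite the lr of every param group directly, the way a
    # Keras-style callback would, instead of going through a
    # torch.optim.lr_scheduler object.
    for group in optimizer.param_groups:
        group['lr'] = group['lr'] * factor

A hand-rolled reduce-on-plateau would then track the best monitored value and call adjust_lr(optimizer) once it has not improved for some number of epochs.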

williamFalcon commented on May 4, 2024

@Ir1d good point. you can just adjust the LR in the current callback (optimizer_step) as well, or in any of the other callbacks

williamFalcon commented on May 4, 2024

closing because this should be implicitly supported since we can pass a ReduceLROnPlateau object... nothing we need to do, this is standard PyTorch functionality

Ir1d commented on May 4, 2024

When using the plain PyTorch object, you have to pass the metric as a parameter, for example lr_scheduler.step(val_loss). How do you override the original schedulers this way?

stolpa4 commented on May 4, 2024

Please, I can't follow. Why is the issue closed? Do you suggest using hooks like optimizer_step and implementing reduce-on-plateau scheduling by hand?

terkelbo commented on May 4, 2024

I'm not sure either how to pass the ReduceLROnPlateau object, since it needs the metric argument, as pointed out by @Ir1d. @williamFalcon would it be possible for you to give an example of how to use this scheduler with pytorch lightning?
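
For readers landing here later: recent PyTorch Lightning versions let configure_optimizers return the scheduler together with the metric it should monitor, and the Trainer then calls scheduler.step(metric) itself. A minimal sketch, assuming a 'val_loss' metric is logged via self.log('val_loss', loss) in validation_step:

def configure_optimizers(self):
    optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode='min', factor=0.1, patience=10)
    return {
        'optimizer': optimizer,
        'lr_scheduler': {
            'scheduler': scheduler,
            'monitor': 'val_loss',  # must match a logged metric name
        },
    }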

kvhooreb commented on May 4, 2024

It feels rather dirty to me, but you can save the loss in your pl.LightningModule's training_step() method, and then use it in the optimizer_step method. I can't verify whether it works as expected right now though.

def training_step(self, batch, batch_nb):
    # REQUIRED
    x, y = batch
    y_hat = self.forward(x)
    # keep a reference to the loss so optimizer_step can reach it
    self.loss = F.mse_loss(y_hat, y)
    return {'loss': self.loss}

def configure_optimizers(self):
    # REQUIRED
    # can return multiple optimizers and learning-rate schedulers
    self.optimizer = torch.optim.Adam(self.parameters(), lr=self.lrinit)
    self.scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        self.optimizer, mode='min', factor=0.1, patience=10,
        verbose=False, threshold=0.0001, threshold_mode='rel',
        cooldown=0, min_lr=0, eps=1e-08)
    return self.optimizer

def optimizer_step(self, epoch_nb, batch_nb, optimizer, optimizer_i):
    """
    Do something instead of the standard optimizer behavior
    :param epoch_nb:
    :param batch_nb:
    :param optimizer:
    :param optimizer_i:
    :return:
    """
    # Sometimes (if a closure is passed), step() should return the loss; here it is None
    loss = optimizer.step()
    # So use self.loss, set in training_step, for LR scheduling
    self.scheduler.step(self.loss)
    # clear gradients
    optimizer.zero_grad()

Edit: This will not work as intended, since the optimizer step is called after every batch.

williamFalcon commented on May 4, 2024

we could modify the scheduler step to take in the loss when it needs it? i have to look at this more carefully though

kvhooreb commented on May 4, 2024

That would be the best solution indeed, but then you'd need a way to figure out which ones need the loss when calling step. I'm not sure how to do that at this moment.

For now, I solved it as follows, which may help people until there is a better solution:

  • Use the on_epoch_start callback on your LightningModule to initialize an empty list
  • In every training_step call, append the loss to the list
  • In the on_epoch_end callback, process the list to get the average loss, call scheduler.step(mean_loss), and clear the list (see the sketch below)
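
A sketch of that workaround, assuming the scheduler created in configure_optimizers is kept on self.scheduler and using the hook names from the comment (they may differ between Lightning versions):

def on_epoch_start(self):
    # fresh list of per-batch losses for this epoch
    self.epoch_losses = []

def training_step(self, batch, batch_nb):
    x, y = batch
    loss = F.mse_loss(self.forward(x), y)
    self.epoch_losses.append(loss.item())
    return {'loss': loss}

def on_epoch_end(self):
    # average the batch losses, drive the scheduler, then reset
    mean_loss = sum(self.epoch_losses) / len(self.epoch_losses)
    self.scheduler.step(mean_loss)
    self.epoch_losses = []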

williamFalcon commented on May 4, 2024

From PT docs.
https://pytorch.org/docs/stable/optim.html

Prior to PyTorch 1.1.0, the learning rate scheduler was expected to be called before the optimizer’s update; 1.1.0 changed this behavior in a BC-breaking way. If you use the learning rate scheduler (calling scheduler.step()) before the optimizer’s update (calling optimizer.step()), this will skip the first value of the learning rate schedule. If you are unable to reproduce results after upgrading to PyTorch 1.1.0, please check if you are calling scheduler.step() at the wrong time.

williamFalcon commented on May 4, 2024

Either way, @kvhooreb i added the .step(epoch) fix. Let me know if this works for you.

shawwn commented on May 4, 2024

Yes, a simple example would be great please.
