Affiliation:
1. Amazon
2. Istituto Italiano di Tecnologia
3. University College London
4. Università di Firenze
Abstract
We study the problem of fitting task-specific learning rate schedules from the perspective of hyperparameter optimization, aiming for good generalization. We describe the structure of the gradient of a validation error w.r.t. the learning rate schedule -- the hypergradient. Based on this, we introduce MARTHE, a novel online algorithm guided by cheap approximations of the hypergradient that uses past information from the optimization trajectory to simulate future behaviour. It interpolates between two recent techniques, RTHO (Franceschi et al., 2017) and HD (Baydin et al., 2018), and is able to produce learning rate schedules that are more stable, leading to models that generalize better.
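The hypergradient idea underlying one of the two interpolated techniques can be illustrated compactly. Below is a minimal sketch of an HD-style (Baydin et al., 2018) online learning-rate update on a toy quadratic objective; the objective, step sizes, and variable names are illustrative assumptions, not the paper's implementation of MARTHE.

```python
import numpy as np

def grad(w):
    # Gradient of the toy objective f(w) = 0.5 * ||w||^2 (an assumption
    # for illustration; any differentiable loss would do).
    return w

w = np.array([5.0, -3.0])
lr = 0.01        # initial learning rate
beta = 0.001     # step size for adapting the learning rate itself
prev_g = np.zeros_like(w)

for t in range(100):
    g = grad(w)
    # HD-style update: the hypergradient of the loss w.r.t. the learning
    # rate at the previous step is -g_t . g_{t-1}, so the learning rate
    # is nudged along the dot product of consecutive gradients.
    lr = lr + beta * float(g @ prev_g)
    w = w - lr * g
    prev_g = g
```

On this quadratic, consecutive gradients stay aligned, so the learning rate grows from its small initial value and the iterate converges quickly; MARTHE extends this one-step lookback with longer-horizon information from the optimization trajectory, as RTHO does.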
Publisher
International Joint Conferences on Artificial Intelligence Organization
Cited by
2 articles.
1. Less is More: Learning Simplicity in Datacenter Scheduling. 2022 IEEE 13th International Green and Sustainable Computing Conference (IGSC), 2022-10-24.
2. Performance-Based Adaptive Learning Rate Scheduler Algorithm. Algorithms for Intelligent Systems, 2021.