Disclaimer: This code is a PROTOTYPE and most likely contains bugs. It should work with most TensorFlow models, but it likely does not comply with TensorFlow production code standards. Use at your own risk.
Requirements
TensorFlow >= 1.4.
For a PyTorch implementation of L4, see l4-pytorch (also linked here as a submodule).
Installation
Either install the package with pip,
or simply drop the L4/L4.py file into your project directory
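The exact pip commands are not preserved in this copy of the README. As a minimal sketch, assuming the code is hosted in this GitHub repository under martius-lab/L4-optimizer (the URL is an assumption, adjust it to the actual repository path), a direct install would look like:

# assumed repository URL; replace with the actual GitHub path if it differs
pip install git+https://github.com/martius-lab/L4-optimizer.git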
Usage
Exactly as you would expect from a TensorFlow optimizer. Empirically, good values for the 'fraction' parameter lie in the range 0.1 < fraction < 0.3; the default is 0.15 and should work well enough in most cases. Decreasing 'fraction' is typically a good idea for small batch sizes or, more generally, when there is very little signal in the gradients. Values of 'fraction' that are too high behave like learning rates that are too high (i.e., divergence or very early plateauing).
import L4
...
opt = L4.L4Mom()  # default value fraction=0.15 is used
grads_and_vars = opt.compute_gradients(loss)
...
# Gradient manipulation
...
opt.apply_gradients(grads_and_vars) # (!) Passing the loss is no longer needed (!)
...
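As a fuller illustration, here is a minimal, self-contained sketch of the same pattern on a toy linear-regression loss. It assumes TensorFlow 1.x graph mode and that L4.py is on the import path; L4Mom and its fraction keyword come from the snippet above, while the model, data, and training loop are standard TensorFlow 1.x and are not part of this repository.

import numpy as np
import tensorflow as tf
import L4

# Toy data: y = 3x + noise
x_data = np.random.randn(256, 1).astype(np.float32)
y_data = 3.0 * x_data + 0.1 * np.random.randn(256, 1).astype(np.float32)

x = tf.placeholder(tf.float32, [None, 1])
y = tf.placeholder(tf.float32, [None, 1])
w = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))
loss = tf.reduce_mean(tf.square(tf.matmul(x, w) + b - y))

# 'fraction' can also be passed explicitly; 0.15 is the default
opt = L4.L4Mom(fraction=0.15)
grads_and_vars = opt.compute_gradients(loss)
train_op = opt.apply_gradients(grads_and_vars)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(200):
        _, loss_value = sess.run([train_op, loss], feed_dict={x: x_data, y: y_data})
    print("final loss:", loss_value)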
Notes
Contribute: If you spot a bug or an incompatibility, please contribute a fix via a pull request! Thank you!
About
Code for the paper "L4: Practical loss-based stepsize adaptation for deep learning"