madgrad
'MADGRAD' Method for Stochastic Optimization
Implements MADGRAD, a Momentumized, Adaptive, Dual Averaged Gradient method for stochastic optimization. MADGRAD is a 'best-of-both-worlds' optimizer, combining the generalization performance of stochastic gradient descent with convergence at least as fast as that of Adam, and often faster. A drop-in implementation, optim_madgrad(), is provided, based on Defazio and Jelassi (2021) <arXiv:2101.11075>.
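Because optim_madgrad() is a drop-in optimizer, usage follows the standard torch training loop. The sketch below is illustrative only: it assumes the 'torch' package is installed and uses a toy scalar objective, and the learning rate shown is an arbitrary choice, not a recommended default.

```r
# A minimal sketch: minimize f(x) = (x - 3)^2 with optim_madgrad().
# Assumes the 'torch' package; lr = 0.1 is illustrative, not a tuned value.
library(torch)
library(madgrad)

x <- torch_tensor(0, requires_grad = TRUE)
opt <- optim_madgrad(params = x, lr = 0.1)

for (step in 1:200) {
  opt$zero_grad()      # clear gradients from the previous step
  loss <- (x - 3)^2    # objective to minimize
  loss$backward()      # compute d(loss)/dx
  opt$step()           # MADGRAD parameter update
}

x  # should converge near 3
```

In principle, existing torch code using another optimizer such as optim_adam() should only need the constructor swapped.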
Versions across snapshots
| Version | Repository | File | Size |
|---|---|---|---|
| 0.1.0 | rolling linux/jammy R-4.5 | madgrad_0.1.0.tar.gz | 56.8 KiB |
| 0.1.0 | rolling linux/noble R-4.5 | madgrad_0.1.0.tar.gz | 56.7 KiB |
| 0.1.0 | rolling source/ R- | madgrad_0.1.0.tar.gz | 58.0 KiB |
| 0.1.0 | latest linux/jammy R-4.5 | madgrad_0.1.0.tar.gz | 56.8 KiB |
| 0.1.0 | latest linux/noble R-4.5 | madgrad_0.1.0.tar.gz | 56.7 KiB |
| 0.1.0 | latest source/ R- | madgrad_0.1.0.tar.gz | 58.0 KiB |
| 0.1.0 | 2026-04-26 source/ R- | madgrad_0.1.0.tar.gz | 58.0 KiB |
| 0.1.0 | 2026-04-23 source/ R- | madgrad_0.1.0.tar.gz | 58.0 KiB |
| 0.1.0 | 2026-04-09 windows/windows R-4.5 | madgrad_0.1.0.zip | 59.3 KiB |
| 0.1.0 | 2025-04-20 source/ R- | madgrad_0.1.0.tar.gz | 58.0 KiB |