In this work, we consider minimizing the average of a very large number of
smooth and possibly non-convex functions, and we focus on two widely used
minibatch frameworks to tackle this optimization problem: Incremental Gradient
(IG) and Random Reshuffling (RR). We define ease-controlled modifications of
the IG/RR schemes, which require a light additional computational effort but
can be proved to converge under weak and standard assumptions. In particular,
we define two algorithmic schemes in which the IG/RR iteration is controlled by
using a watchdog rule and a derivative-free linesearch that activates only
sporadically to guarantee convergence. The two schemes differ in the watchdog
and the linesearch, which are performed using either a monotonic or a
non-monotonic rule. The two schemes also allow the stepsize used in the main
IG/RR iteration to be controlled, avoiding pre-set update rules that may drive
it to zero too quickly and reducing the effort needed to design effective
stepsize schedules. We prove convergence under the mild
assumption of Lipschitz continuity of the gradients of the component functions
and perform extensive computational analysis using different deep neural
architectures and a benchmark of varying-size datasets. We compare our
implementation with both a full batch gradient method (i.e. L-BFGS) and an
implementation of IG/RR methods, showing that our algorithms require a
computational effort comparable to that of the other online algorithms and that
the control of the learning rate may allow a faster decrease of the objective
function.
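
As a rough illustration of the kind of scheme described above, the following Python sketch combines a Random Reshuffling epoch with a watchdog test and a derivative-free backtracking linesearch for the finite-sum problem min (1/N) sum_i f_i(x). All function names, tolerances, and acceptance tests here are illustrative assumptions, not the paper's exact algorithms.

# Hedged sketch of an "ease-controlled" Random Reshuffling iteration with a
# watchdog rule and a derivative-free linesearch. Names and constants are
# illustrative assumptions only.
import numpy as np

def rr_epoch(x, grads, alpha, rng):
    """One Random Reshuffling pass: visit the component gradients in a
    freshly shuffled order, taking one step per component."""
    order = rng.permutation(len(grads))
    y = x.copy()
    for i in order:
        y -= alpha * grads[i](y)
    return y

def df_linesearch(f, x, d, delta0=1.0, gamma=1e-4, theta=0.5, max_iter=30):
    """Derivative-free backtracking along direction d: accept the first
    step length giving a sufficient decrease of f."""
    delta, fx = delta0, f(x)
    for _ in range(max_iter):
        if f(x + delta * d) <= fx - gamma * delta**2 * np.dot(d, d):
            return delta
        delta *= theta
    return 0.0  # no acceptable step found; stay at x

def ease_controlled_rr(f, grads, x0, alpha0=0.1, xi=1e-6, tau=0.8,
                       epochs=100, seed=0):
    """Main loop: run cheap RR epochs with stepsize alpha; a watchdog test
    checks whether the tentative epoch point decreased f enough. Only when
    it did not, fall back to a derivative-free linesearch along the epoch
    direction and shrink alpha, instead of following a pre-set schedule."""
    rng = np.random.default_rng(seed)
    x, alpha = x0.copy(), alpha0
    for _ in range(epochs):
        x_trial = rr_epoch(x, grads, alpha, rng)
        if f(x_trial) <= f(x) - xi:        # watchdog: sufficient decrease
            x = x_trial                    # accept the cheap RR point
        else:
            d = x_trial - x                # epoch direction
            delta = df_linesearch(f, x, d) # sporadic safeguard step
            x = x + delta * d
            alpha *= tau                   # reduce stepsize only when needed
    return x

# Toy usage on an average of convex quadratics (illustrative only):
A = [np.diag([1.0, i + 1.0]) for i in range(5)]
f = lambda x: sum(0.5 * x @ Ai @ x for Ai in A) / len(A)
grads = [lambda x, Ai=Ai: Ai @ x for Ai in A]
x_min = ease_controlled_rr(f, grads, np.array([5.0, -3.0]))

The key design point mirrored here is that the linesearch and the stepsize reduction activate only sporadically, when the watchdog test fails, so the per-epoch cost stays close to that of plain IG/RR.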
Publication details
2025, COMPUTATIONAL OPTIMIZATION AND APPLICATIONS, Pages -
Convergence of ease-controlled Random Reshuffling gradient algorithms under Lipschitz smoothness (01a Journal article)
Seccia Ruggiero, Coppola Corrado, Liuzzi Giampaolo, Palagi Laura