Preprint / Working paper, Year: 2024

A second-order-like optimizer with adaptive gradient scaling for deep learning

Abstract

In this empirical article, we introduce INNAprop, an optimization algorithm that combines the INNA method with RMSprop adaptive gradient scaling. It leverages second-order information and rescaling while keeping the memory requirements of standard deep learning methods such as AdamW or SGD with momentum. After giving geometric insights, we evaluate INNAprop on CIFAR-10, Food101, and ImageNet with ResNets, VGG, DenseNet, and ViT, and on GPT-2 (OpenWebText) trained from scratch and with LoRA fine-tuning (E2E). INNAprop consistently matches or outperforms AdamW in both training speed and accuracy, with minimal hyperparameter tuning in large-scale settings. Our code is publicly available at \url{https://github.com/innaprop/innaprop}.
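The exact INNAprop recursion is given in the paper and in the authors' repository; purely as an illustration of the idea described above, the sketch below combines an INNA-style two-variable inertial update with an RMSprop-style second-moment rescaling of the gradient inside a PyTorch optimizer. The class name, the default hyperparameters (alpha, beta, rho, eps), the initialization of the auxiliary variable, and the placement of the rescaling are assumptions made for this sketch and are not taken from the paper; refer to \url{https://github.com/innaprop/innaprop} for the reference implementation.

```python
# Illustrative sketch only: an INNA-like inertial update with RMSprop-style
# gradient rescaling. Hyperparameter names and defaults are assumptions, not
# the published INNAprop settings.
import torch


class InnaRmspropSketch(torch.optim.Optimizer):
    def __init__(self, params, lr=1e-3, alpha=0.5, beta=0.8, rho=0.99, eps=1e-8):
        defaults = dict(lr=lr, alpha=alpha, beta=beta, rho=rho, eps=eps)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()
        for group in self.param_groups:
            lr, alpha, beta = group["lr"], group["alpha"], group["beta"]
            rho, eps = group["rho"], group["eps"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                g = p.grad
                state = self.state[p]
                if len(state) == 0:
                    # Auxiliary variable of the inertial (second-order-like)
                    # system; this initialization (zero initial velocity) is
                    # one common convention, assumed here for illustration.
                    state["psi"] = (1.0 - alpha * beta) * p.detach().clone()
                    # RMSprop second-moment accumulator.
                    state["v"] = torch.zeros_like(p)
                psi, v = state["psi"], state["v"]

                # Exponential moving average of squared gradients (RMSprop scaling).
                v.mul_(rho).addcmul_(g, g, value=1.0 - rho)
                g_scaled = g / (v.sqrt() + eps)

                # INNA-like coupling between the parameters and the auxiliary variable.
                geom = (alpha - 1.0 / beta) * p + (1.0 / beta) * psi

                psi.add_(geom, alpha=-lr)
                p.add_(geom + beta * g_scaled, alpha=-lr)
        return loss
```

A sketch like this is used as any other torch.optim optimizer (construct it on model.parameters(), call backward(), then step()), and it stores only two extra buffers per parameter, which is consistent with the abstract's point about keeping the memory requirements of methods such as AdamW.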
Main file: innaprop_v2/innaprop_arxiv.pdf (1.19 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04724894, version 1 (07-10-2024)
hal-04724894, version 2 (09-12-2024)

Identifiers

HAL Id: hal-04724894

Cite

Jérôme Bolte, Ryan Boustany, Edouard Pauwels, Andrei Purica. A second-order-like optimizer with adaptive gradient scaling for deep learning. 2024. ⟨hal-04724894v2⟩
