[ECCV 2020] HALO: Hardware-Aware Learning to Optimize
Accepted as an ECCV 2020 regular paper!
Abstract:
There has been an explosive demand for bringing machine learning (ML) powered intelligence into numerous Internet-of-Things (IoT) devices. However, the effectiveness of such intelligent functionality requires in-situ continuous model adaptation to new data and environments, while on-device computing and energy resources are usually extremely constrained. Neither traditional hand-crafted optimizers (e.g., SGD, Adagrad, and Adam) nor existing meta optimizers are specifically designed to meet those challenges, as the former require tedious hyper-parameter tuning while the latter are often costly due to the meta algorithms' own overhead. To this end, we propose hardware-aware learning to optimize (HALO), a practical meta optimizer dedicated to resource-efficient on-device adaptation. Our HALO optimizer features the following highlights: (1) faster adaptation speed (i.e., taking fewer data or iterations to reach a specified accuracy) by introducing a new regularizer to promote empirical generalization; and (2) lower per-iteration complexity, thanks to a stochastic structural sparsity regularizer being enforced. Furthermore, the optimizer itself is designed as a very lightweight RNN and thus incurs negligible overhead. Ablation studies and experiments on five datasets, six optimizees, and two state-of-the-art (SOTA) edge AI devices validate that, while always achieving a better accuracy (↑0.46% - ↑20.28%), HALO can greatly trim down the energy cost (up to ↓60%) in adaptation, quantified using an IoT device or SOTA simulator.
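To make the "optimizer as a lightweight RNN" idea concrete, below is a minimal, illustrative sketch of a coordinate-wise learned optimizer in the general learning-to-optimize style: a tiny RNN maps each parameter's gradient (plus a per-coordinate hidden state) to an update, with weights shared across coordinates so the optimizer itself stays small. This is not the paper's actual architecture or code; all names, sizes, and the toy optimizee are assumptions for illustration, and HALO's regularizers are not modeled here.

```python
import numpy as np

# Hypothetical sketch of a coordinate-wise RNN optimizer (not HALO's
# actual implementation). Meta-parameters are randomly initialized
# here; in practice they would be meta-trained.
rng = np.random.default_rng(0)
HIDDEN = 8  # small per-coordinate hidden state (illustrative size)

W_in = rng.normal(scale=0.1, size=(HIDDEN, 1))
W_h = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
W_out = rng.normal(scale=0.1, size=(1, HIDDEN))

def rnn_optimizer_step(grad, hidden):
    """One step of the learned optimizer, applied coordinate-wise.

    grad:   (n,) gradient of the optimizee's loss
    hidden: (n, HIDDEN) per-coordinate RNN state
    returns (update, new_hidden)
    """
    g = grad.reshape(-1, 1)                            # (n, 1)
    new_hidden = np.tanh(g @ W_in.T + hidden @ W_h.T)  # (n, HIDDEN)
    update = (new_hidden @ W_out.T).reshape(-1)        # (n,)
    return update, new_hidden

# Toy optimizee: quadratic loss f(x) = 0.5 * ||x||^2, so grad = x.
x = np.ones(4)
h = np.zeros((4, HIDDEN))
for _ in range(5):
    update, h = rnn_optimizer_step(x, h)  # gradient of f at x is x
    x = x + update                        # learned update replaces -lr * grad
print(x.shape, h.shape)
```

Because the RNN weights are shared across all coordinates, the optimizer's own parameter count is independent of the optimizee's size, which is what makes this family of meta optimizers cheap enough to consider on-device.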
Bibtex
If you find this work inspiring, please cite:
@InProceedings{li_2020_halo,
    author    = {Li, Chaojian and Chen, Tianlong and You, Haoran and Wang, Zhangyang and Lin, Yingyan},
    title     = {HALO: Hardware-Aware Learning to Optimize},
    booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
    month     = {September},
    year      = {2020}
}