Weimin Wu

Ph.D. Candidate, Computer Science, Northwestern University

Transformers are Deep Optimizers: Provable In-Context Learning for Deep Model Training


Unpublished


Weimin Wu*, Maojiang Su*, Jerry Yao-Chieh Hu*, Zhao Song, Han Liu
2024

PDF: https://arxiv.org/abs/2411.16549
Cite

APA
Wu*, W., Su*, M., Hu*, J. Y.-C., Song, Z., & Liu, H. (2024). Transformers are Deep Optimizers: Provable In-Context Learning for Deep Model Training.


Chicago/Turabian
Wu*, Weimin, Maojiang Su*, Jerry Yao-Chieh Hu*, Zhao Song, and Han Liu. “Transformers Are Deep Optimizers: Provable In-Context Learning for Deep Model Training,” 2024.


MLA
Wu*, Weimin, et al. Transformers Are Deep Optimizers: Provable In-Context Learning for Deep Model Training. 2024.


BibTeX

@unpublished{weimin2024a,
  title = {Transformers are Deep Optimizers: Provable In-Context Learning for Deep Model Training},
  year = {2024},
  author = {Wu*, Weimin and Su*, Maojiang and Hu*, Jerry Yao-Chieh and Song, Zhao and Liu, Han}
}

Abstract:

We investigate the transformer’s capability for in-context learning (ICL) to simulate the training process of deep models. Our key contribution is a positive example of using a transformer to train a deep neural network by gradient descent, implicitly, via ICL. Specifically, we give an explicit construction of a (2N+4)L-layer transformer capable of simulating L gradient descent steps of an N-layer ReLU network through ICL. We also give theoretical guarantees for the approximation within any given error and for the convergence of the ICL gradient descent. Additionally, we extend our analysis to the more practical setting of Softmax-based transformers. We validate our findings on synthetic datasets with 3-layer, 4-layer, and 6-layer neural networks; the results show that ICL performance matches that of direct training.
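To make the object of the construction concrete, the following is a minimal NumPy sketch of the explicit gradient descent on an N-layer ReLU network that the (2N+4)L-layer transformer simulates in-context: each of the L updates corresponds to one (2N+4)-layer transformer block in the paper's construction. The network sizes, learning rate, and data below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(params, x):
    """Forward pass of an N-layer ReLU network (linear output layer).
    Returns the prediction and all layer activations for backprop."""
    acts = [x]
    h = x
    for i, (W, b) in enumerate(params):
        z = W @ h + b
        h = relu(z) if i < len(params) - 1 else z
        acts.append(h)
    return h, acts

def gd_step(params, x, y, lr):
    """One explicit gradient-descent step on the squared loss
    0.5 * ||f(x) - y||^2 -- the update each transformer block simulates."""
    y_hat, acts = forward(params, x)
    grad = y_hat - y  # dL/dz at the linear output layer
    new_params = [None] * len(params)
    for i in reversed(range(len(params))):
        W, b = params[i]
        if i < len(params) - 1:
            grad = grad * (acts[i + 1] > 0)  # ReLU derivative mask
        gW = np.outer(grad, acts[i])
        gb = grad
        new_params[i] = (W - lr * gW, b - lr * gb)
        grad = W.T @ grad  # propagate to the previous layer's output
    return new_params

# Illustrative run: N = 3 layers, L = 100 GD steps (hypothetical sizes).
rng = np.random.default_rng(0)
N, L, lr = 3, 100, 0.02
dims = [2, 8, 8, 1]
params = [(0.5 * rng.standard_normal((dims[i + 1], dims[i])),
           np.zeros(dims[i + 1])) for i in range(N)]
x, y = rng.standard_normal(2), np.array([1.0])

loss0 = 0.5 * np.sum((forward(params, x)[0] - y) ** 2)
for _ in range(L):  # L steps ~ a (2N+4)L-layer transformer in the construction
    params = gd_step(params, x, y, lr)
lossL = 0.5 * np.sum((forward(params, x)[0] - y) ** 2)
```

The paper's result is that a transformer can reproduce the effect of the `gd_step` loop above purely through its forward pass on in-context examples, to any given approximation error.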

