Emanuele Zangrando

PhD Student

Gran Sasso Science Institute

Professional Summary

Hi everyone! I'm an applied math PhD student at Gran Sasso Science Institute in L'Aquila, Italy. My main interests are large-scale numerical optimization methods for deep learning, implicit biases of optimization, and feature-learning mechanisms in deep networks. You can reach me at emanuele.zangrando@gssi.it

Education

PhD candidate in Mathematics

Gran Sasso Science Institute

MS Data Science

University of Padova

BS Mathematics

University of Padova

Interests

Optimization for deep learning

Feature learning and geometry of representation spaces (neuromanifolds)

High-dimensional loss landscapes

Calculus of variations
📚 My Research
The focus of my PhD project is the investigation of the properties of low-rank neural networks and of effective, memory-efficient ways to train or fine-tune them. Connected to this, I have also recently gained interest in mechanisms of feature learning in deep networks, with a particular focus on the emergence of low-dimensional structures during training. On this page I keep a collection of my most recent works together with some selected projects and demos.
Selected Publications
All Publications
(2025). Provable Emergence of Deep Neural Collapse and Low-Rank Bias in $L^2$-Regularized Nonlinear Networks. ArXiv preprint.
PDF
(2025). dEBORA: Efficient Bilevel Optimization-based low-Rank Adaptation. In ICLR.
(2025). GeoLoRA: Geometric integration for parameter-efficient fine-tuning. In ICLR.
(2024). Geometry-aware training of factorized layers in tensor Tucker format. In NeurIPS.
(2024). Low-Rank Adversarial PGD Attack. ArXiv preprint.
PDF