About
ahmed.khaled@princeton.edu
Welcome to my tiny corner of the internet! I’m Ahmed, and I work on optimization and machine learning. I’m a third-year Ph.D. student in the ECE department at Princeton University, advised by Prof. Chi Jin. I am interested in optimization for machine learning and in federated learning.
In the past, I interned at Google DeepMind in 2024 and at Meta AI Research in summer 2023. Before that, I interned in the group of Prof. Peter Richtárik at KAUST in the summers of 2019 and 2020, where I worked on distributed and stochastic optimization. Prior to that, I did research on accelerating the training of neural networks with Prof. Amir Atiya.
Publications and Preprints
Tuning-Free Stochastic Optimization
ICML 2024 Spotlight, with Chi Jin. (bibtex).
Directional Smoothness and Gradient Methods: Convergence and Adaptivity
Preprint (February 2024), with Aaron Mishkin, Yuanhao Wang, Aaron Defazio, and Robert M. Gower. (bibtex).
DoWG Unleashed: An Efficient Universal Parameter-Free Gradient Descent Method
Advances in Neural Information Processing Systems 36 (NeurIPS 2023), with Chi Jin and Konstantin Mishchenko. (bibtex).
Faster federated optimization under second-order similarity
The 11th International Conference on Learning Representations (ICLR 2023), with Chi Jin. (bibtex).
Better Theory for SGD in the Nonconvex World
Transactions on Machine Learning Research (TMLR) 2023, with Peter Richtárik. (bibtex). The original preprint, arXiv:2002.03329, has been on arXiv since 2020.
Federated Optimization Algorithms with Random Reshuffling and Gradient Compression
Preprint (2022), with Abdurakhmon Sadiev, Grigory Malinovsky, Eduard Gorbunov, Igor Sokolov, Konstantin Burlachenko, and Peter Richtárik. (bibtex).
Proximal and Federated Random Reshuffling
The 39th International Conference on Machine Learning (ICML 2022), with Konstantin Mishchenko and Peter Richtárik. (bibtex).
FLIX: A Simple and Communication-Efficient Alternative to Local Methods in Federated Learning
The 25th International Conference on Artificial Intelligence and Statistics (AISTATS 2022), with Elnur Gasanov, Samuel Horváth, and Peter Richtárik. (bibtex).
Random Reshuffling: Simple Analysis with Vast Improvements
Advances in Neural Information Processing Systems 33 (NeurIPS 2020), with Konstantin Mishchenko and Peter Richtárik. (bibtex).
Tighter Theory for Local SGD on Identical and Heterogeneous Data
The 23rd International Conference on Artificial Intelligence and Statistics (AISTATS 2020), with Konstantin Mishchenko and Peter Richtárik. (bibtex). Extends the workshop papers (a, b).
Unified Analysis of Stochastic Gradient Methods for Composite Convex and Smooth Optimization
Journal version to appear in JOTA 2023 (original preprint 2020), with Othmane Sebbouh, Nicolas Loizou, Robert M. Gower, and Peter Richtárik. (bibtex).
Distributed Fixed Point Methods with Compressed Iterates
Preprint (2019), with Sélim Chraibi, Dmitry Kovalev, Peter Richtárik, Adil Salim, and Martin Takáč. (bibtex).
Applying Fast Matrix Multiplication to Neural Networks
The 35th ACM/SIGAPP Symposium on Applied Computing (ACM SAC) 2020, with Amir F. Atiya and Ahmed H. Abdel-Gawad. (bibtex).
Workshop Papers
A novel analysis of gradient descent under directional smoothness
15th Annual Workshop on Optimization for Machine Learning (OPT2023), with Aaron Mishkin, Aaron Defazio, and Robert M. Gower. (bibtex).
Better Communication Complexity for Local SGD
Oral presentation at the NeurIPS 2019 Workshop on Federated Learning for Data Privacy and Confidentiality, with Konstantin Mishchenko and Peter Richtárik. (bibtex).
First Analysis of Local GD on Heterogeneous Data
NeurIPS 2019 Workshop on Federated Learning for Data Privacy and Confidentiality, with Konstantin Mishchenko and Peter Richtárik. (bibtex).
Gradient Descent with Compressed Iterates
NeurIPS 2019 Workshop on Federated Learning for Data Privacy and Confidentiality, with Peter Richtárik. (bibtex).
Talks
On the Convergence of Local SGD on Identical and Heterogeneous Data
Federated Learning One World Seminar (2020). Video and Slides.