About me
I completed my PhD in January 2024 at the Laboratoire de Mathématiques d’Orsay under the supervision of Lénaïc Chizat and Christophe Giraud, working on a mathematical theory of deep neural networks. I mainly studied the dynamics induced by gradient methods in the large-width limit, aiming to uncover properties that shed light on why such models work so well in practice. I graduated from Ecole Polytechnique in 2017 and completed my Master's at Ecole Normale Supérieure in 2018. Before my PhD, I worked for two years as a research engineer at Zalando in Berlin.
My research interests lie in statistics and optimisation. My goal is to develop a rigorous theory of deep neural networks, even at the cost of studying simpler models than those actually used in practice: I believe this is the only way to extract the essence of what truly works in deep learning from the noise created by the myriad variations in architecture and optimisation method. My hope is that, in doing so, we might gain insight into the learning principles at play in neural networks and thereby develop networks that are more performant, more robust, and less data-hungry than current ones.
Publications
- K. Hajjar, L. Chizat, C. Giraud. Training Integrable Parameterizations of Deep Neural Networks in the Infinite-Width Limit. [arXiv:2110.15596], preprint, 2021; revised version under review at JMLR.
- K. Hajjar, L. Chizat. Symmetries in the Dynamics of Wide Two-Layer Neural Networks. [arXiv Preprint, ERA special issue], 2022.
- PhD Thesis