About me

Hello! I’m Ahmad, an AI researcher with a strong enthusiasm for mathematics and artificial intelligence. I graduated from École Polytechnique and am currently pursuing my PhD at Meta FAIR in Paris, under the supervision of Rémi Munos and Amaury Hayat.

My research interests include formal mathematical reasoning, formal provers, self-play, and automatic theory creation.

Beyond academics, you can find me climbing, doing calisthenics, playing the guitar, or reading. Feel free to reach out if you’d like to connect and have a chat ;)


News

15 April 2025: Paper Selected for Oral Presentation at UAI 2025

Thrilled to announce that our research paper, “Correlated Quantization for Faster Nonconvex Distributed Optimization,” has been accepted for an oral presentation at UAI 2025. A heartfelt thank you to everyone involved in making this outcome possible!

Paper available here.


9 December 2024: Paper Selected for Oral Presentation at NeurIPS Math-AI Workshop 2024

Our research paper, “Learning Mathematical Rules with Large Language Models,” has been selected for an oral presentation at the MATH-AI Workshop at NeurIPS’24. This recognition places it among the top 4% of submissions.

Paper available here.


15 October 2024: Paper Accepted at NeurIPS Math-AI Workshop 2024

Thrilled to announce that our research paper, “Learning Mathematical Rules with Large Language Models,” has been accepted at the MATH-AI Workshop at NeurIPS’24. This work was done during my end-of-studies internship with the AI research team at JPMorgan. A heartfelt thank you to everyone involved in making this outcome possible!

Paper available here.

20 January 2024: Paper Accepted at AISTATS 2024

Thrilled to announce that our research paper, “Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates,” has been accepted at AISTATS 2024. A heartfelt thank you to everyone involved in making this outcome possible!

Paper available here.

15 January 2024: New Paper Available on arXiv

I’m excited to share that our latest research paper, “Correlated Quantization for Faster Nonconvex Distributed Optimization,” is now available on arXiv. This work was conducted during my internship at KAUST in the group of Peter Richtarik, in collaboration with Yury Demidovich and Andrei Panferov.

You can access the full paper on arXiv by clicking here.

Paper Overview

In this paper, we focus on correlated quantizers, introduced by Suresh et al. in 2022, and demonstrate the benefits they offer over traditional independent quantizers in terms of communication complexity. The paper’s primary contribution is the analysis and improvement of the MARINA algorithm, a leading distributed nonconvex optimization method. By incorporating correlated quantizers, we show that our enhanced version of MARINA outperforms both the original MARINA and the distributed stochastic gradient descent (SGD) method of Suresh et al. in communication efficiency. This extension significantly broadens the applicability of MARINA beyond its conventional setup, which assumes independent and unbiased compressors. The paper also presents extensive experimental results that support our theoretical claims.
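To give a flavor of why correlated randomness helps, here is a toy Monte Carlo sketch (the worker values and the antithetic coupling below are illustrative assumptions, not the construction from the paper or from Suresh et al.): two workers stochastically round their scalars to {0, 1}, once with independent coins and once with shared antithetic randomness, so that their rounding errors partially cancel in the aggregated mean.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000
x1, x2 = 0.3, 0.6          # hypothetical worker values in [0, 1]
true_mean = (x1 + x2) / 2

# Independent stochastic rounding to {0, 1}: each worker draws its own coin.
u1 = rng.random(n_trials)
u2 = rng.random(n_trials)
mean_indep = ((u1 < x1).astype(float) + (u2 < x2).astype(float)) / 2

# Correlated rounding: worker 2 reuses worker 1's coin antithetically (1 - u),
# so the two rounding errors are negatively correlated and partially cancel.
u = rng.random(n_trials)
mean_corr = ((u < x1).astype(float) + ((1 - u) < x2).astype(float)) / 2

# Both estimators are unbiased; the correlated one has lower variance.
print(f"true mean            : {true_mean:.3f}")
print(f"independent: mean={mean_indep.mean():.3f}, var={mean_indep.var():.4f}")
print(f"correlated : mean={mean_corr.mean():.3f}, var={mean_corr.var():.4f}")
```

The paper’s actual setting is more general (vector quantizers, many workers, and MARINA’s variance-reduced gradient estimator), but the printed variances show the basic cancellation effect that lower communication complexity builds on.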

15 October 2023: New Paper Available on arXiv

I’m excited to share that our latest research paper, “Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates,” is now available on arXiv. This work was conducted during my internship at KAUST in the group of Peter Richtarik.

You can access the full paper on arXiv by clicking here.

Paper Overview

In this paper, we focus on Byzantine-robust learning with compression, a crucial ingredient for distributed optimization in collaborative learning scenarios. We introduce Byz-DASHA-PAGE, a novel Byzantine-robust method with compression that achieves a better convergence rate, a smaller neighborhood of the solution in the heterogeneous case, and higher tolerance to Byzantine workers than the state-of-the-art Byz-VR-MARINA. Additionally, we present Byz-EF21, the first Byzantine-robust method with communication compression and error feedback.
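For intuition, here is a minimal sketch of the two ingredients such methods combine, under simplified assumptions (the top-k compressor, the attack, and the coordinate-wise median below are illustrative choices, not the estimators or aggregators analyzed in the paper): workers send compressed gradients, and the server aggregates them with a robust rule so that a few Byzantine workers cannot hijack the update.

```python
import numpy as np

def top_k(v: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude coordinates, zero the rest (a biased compressor)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(1)
dim, n_workers, n_byzantine, k = 10, 8, 2, 3
true_grad = rng.normal(size=dim)

# Honest workers see noisy versions of the true gradient; Byzantine workers
# send arbitrary vectors (here, a hugely scaled opposite direction).
msgs = []
for i in range(n_workers):
    if i < n_byzantine:
        msgs.append(top_k(-100.0 * true_grad, k))                    # attack
    else:
        msgs.append(top_k(true_grad + 0.1 * rng.normal(size=dim), k))

naive = np.mean(msgs, axis=0)     # plain averaging: ruined by the attackers
robust = np.median(msgs, axis=0)  # coordinate-wise median: resists them

print("error of plain mean   :", np.linalg.norm(naive - true_grad))
print("error of coord. median:", np.linalg.norm(robust - true_grad))
```

The methods in the paper replace these toy ingredients with variance-reduced gradient estimators, robust aggregators with formal guarantees, and, in Byz-EF21, error feedback to handle biased compressors such as top-k; that combination is what yields the improved rates.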