About me

Greetings, I’m Ahmad Rammal, a dedicated individual with a profound enthusiasm for mathematics, machine learning, and ethics. With a background in Applied Mathematics from Ecole polytechnique and ongoing master’s studies in the MVA program, I enjoy tackling challenging problems in the field of AI. Beyond academics, you can find me pursuing my hobbies, which include calisthenics, playing the guitar, cycling, meditation, and reading.

News

20 January 2024: Paper accepted at AISTATS 2024

Thrilled to announce that our research paper, “Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates,” has been accepted at AISTATS 2024. A heartfelt thank you to everyone involved in making this outcome possible!

Paper available here.

15 January 2024: New Paper Available on arXiv

You can access the full paper on arXiv by clicking here.

I’m excited to share that our latest research paper, titled “Correlated Quantization for Faster Nonconvex Distributed Optimization,” is now available on arXiv. This work was conducted during my internship at KAUST in the group of Peter Richtarik, in collaboration with Yury Demidovich and Andrei Panferov.

Paper Overview

In this paper, we focus on the use of correlated quantizers, introduced by Suresh et al. in 2022, and demonstrate how these quantizers offer benefits over traditional independent quantizers in terms of communication complexity. The paper’s primary contribution is the analysis and improvement of the MARINA algorithm, a leading distributed non-convex optimization method. By incorporating correlated quantizers, we show that our enhanced version of MARINA outperforms both the original MARINA and the distributed Stochastic Gradient Descent (SGD) method of Suresh et al. in terms of communication efficiency. This extension significantly broadens the applicability of the MARINA algorithm beyond the conventional setup, which typically involves independent and unbiased compressors. The paper also presents extensive experimental results that support our theoretical claims.
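
To give a flavor of the core idea, here is a minimal Python sketch of independent versus correlated (jointly stratified) stochastic rounding for distributed mean estimation. It is only a toy illustration under simplifying assumptions (scalar values in [0, 1], one value per worker, 1-bit quantization), not the exact quantizer or analysis from the paper, and the function names are mine.

```python
import numpy as np

def independent_quantize(x, rng):
    """Stochastic rounding of values in [0, 1] to {0, 1}:
    each worker draws its own independent uniform."""
    u = rng.random(x.shape)
    return (u < x).astype(float)

def correlated_quantize(x, rng):
    """Same rounding rule, but the workers' uniforms form one jointly
    stratified (shuffled-grid) sample, so individual rounding errors
    tend to cancel when the server averages the quantized values."""
    n = x.shape[0]
    u = (rng.permutation(n) + rng.random(n)) / n  # one uniform per grid cell
    return (u < x).astype(float)

rng = np.random.default_rng(0)
x = rng.random(1000)            # one scalar per "worker"
true_mean = x.mean()

err_ind = abs(independent_quantize(x, rng).mean() - true_mean)
err_cor = abs(correlated_quantize(x, rng).mean() - true_mean)
print(f"independent quantizer error: {err_ind:.4f}")
print(f"correlated quantizer error:  {err_cor:.4f}")
```

On a typical run, the correlated scheme’s rounding errors partially cancel across workers, so the averaged estimate lands much closer to the true mean; this cancellation is the intuition behind the improved communication complexity.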

15 October 2023: New Paper Available on arXiv

You can access the full paper on arXiv by clicking here.

I’m excited to share that our latest research paper, titled “Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates,” is now available on arXiv. This work was conducted during my internship at KAUST in the group of Peter Richtarik.

Paper Overview

In our recent paper, we focus on Byzantine-robust learning with compression, a crucial element for distributed optimization in collaborative learning scenarios. We introduce Byz-DASHA-PAGE, a novel Byzantine-robust method with compression, which offers superior convergence rates, smaller neighborhood sizes in heterogeneous cases, and increased tolerance to Byzantine workers compared to the state-of-the-art Byz-VR-MARINA. Additionally, we present Byz-EF21, the first Byzantine-robust method with communication compression and error feedback.
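
As a rough illustration of the two ingredients such methods combine, here is a small Python sketch in which honest workers send unbiasedly compressed stochastic gradients and the server replaces plain averaging with a coordinate-wise median, a simple robust aggregator. This is a conceptual toy under my own assumptions (the generic quantizer and the names below are hypothetical, not the specific compressor or aggregator from the paper), and the actual algorithms we analyze are considerably more involved.

```python
import numpy as np

def stochastic_quantize(g, levels, rng):
    """Unbiased stochastic quantization of each coordinate onto a
    uniform grid of `levels` points spanning [-max|g|, max|g|]."""
    scale = np.max(np.abs(g)) + 1e-12
    y = (g / scale + 1.0) / 2.0 * (levels - 1)   # map to [0, levels-1]
    lo = np.floor(y)
    q = lo + (rng.random(g.shape) < (y - lo))    # randomized rounding
    return (q / (levels - 1) * 2.0 - 1.0) * scale

rng = np.random.default_rng(1)
d, n_workers, n_byz = 20, 10, 3
true_grad = rng.normal(size=d)

messages = []
for i in range(n_workers):
    if i < n_byz:
        # Byzantine worker: sends an arbitrary (here, huge) vector
        messages.append(rng.normal(scale=100.0, size=d))
    else:
        # honest worker: compresses a noisy stochastic gradient
        noisy = true_grad + 0.1 * rng.normal(size=d)
        messages.append(stochastic_quantize(noisy, levels=5, rng=rng))

stacked = np.stack(messages)
print("plain mean error:       ", np.linalg.norm(stacked.mean(axis=0) - true_grad))
print("coordinate-wise median: ", np.linalg.norm(np.median(stacked, axis=0) - true_grad))
```

On a typical run, the three outliers wreck the plain average while the coordinate-wise median stays close to the true gradient, which is exactly the kind of failure of naive averaging that robust aggregation is meant to guard against, here combined with communication compression on the honest workers’ side.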