Abstract:
The training and execution of machine learning models on quantum hardware are typically limited by the number of available qubits. A potential approach to overcoming this limitation is Distributed Quantum Machine Learning (DQML), where models are partitioned and executed across multiple quantum computers. While this increases the number of available qubits and potentially enables the training of larger models, it also introduces substantial classical and quantum communication overhead, leading to increased computational costs and extended training times. To investigate this approach and its limitations, this thesis presents a DQML model using a classical server and two quantum clients, implemented with the distributed quantum framework NetQASM. We evaluated the model on datasets with two and four features using a quantum network simulator, where it achieved classification performance comparable to that of a centralized quantum baseline. To address the communication overhead, which resulted in training times of 50 to 500 minutes per epoch, optimizations in circuit design, entanglement generation, and distributed gate execution were implemented and evaluated. These adaptations led to a reduction in runtime of up to 60% while maintaining competitive classification accuracy.
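The distributed gate execution mentioned above is typically realized as a non-local CNOT: the two clients consume one shared EPR pair and exchange two classical bits to apply a CNOT whose control and target qubits live on different machines. The following is a minimal sketch of that pattern in the style of the NetQASM Python SDK; the node names ("client_a", "client_b"), the state preparation, and the wiring into a trainable model are illustrative assumptions and not the thesis' actual implementation.

```python
from netqasm.sdk import EPRSocket, Qubit
from netqasm.sdk.external import NetQASMConnection, Socket

# --- Side A: holds the control qubit (hypothetical app name "client_a") ---
epr_socket = EPRSocket("client_b")
cl_socket = Socket("client_a", "client_b")

with NetQASMConnection("client_a", epr_sockets=[epr_socket]) as conn:
    ctrl = Qubit(conn)                 # local control qubit of the partitioned circuit
    ctrl.H()                           # example state preparation (assumption)
    comm = epr_socket.create()[0]      # half of a shared EPR pair
    ctrl.cnot(comm)                    # entangle control with the communication qubit
    m1 = comm.measure()
    conn.flush()                       # make the measurement outcome available
    cl_socket.send(str(int(m1)))       # first classical bit of the non-local CNOT
    m2 = int(cl_socket.recv())         # correction bit from the target side
    if m2 == 1:
        ctrl.Z()                       # phase correction completes the distributed CNOT

# --- Side B: holds the target qubit (hypothetical app name "client_b") ---
epr_socket = EPRSocket("client_a")
cl_socket = Socket("client_b", "client_a")

with NetQASMConnection("client_b", epr_sockets=[epr_socket]) as conn:
    target = Qubit(conn)               # local target qubit of the partitioned circuit
    comm = epr_socket.recv()[0]        # other half of the EPR pair
    conn.flush()
    m1 = int(cl_socket.recv())
    if m1 == 1:
        comm.X()
    comm.cnot(target)                  # apply the CNOT locally on the target side
    comm.H()
    m2 = comm.measure()                # X-basis measurement of the communication qubit
    conn.flush()
    cl_socket.send(str(int(m2)))       # second classical bit, sent back for correction
```

Each such gate costs one entangled pair and one classical round trip, which illustrates why the per-epoch training times reported in the abstract are dominated by entanglement generation and classical communication rather than by the circuits themselves.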
Author:
Kian Izadi
Advisors:
Leo Sünkel, Michael Kölle, Thomas Gabor, Claudia Linnhoff-Popien
Student Thesis | Published March 2025 | Copyright © QAR-Lab
Direct inquiries regarding this work to the Advisors