Abstract:
This thesis explores the implementation and evaluation of Quantum Federated Learning (QFL), in which Variational Quantum Circuits (VQCs) are collaboratively trained across multiple quantum clients. The primary focus is a comparison of the performance and trainability of QFL against traditional non-federated quantum machine learning approaches on the MNIST dataset. Experiments were conducted with 2, 3, 4, and 5 clients, each processing a different subset of the data, and with varying numbers of layers (1, 2, and 4) in the quantum circuits. Trainability was assessed by tracking accuracy, loss, and gradient norms throughout the training process. The results show that while QFL enables collaborative learning and yields clear improvements in these metrics during training, the non-federated baseline models generally achieve better final accuracy and loss, owing to their uninterrupted optimization process. Additionally, the impact of increasing the number of layers on training stability and performance was examined.
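For illustration, the following is a minimal sketch of the kind of training loop the abstract describes: several clients each take a few local optimization steps on a small VQC, and their parameters are then averaged into a global model (FedAvg-style aggregation), with the gradient norm tracked as a simple trainability signal. The use of PennyLane, the StronglyEntanglingLayers ansatz, the synthetic stand-in data, and all hyperparameters are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np                      # plain NumPy for the synthetic data
import pennylane as qml
from pennylane import numpy as pnp      # differentiable NumPy for the trainable weights

n_qubits, n_layers, n_clients = 4, 2, 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def vqc(weights, x):
    qml.AngleEmbedding(x, wires=range(n_qubits))                  # feature encoding
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))  # variational layers
    return qml.expval(qml.PauliZ(0))                              # single-qubit readout

def local_loss(weights, X, y):
    loss = 0.0
    for x, label in zip(X, y):
        loss = loss + (vqc(weights, x) - label) ** 2              # squared error
    return loss / len(X)

# Synthetic stand-in for per-client data subsets (e.g. downscaled MNIST features).
rng = np.random.default_rng(0)
client_data = [(rng.uniform(0, np.pi, (8, n_qubits)), rng.choice([-1.0, 1.0], 8))
               for _ in range(n_clients)]

shape = qml.StronglyEntanglingLayers.shape(n_layers=n_layers, n_wires=n_qubits)
global_weights = pnp.array(rng.uniform(0, 2 * np.pi, shape), requires_grad=True)
opt = qml.GradientDescentOptimizer(stepsize=0.1)

for round_idx in range(3):                                        # federated rounds
    local_weights = []
    for X, y in client_data:
        w = pnp.array(global_weights, requires_grad=True)         # start from the global model
        for _ in range(5):                                        # local training steps
            w = opt.step(lambda w_: local_loss(w_, X, y), w)
        local_weights.append(w)
    # FedAvg-style aggregation: average the clients' updated parameters.
    global_weights = pnp.array(pnp.mean(pnp.stack(local_weights), axis=0),
                               requires_grad=True)
    # Gradient norm on one client as a rough trainability indicator.
    g = qml.grad(local_loss)(global_weights, *client_data[0])
    print(f"round {round_idx}: "
          f"loss={float(local_loss(global_weights, *client_data[0])):.4f}, "
          f"grad norm={float(np.linalg.norm(g)):.4f}")
```

A non-federated baseline corresponds to running the same local optimization on the full dataset without the aggregation step, which is the comparison the thesis draws.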
Author:
Sina Mohammad Rezaei
Advisors:
Leo Sünkel, Thomas Gabor, Tobias Rohe, Claudia Linnhoff-Popien
Student Thesis | Published November 2024 | Copyright © QAR-Lab
Please direct inquiries about this work to the advisors.