Abstracts-QAI-EN

Comparison of different hybrid quantum machine learning approaches for image classification on quantum computers

Abstract:

Nowadays, machine learning (ML) and the classification of images are becoming increasingly important. ML is used, among other things, in autonomous vehicles to detect obstacles or in medicine for the automatic detection of diseases. However, the demands on neural networks used for image classification are constantly increasing as the features in the images become more and more complex. A promising solution in this area is quantum computing, or more precisely quantum machine learning (QML). Due to the advantages that the qubits used in quantum computers bring with them, QML approaches could achieve significantly faster and better results than conventional ML methods. Quantum computing is currently in the so-called 'noisy intermediate-scale quantum' (NISQ) era, which means that quantum computers have only a few qubits, and these are prone to errors. Accordingly, quantum machine learning cannot be implemented easily. The solution is hybrid approaches that use classical structures and combine them with quantum circuits.

This work analyzes the hybrid approaches Quanvolutional Neural Network (QCNN), Quantum Transfer Learning (QTL) and Variational Quantum Circuit (VQC). These are trained to classify the images of the MNIST data set. The training takes place several times with different seeds in order to test the robustness of the approaches. The approaches are then compared based on accuracy, loss and training duration. Additionally, a conventional Convolutional Neural Network (CNN) is used for comparison. Finally, the most efficient approach is determined. The evaluation of the experiment shows that the QCNN achieves significantly better results than QTL and VQC. However, the conventional CNN performs better than the QCNN in all metrics.
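
The sketch below illustrates the third of these approaches, a variational quantum circuit used as a classifier. It is a minimal illustration only: the qubit count, the angle embedding and the two entangling layers are assumptions made for the example and are not taken from the thesis architecture.

```python
# Minimal sketch of a variational quantum classifier (assumed architecture,
# not the one used in the thesis): angle embedding of a low-dimensional
# feature vector followed by trainable entangling layers.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def vqc(weights, features):
    # Encode a 4-dimensional feature vector (e.g. a heavily downsampled MNIST image)
    qml.AngleEmbedding(features, wires=range(n_qubits))
    # Trainable entangling layers form the variational part of the classifier
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    # The expectation value of Pauli-Z on the first qubit serves as the class score
    return qml.expval(qml.PauliZ(0))

weights = np.random.uniform(0, np.pi, size=(2, n_qubits, 3))
features = np.array([0.1, 0.5, 0.9, 0.3])
print(vqc(weights, features))  # score in [-1, 1], thresholded for a binary label
```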

Author:

Nicolas Holeczek

Advisors:

Leo Sünkel, Philipp Altmann, Claudia Linnhoff-Popien


Student Thesis | Published December 2024 | Copyright © QAR-Lab
Direct inquiries regarding this work to the advisors


Architectural Influence on Variational Quantum Circuits in Multi-Agent Reinforcement Learning: Evolutionary Strategies for Optimization

Abstract:

The field of Multi-Agent Reinforcement Learning (MARL) is becoming increasingly relevant in domains that involve the interaction of multiple agents, such as autonomous driving and robotics. One challenge in MARL is the exponential growth of dimensions in the state and action spaces. Quantum properties offer a solution by enabling compact data processing and reducing trainable parameters. One drawback of gradient-based optimization methods in Quantum MARL is the possibility of Barren Plateaus impeding effective parameter updating, thereby hindering convergence. Evolutionary Algorithms, however, bypass this issue as they do not rely on gradient information. Building on research that demonstrates the potential of Evolutionary Algorithms in optimizing Variational Quantum Circuits for MARL tasks, we examine how introducing architectural changes into the evolutionary process affects optimization. We explore three different architecture concepts for Variational Quantum Circuits (Layer-Based, Gate-Based, and Prototype-Based) by applying two evolutionary strategies: one involving both recombination and mutation (ReMu), and the other using mutation only (Mu). To evaluate the efficacy of these approaches, we tested them in the Coin Game, comparing them to a baseline without architectural modifications. The mutation-only strategy with the Gate-Based approach yielded the best results, achieving the highest scores, number of coins collected, and own coin rates while using the fewest parameters. Furthermore, a variant of the Gate-Based approach with results comparable to those of the baseline required significantly fewer gates, resulting in an acceleration of the runtime by 90.1%.
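
As a rough illustration of the mutation-only (Mu) strategy over a gate-based circuit representation, the following sketch evolves a population of gate lists by random mutation. The gate set, the mutation rate and the toy fitness function are assumptions made for the example; the thesis scores circuits by their Coin Game performance.

```python
# Toy sketch of the mutation-only (Mu) evolutionary strategy over a gate-based
# circuit representation. Gate set, mutation rate and the placeholder fitness
# are assumptions; the thesis evaluates circuits in the Coin Game.
import random

GATE_SET = ["RX", "RY", "RZ", "CNOT"]

def random_gate(n_qubits):
    name = random.choice(GATE_SET)
    wires = random.sample(range(n_qubits), 2 if name == "CNOT" else 1)
    return (name, wires, random.uniform(0.0, 3.14))

def mutate(circuit, n_qubits, rate=0.2):
    # Replace each gate with probability `rate`, and occasionally grow or
    # shrink the circuit (the architectural change studied in this work).
    child = [random_gate(n_qubits) if random.random() < rate else g for g in circuit]
    if random.random() < rate:
        child.append(random_gate(n_qubits))
    elif len(child) > 1 and random.random() < rate:
        child.pop(random.randrange(len(child)))
    return child

def fitness(circuit):
    # Placeholder: prefer short circuits. In the thesis this would be the
    # episodic return of the MARL agents whose policy uses the circuit.
    return -len(circuit)

def evolve(n_qubits=4, pop_size=8, generations=20):
    population = [[random_gate(n_qubits) for _ in range(6)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elite = population[: pop_size // 2]
        population = elite + [mutate(c, n_qubits) for c in elite]
    return max(population, key=fitness)

print(evolve())
```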

Author:

Karola Schneider

Advisors:

Michael Kölle, Leo Sünkel, Claudia Linnhoff-Popien


Student Thesis | Published November 2024 | Copyright © QAR-Lab
Direct inquiries regarding this work to the advisors


The Trainability of Quantum Federated Learning

Abstract:

This thesis explores the implementation and evaluation of Quantum Federated Learning (QFL), where Variational Quantum Circuits (VQCs) are collaboratively trained across multiple quantum clients. The primary focus is on comparing the performance and trainability of QFL with traditional non-federated quantum machine learning approaches using the MNIST dataset. Experiments were conducted with 2, 3, 4, and 5 clients, each processing different subsets of data, and with varying numbers of layers (1, 2, and 4) in the quantum circuits. The trainability of the models was assessed through the evaluation of accuracy, loss, and gradient norms throughout the training process. The results demonstrate that while QFL enables collaborative learning and shows significant improvements in these metrics during training, the baseline models without federated learning generally exhibit superior performance in terms of final accuracy and loss due to the uninterrupted optimization process. Additionally, the impact of increasing the number of layers on training stability and performance was examined.
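
The federated part of QFL boils down to a parameter-averaging round, sketched below in plain NumPy. The parameter shape, the number of clients and the stubbed local training step are illustrative assumptions, not the setup used in the thesis.

```python
# Sketch of one federated-averaging round: every client refines the current
# global VQC parameters on its own data shard, and the server averages the
# results. Parameter shape, client count and the stubbed local training are
# illustrative assumptions.
import numpy as np

def local_train(params, data):
    # Stand-in for local VQC optimization on the client's MNIST subset.
    return params - 0.01 * np.random.randn(*params.shape)

def federated_round(global_params, client_shards):
    client_params = [local_train(global_params.copy(), shard)
                     for shard in client_shards]
    return np.mean(client_params, axis=0)   # FedAvg: element-wise mean

global_params = np.random.uniform(0, np.pi, size=(2, 4, 3))  # e.g. 2 layers, 4 qubits
client_shards = [None, None, None]                           # three clients
for _ in range(5):                                           # five communication rounds
    global_params = federated_round(global_params, client_shards)
```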

Author:

Sina Mohammad Rezaei

Advisors:

Leo Sünkel, Thomas Gabor, Tobias Rohe, Claudia Linnhoff-Popien


Student Thesis | Published November 2024 | Copyright © QAR-Lab
Direct inquiries regarding this work to the advisors


Investigating the Lottery Ticket Hypothesis for Variational Quantum Circuits

Abstract:

Quantum computing is an emerging field in computer science that has made significant progress in recent years, including in the area of machine learning. Through the principles of quantum physics, it offers the possibility of overcoming the limitations of classical algorithms. However, variational quantum circuits (VQCs), a specific type of quantum circuit with tunable parameters, face a significant challenge from the barren plateau phenomenon, which can hinder the optimization process in certain cases. The Lottery Ticket Hypothesis (LTH) is a recent concept in classical machine learning that has led to notable improvements in neural networks. In this thesis, we investigate whether it can be applied to VQCs. The LTH claims that within a large neural network, there exists a smaller, more efficient subnetwork (a “winning ticket”) that can achieve comparable performance. Applying this approach to VQCs could help reduce the impact of the barren plateau problem. The results of this thesis show that the weak LTH can be applied to VQCs, with winning tickets discovered that retain as little as 26.0% of the original weights. For the strong LTH, where a pruning mask is learned without any training, we found a winning ticket for a binary VQC, performing at 100% accuracy with 45% of the weights remaining. This shows that the strong LTH is also applicable to VQCs. These findings provide initial evidence that the LTH may be a valuable tool for improving the efficiency and performance of VQCs in quantum machine learning tasks.
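
For the weak LTH, winning tickets are typically searched for with iterative magnitude pruning. The sketch below shows that procedure on a VQC-shaped parameter array; the number of rounds, the pruning fraction and the stubbed optimizer are assumptions made for the example only.

```python
# Sketch of iterative magnitude pruning with rewinding, the usual search
# procedure for weak-LTH winning tickets, applied to a VQC-shaped parameter
# array. Rounds, pruning fraction and the stubbed optimizer are assumptions.
import numpy as np

def iterative_magnitude_pruning(init_params, train, rounds=5, prune_frac=0.2):
    mask = np.ones_like(init_params)
    params = init_params.copy()
    for _ in range(rounds):
        params = train(params * mask) * mask            # train the surviving weights
        k = int(prune_frac * mask.sum())                 # prune 20% of the survivors
        alive = np.flatnonzero(mask)
        smallest = alive[np.argsort(np.abs(params.flat[alive]))[:k]]
        mask.flat[smallest] = 0.0
        params = init_params.copy()                      # rewind to the initial values
    return mask

train = lambda p: p + 0.1 * np.random.randn(*p.shape)    # placeholder optimizer
mask = iterative_magnitude_pruning(np.random.randn(2, 4, 3), train)
print("fraction of weights kept:", mask.mean())
```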

Author:

Leonhard Klingert

Advisors:

Michael Kölle, Julian Schönberger, Claudia Linnhoff-Popien


Student Thesis | Published November 2024 | Copyright © QAR-Lab
Direct inquiries regarding this work to the advisors


Quantum Reinforcement Learning via Parameterized Quantum Walks

Abstract:

Random walks find application in various domains of research such as computer science, psychology, finance or mathematics, as they are a fundamental concept in probability theory and stochastics. However, conventional computers quickly reach their limits regarding computational complexity, so other ways of efficiently solving complex problems, such as quantum computing, are needed. Quantum walks, the quantum equivalent of classical random walks, use quantum effects such as superposition and entanglement to be more efficient than their classical counterparts. Nevertheless, running programs on near-term intermediate-scale quantum devices presents some challenges due to high error rates, noise, and the limited number of available qubits. For a large number of graph problems, Gray Code Directed Edges (GCDE) encoding counteracts these problems by reducing the required number of qubits through an efficient representation of bipartite graphs using Gray code.

This work investigates random walks in grid worlds and glued trees using classical reinforcement learning strategies such as Proximal Policy Optimization or Deep Q-learning Networks. In a next step, these environments are rebuilt using the efficient GCDE encoding. The environments are translated into parameterized quantum circuits whose parameters are optimized and learned by the walker. The contribution of this work comprises the application of the efficient GCDE encoding in quantum environments and a comparison between a quantum and a classical random walker regarding training times and distances to the target. Furthermore, the effects of different start positions during training and evaluation are considered.
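
The qubit savings of GCDE-style encodings rest on Gray code, in which neighbouring indices differ in exactly one bit. The following sketch shows only this labeling step for a small grid-world corridor; the full mapping of directed edges used in the thesis is not reproduced here.

```python
# Gray-code labeling of states: neighbouring indices differ in exactly one
# bit, so a step in a 1-D corridor flips a single qubit. This shows only the
# labeling idea behind GCDE, not the full directed-edge encoding of the thesis.
def gray_code(i: int) -> int:
    return i ^ (i >> 1)

def encode_position(index: int, n_bits: int) -> str:
    return format(gray_code(index), f"0{n_bits}b")

# A corridor of 8 cells needs only 3 (qu)bits:
for cell in range(8):
    print(cell, encode_position(cell, 3))
```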

Author:

Sabrina Egger

Advisors:

Jonas Stein, Michael Kölle, Claudia Linnhoff-Popien


Student Thesis | Published October 2024 | Copyright © QAR-Lab
Direct inquiries regarding this work to the advisors


Optimization of Variational Quantum Circuits for Hybrid Quantum Proximal Policy Optimization Algorithms

Abstract:

Quantum computers, which are subject to current research, offer, apart from the hope for a quantum advantage, the chance of reducing the number of trainable parameters used. This is especially interesting for machine learning, since it could lead to a faster learning process and lower the use of computational resources. In the current Noisy Intermediate-Scale Quantum (NISQ) era, the limited number of qubits and quantum noise make learning a difficult task. Therefore, research focuses on Variational Quantum Circuits (VQCs), hybrid algorithms that combine a parameterized quantum circuit with classical optimization and need only few qubits to learn. The literature of recent years proposes some interesting approaches to solving reinforcement learning problems with VQCs, utilizing promising strategies to improve their results that deserve closer investigation. In this work we investigate data re-uploading, input and output scaling, and an exponentially declining learning rate for the actor-VQC of a Quantum Proximal Policy Optimization (QPPO) algorithm in the Frozen Lake and Cart Pole environments, with respect to their ability to reduce the parameters of the VQC in relation to its performance. Our results show an increase in hyperparameter stability and performance for data re-uploading and the exponentially declining learning rate. While input scaling has no effect on parameter effectiveness, output scaling can achieve powerful greediness control and lead to an increase in learning speed and robustness.
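
The sketch below combines the three circuit-level ingredients examined here (data re-uploading, input scaling and output scaling) in a single actor circuit. Qubit count, layer count and the softmax policy head are assumptions made for the example and do not reproduce the exact QPPO actor of this work.

```python
# Sketch of an actor circuit with data re-uploading, trainable input scaling
# and trainable output scaling. Qubit/layer counts and the softmax policy head
# are assumptions for illustration, not the exact QPPO actor of this work.
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_layers = 4, 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def actor_circuit(weights, input_scale, state):
    for layer in range(n_layers):
        # Data re-uploading: the (scaled) observation is encoded again in every layer
        qml.AngleEmbedding(input_scale[layer] * state, wires=range(n_qubits))
        qml.StronglyEntanglingLayers(weights[layer][None], wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weights = np.random.uniform(0, np.pi, (n_layers, n_qubits, 3))
input_scale = np.ones((n_layers, n_qubits))
output_scale = np.ones(n_qubits)      # larger values sharpen (greedier) policies

state = np.array([0.2, -0.1, 0.4, 0.0])
logits = output_scale * np.array(actor_circuit(weights, input_scale, state))
probs = np.exp(logits) / np.sum(np.exp(logits))   # softmax policy over four actions
print(probs)
```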

Author:

Timo Witter

Advisors:

Michael Kölle, Philipp Altmann, Claudia Linnhoff-Popien


Student Thesis | Published February 2024 | Copyright © QAR-Lab
Direct inquiries regarding this work to the advisors


Path-Connectedness of the Boundary between Features that Are Labeled Differently by a Single Layer Perceptron

Abstract:

Due to the remarkable advancements in high-performance computing, machines can process an increasingly large amount of data to adjust the numerous parameters of a Machine Learning model (ML model). In this way, the machine recognizes and learns patterns and may come to good and fast decisions. However, the success of an ML model does not depend only on the performance of the computer it is deployed on, which ensures the capability of processing huge databases. Mostly, a large amount of data is helpful, but it is not by itself the key to obtaining a reliable model. Even models with just a few trainable parameters, for which smaller data sets are sufficient for training, can produce stunning outputs if the basic model is chosen adequately and fits the data and the task.

From an abstract point of view, ML models are parameterized functions whose parameters are optimized during the learning process. To examine whether a certain ML model qualitatively fits, we can set up requirements in a mathematical way. Here, we discuss specifications that do not consider a concrete assignment of the parameters but expect a certain behavior of the function corresponding to a model for arbitrary parameters. Subsequently, we can prove that a certain model fulfills them, or give a specific counter-example, which shows that a certain mathematical property does not hold, in general, for the regarded model.

In this thesis, we consider a Single Layer Perceptron (SLP), the root of Deep Neural Networks, that categorizes features between two different labels. We show that under certain preconditions the boundary between the two categories within the feature space is path-connected. This indicates that the SLP is a proper choice if we have prior knowledge about the features: if we know that the boundary between the two categories is path-connected in reality, we can exclude models that generate a boundary with gaps.
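
A simplified special case illustrates the statement: with a threshold activation and a nonzero weight vector, the boundary produced by an SLP is an affine hyperplane, which is convex and therefore path-connected. The thesis establishes the result under more general preconditions.

```latex
% Simplified special case (threshold activation, w \neq 0); the thesis works
% under more general preconditions.
\[
  \operatorname{label}(x) =
  \begin{cases}
    1, & w^{\top} x + b \ge 0,\\
    0, & \text{otherwise,}
  \end{cases}
  \qquad
  B = \{\, x \in \mathbb{R}^n : w^{\top} x + b = 0 \,\}.
\]
% For x, y \in B the segment \gamma(t) = (1-t)\,x + t\,y, t \in [0,1],
% satisfies w^{\top}\gamma(t) + b = (1-t)(w^{\top}x + b) + t(w^{\top}y + b) = 0,
% so it stays in B. Hence B is convex and, in particular, path-connected.
```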

Author:

Remo Kötter

Advisors:

Maximilian Balthasar Mansky, Thomas Gabor, Claudia Linnhoff-Popien


Student Thesis | Published December 2023 | Copyright © QAR-Lab
Direct inquiries regarding this work to the advisors


Using Quantum Machine Learning to Predict Asset Prices in Financial Markets

Abstract:

In the financial world, a lot of effort is spent on predicting future asset prices. Gaining even a modest increase in forecasting capability can generate enormous profits. Some statistical models identify patterns, trends, and correlations in past prices, and apply those patterns to forecast future values. A more novel approach is the use of artificial intelligence to learn underlying trends in the data and predict future prices. As quantum computing matures, its potential applications in this task have also become increasingly interesting. In this thesis, several different models of these various types are implemented: ARIMA, RBM, LSTM, and QDBM (Quantum Deep Boltzmann Machine). These models are trained on historical asset prices and used to predict future asset prices. The model predictions are then also used as the input for a simulated trading algorithm, which investigates the effectiveness of these predictions in the active trading of assets. The predictions are performed for ten different assets listed on the NYSE, NASDAQ, and XETRA, for the five-year period from 2018 to 2022. The assets were chosen from varying industrial sectors and with diverse price histories. Trading based on the model predictions was able to either match or outperform the classic buy-and-hold approach in nine out of the ten assets tested.
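
The trading evaluation can be pictured as a simple signal-following rule compared against buy-and-hold, as in the toy sketch below. The all-in/all-out rule and the naive stand-in forecast are assumptions made for illustration; they are not the exact strategy or models of the thesis.

```python
# Toy comparison of a prediction-following trading rule with buy-and-hold.
# The all-in/all-out rule and the naive stand-in forecast are assumptions;
# the thesis feeds ARIMA/RBM/LSTM/QDBM predictions into its simulation.
import numpy as np

def simulate_trading(prices, predictions, initial_cash=1_000.0):
    cash, shares = initial_cash, 0.0
    for today, predicted_next in zip(prices[:-1], predictions[1:]):
        if predicted_next > today and cash > 0:       # expected rise: buy everything
            shares, cash = cash / today, 0.0
        elif predicted_next < today and shares > 0:   # expected fall: sell everything
            cash, shares = shares * today, 0.0
    return cash + shares * prices[-1]

prices = np.array([100.0, 101.0, 99.0, 103.0, 105.0, 102.0])
naive_forecast = np.roll(prices, 1)                   # placeholder model output
print("strategy value:    ", simulate_trading(prices, naive_forecast))
print("buy-and-hold value:", 1_000.0 / prices[0] * prices[-1])
```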

Author:

Maximilian Adler

Advisors:

Claudia Linnhoff-Popien, Jonas Stein, Jonas Nüßlein, Nico Kraus (Aqarios GmbH)


Student Thesis | Published November 2023 | Copyright © QAR-Lab
Direct inquiries regarding this work to the advisors


Quantum-Enhanced Denoising Diffusion Models

Abstract:

Machine learning models for generating images have gained much attention in the past year. DALL-E, Craiyon and Stable Diffusion can generate high-resolution images from a short description (prompt) of the desired image typed by the user. Another growing field is quantum computing, particularly quantum-enhanced machine learning. Quantum computers solve problems using their unique quantum mechanical properties. In this work we investigate how the use of quantum-enhanced machine learning and Variational Quantum Circuits can improve image generation by diffusion-based models. The two major weaknesses of classical diffusion models are addressed: the low sampling speed and the high number of required parameters. Implementations of a quantum-enhanced denoising diffusion model are presented, and their performance is compared with that of classical models by training the models on well-known datasets (MNIST digits and fashion, CIFAR10). We show that our models deliver better performance (measured in FID, SSIM and PSNR) than classical models with a comparable number of parameters.
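
For orientation, the sketch below shows the classical forward (noising) process that any denoising diffusion model, quantum-enhanced or not, learns to invert. The noise schedule and image size are illustrative assumptions, and the quantum denoiser itself is not shown.

```python
# Classical forward (noising) process q(x_t | x_0) that a denoising diffusion
# model learns to invert. Noise schedule and image size are illustrative; the
# quantum-enhanced denoiser that predicts the noise is not shown here.
import numpy as np

def forward_diffusion(x0, t, betas):
    # q(x_t | x_0) = N(sqrt(alpha_bar_t) * x0, (1 - alpha_bar_t) * I)
    alphas = 1.0 - betas
    alpha_bar_t = np.prod(alphas[: t + 1])
    noise = np.random.randn(*x0.shape)
    x_t = np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * noise
    return x_t, noise

betas = np.linspace(1e-4, 0.02, 1000)   # linear noise schedule over 1000 steps
x0 = np.random.rand(28, 28)             # stand-in for one MNIST image in [0, 1]
x_t, eps = forward_diffusion(x0, t=500, betas=betas)
# The denoiser (classical U-Net or its quantum-enhanced variant) is trained to
# predict `eps` from `x_t` and `t`; sampling runs this prediction in reverse.
```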

Author:

Gerhard Stenzel

Advisors:

Claudia Linnhoff-Popien, Michael Kölle, Jonas Stein, Andreas Sedlmeier


Student Thesis | Published October 2023 | Copyright © QAR-Lab
Direct inquiries regarding this work to the advisors


Dimensionality Reduction with Autoencoders for Efficient Classification with Variational Quantum Circuits

Abstract:

Quantum computing promises performance advantages, especially for data-intensive and complex computations. However, we are currently in the Noisy Intermediate-Scale Quantum era with a limited number of qubits available, which makes it challenging to realize these potential quantum advantages in machine learning. Several solutions, like hybrid transfer learning, have been proposed, whereby a pre-trained classical neural network acts as the feature extractor and a variational quantum circuit as the classifier. While these approaches often yield good performance, it is not possible to clearly determine the contributions of the classical and the quantum part. The goal of this thesis is therefore to introduce a hybrid model that addresses these limitations and implements a clear distinction between the classical and quantum parts. An autoencoder is used to reduce the input dimension. We compare the performance of transfer learning models (Dressed Quantum Circuit and SEQUENT) and a variational quantum circuit with amplitude embedding against our model. Additionally, the performance of a purely classical neural network on the uncompressed input and of an autoencoder in combination with a neural network is examined. We compare the test accuracies of the models on the datasets Banknote Authentication, Breast Cancer Wisconsin, MNIST and AudioMNIST. The results show that the classical neural networks and the hybrid transfer learning approaches perform better than our model, which matches our expectation that the classical part in transfer learning plays the major role in the overall performance. Compared to a variational quantum circuit with amplitude embedding, no significant difference can be observed, so that our model is a reasonable alternative to it.
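
The proposed split can be sketched as follows: a (here heavily simplified) encoder compresses the input to a small latent vector, which a VQC with amplitude embedding then classifies. Latent dimension, qubit count and layer count are assumptions made for the example, not the thesis configuration.

```python
# Sketch of the proposed split: a (heavily simplified) encoder compresses the
# input, and a VQC with amplitude embedding classifies the latent vector.
# Latent size, qubit count and layer count are illustrative assumptions.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 3                             # 2**3 = 8 latent features fit on 3 qubits
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def vqc_classifier(weights, latent):
    # Amplitude embedding writes the normalized 8-dim latent vector into the state
    qml.AmplitudeEmbedding(latent, wires=range(n_qubits), normalize=True)
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))

def encode(x, enc_weights):
    # Stand-in for the trained autoencoder's encoder: one linear layer + tanh
    return np.tanh(enc_weights @ x)

enc_weights = np.random.randn(8, 784) * 0.01   # 784-dim input -> 8-dim latent
weights = np.random.uniform(0, np.pi, (2, n_qubits, 3))
x = np.random.rand(784)                        # e.g. one flattened MNIST image
print(vqc_classifier(weights, encode(x, enc_weights)))
```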

Author:

Jonas Maurer

Advisors:

Claudia Linnhoff-Popien, Michael Kölle, Philipp Altmann, Leo Sünkel


Student Thesis | Published October 2023 | Copyright © QAR-Lab
Direct inquiries regarding this work to the advisors



QAR-Lab – Quantum Applications and Research Laboratory
Ludwig-Maximilians-Universität München
Oettingenstraße 67
80538 Munich
Phone: +49 89 2180-9153
E-mail: qar-lab@mobile.ifi.lmu.de
