https://doi.org/10.1140/epjqt/s40507-025-00397-4
Research
Conditional diffusion-based parameter generation for quantum approximate optimization algorithm
1 College of Artificial Intelligence, Nanjing Tech University, Puzhu South Road, 211800, Nanjing, China
2 College of Information Engineering, Taizhou University, Jichuan East Road, 225300, Taizhou, China
3 College of Computer Science, Shaanxi Normal University, Chang An Road, 710062, Xi’an, China
Received: 17 January 2025
Accepted: 28 July 2025
Published online: 11 August 2025
The Quantum Approximate Optimization Algorithm (QAOA) is a hybrid quantum-classical algorithm that shows promise for efficiently solving the Max-Cut problem, a representative combinatorial optimization problem. However, its effectiveness depends heavily on the parameter optimization pipeline, in which the parameter initialization strategy is nontrivial because the optimization landscape is non-convex, complex, and riddled with low-quality local minima. Recent work on diffusion-based generation of classical neural network parameters has demonstrated that neural network training can benefit from initial parameters produced by diffusion models. Whether a diffusion model can likewise enhance the parameter optimization and performance of QAOA by generating well-performing initial parameters, however, remains an open question. In this work, we therefore formulate the search for good initial parameters as a generative task and propose an initial-parameter generation scheme based on dataset-conditioned sampling of pre-trained parameters. Concretely, a generative machine learning model, the denoising diffusion probabilistic model (DDPM), is trained to learn the distribution of pre-trained parameters conditioned on the graph dataset. Intuitively, the proposed framework distills knowledge from pre-trained parameters to generate well-performing initial parameters for QAOA. To benchmark our framework, we adopt trotterized quantum annealing (TQA)-based and graph neural network (GNN) prediction-based initialization protocols as baselines. Through numerical experiments on Max-Cut problem instances of various sizes, we show that the conditional DDPM consistently generates high-quality initial parameters, improves convergence of the approximation ratio, and exhibits greater robustness against local minima than the baselines.
The experimental results further indicate that a conditional DDPM trained on small problem instances can be extrapolated to larger ones, demonstrating the extrapolation capacity of our framework with respect to qubit number.
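To illustrate the sampling step described above, the following is a minimal NumPy sketch, not the paper's implementation, of a DDPM reverse process that draws initial QAOA parameters (p angles γ followed by p angles β) conditioned on a graph feature vector. The denoiser `eps_model`, the noise schedule values, and the conditioning vector `cond` are all hypothetical stand-ins; in the paper's framework the denoiser would be a network trained on pre-trained parameters conditioned on the graph dataset.

```python
import numpy as np

def make_schedule(T=50, beta_start=1e-4, beta_end=0.02):
    """Linear variance schedule (illustrative values)."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def eps_model(x_t, t, cond):
    # Hypothetical denoiser: in the actual framework this is a trained
    # network conditioned on the graph dataset; here a fixed linear map
    # stands in so the sampling loop is runnable.
    return 0.1 * x_t + 0.01 * cond.mean()

def ddpm_sample(cond, p=2, T=50, rng=None):
    """Generate 2p QAOA parameters (gammas, then betas) via the
    standard DDPM reverse update (Ho et al., 2020)."""
    rng = np.random.default_rng(rng)
    betas, alphas, alpha_bars = make_schedule(T)
    x = rng.standard_normal(2 * p)  # start from pure Gaussian noise
    for t in reversed(range(T)):
        z = rng.standard_normal(2 * p) if t > 0 else np.zeros(2 * p)
        eps = eps_model(x, t, cond)
        # Remove the predicted noise, then re-inject scheduled noise.
        x = (x - (betas[t] / np.sqrt(1.0 - alpha_bars[t])) * eps) / np.sqrt(alphas[t])
        x = x + np.sqrt(betas[t]) * z
    return x[:p], x[p:]  # (gammas, betas) used to initialize QAOA

cond = np.array([0.3, 0.7, 0.5])  # hypothetical graph embedding
gammas, betas_qaoa = ddpm_sample(cond, p=2, rng=0)
```

The sampled angles would then be handed to a classical optimizer as the QAOA starting point, replacing random or TQA-based initialization.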
Key words: Quantum Approximate Optimization Algorithm / Denoising diffusion probabilistic model / Parameter initialization / Max-Cut
© The Author(s) 2025
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.