Simulating large quantum systems and solving large-scale linear algebra problems are tasks that lie beyond the reach of classical computers due to their extremely high computational costs. While quantum computers could unlock such tasks, serious constraints remain, including limited qubit counts and noise processes that limit circuit depth.
Variational quantum algorithms (VQAs), which employ a classical optimizer to train a parameterized quantum circuit, have emerged as a leading strategy for working within these constraints. Training VQAs, however, requires huge numbers of iterations to converge and suffers from the barren plateau problem, in which the variance of the gradient decreases exponentially as the number of qubits increases.
New research from Imperial College London tackles these issues by showing how to optimally train a VQA to represent quantum states and by introducing the generalized quantum natural gradient (GQNG), a stable variant of the quantum natural gradient (QNG) that can be trained free of barren plateaus.
VQAs consist of a parameterized quantum circuit (PQC) that generates a quantum state via a unitary controlled by an M-dimensional parameter vector. The goal is to learn parameters that approximate a given target state by maximizing the fidelity. A common approach is standard gradient ascent, which follows the direction of steepest increase in fidelity. But the fidelity landscape is generally not Euclidean, so the standard gradient may not point in the optimal update direction. In the paper Optimal Training of Variational Quantum Algorithms Without Barren Plateaus, the researchers capture the quantum geometry of the parameter space with the quantum Fisher information metric (QFIM). The parameter space can then be partially transformed to obtain the GQNG, which moves in the optimal direction and is more stable than the traditional QNG, thus resolving the QNG's convergence issues. The fidelity between PQC states also behaves as a Gaussian kernel, which can be used to calculate an optimal adaptive learning rate for each gradient update.
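The "partial transformation" of the parameter space can be sketched in a few lines of NumPy. This is a minimal illustration, assuming (as the described interpolation between the standard gradient and the QNG suggests) that the GQNG direction takes the form of a fractional inverse power of the QFIM applied to the gradient; the function name and the toy matrix below are ours, not taken from the paper.

```python
import numpy as np

def gqng_direction(grad, qfim, beta):
    """Direction F^(-beta) @ grad: beta=0 recovers plain gradient ascent,
    beta=1 recovers the quantum natural gradient (QNG), and intermediate
    beta interpolates between the two."""
    # Fractional matrix power via eigendecomposition of the symmetric QFIM.
    vals, vecs = np.linalg.eigh(qfim)
    vals = np.clip(vals, 1e-12, None)  # guard against zero eigenvalues
    inv_pow = vecs @ np.diag(vals ** -beta) @ vecs.T
    return inv_pow @ grad

# Toy 2-parameter example with a strongly anisotropic QFIM.
F = np.array([[1.0, 0.0], [0.0, 0.01]])
g = np.array([1.0, 1.0])

d_plain = gqng_direction(g, F, beta=0.0)  # [1, 1]: raw gradient
d_qng   = gqng_direction(g, F, beta=1.0)  # [1, 100]: fully rescaled by F^-1
d_mid   = gqng_direction(g, F, beta=0.5)  # [1, 10]: partial transformation
```

The point of the sketch is that a small beta only partially rescales flat directions of the landscape, which is what keeps the update stable when the QFIM is poorly conditioned.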
To evaluate the performance of the proposed approach, the team ran numerical simulations with various types of expressive PQCs. They measured the average infidelity after a single step of gradient ascent against beta, a hyperparameter that controls the trade-off between stability and optimal updates, and against the learning rate for different initial infidelities of the GQNG.
The results show that infidelity initially decreases as beta increases and more information from the QFIM is used, but beyond beta ≈ 0.6 infidelity rises sharply across all PQC types due to the ill-conditioned inverse of the QFIM. The team also found that the adaptive learning rate calculated by their formula is close to the best learning rate even for larger infidelities.
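The sharp rise at large beta can be understood from how fractional matrix powers scale the condition number: cond(F^-beta) = cond(F)^beta, so pushing beta toward 1 (the plain QNG) amplifies noise along the QFIM's near-singular eigendirections. A small illustrative sketch, where the diagonal matrix is a hypothetical ill-conditioned QFIM, not one from the paper:

```python
import numpy as np

F = np.diag([1.0, 1e-4])  # hypothetical ill-conditioned QFIM (eigenvalues 1 and 1e-4)
conds = []
for beta in (0.0, 0.3, 0.6, 1.0):
    F_inv_beta = np.diag(np.diag(F) ** -beta)  # F^(-beta) for a diagonal matrix
    conds.append(np.linalg.cond(F_inv_beta))
    print(f"beta={beta:.1f}  cond(F^-beta)={conds[-1]:.1f}")
# The condition number grows as cond(F)**beta: from 1 at beta=0
# up to 1e4 at beta=1, so gradient noise along the smallest
# eigendirection is magnified accordingly.
```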
The team also measured mean infidelity after one step of adaptive gradient ascent against initial infidelity, and infidelity against the number of gradient-ascent iterations. The GQNG achieves lower infidelities than the standard gradient, with the different PQCs following nearly the same trajectory. Moreover, GQNG with an adaptive learning rate outperforms the other methods, reducing infidelities by more than an order of magnitude.
In their study, the researchers explicitly describe how to optimally train a PQC to represent a target quantum state, showing that the best direction for gradient updates is given by the GQNG, which utilizes quantum geometric information while avoiding the barren plateau problem. The proposed method can also train various expressive PQCs capable of representing a wide range of quantum states commonly used in VQAs.
The paper Optimal Training of Variational Quantum Algorithms Without Barren Plateaus is on arXiv.
Author: Hecate He | Editor: Michael Sarazen