Quantum computing has become an increasingly attractive research field due to its huge potential for solving complex real-world problems in areas such as optimization, cryptography, chemistry, and the emerging field of quantum natural language processing (QNLP). Existing QNLP approaches, however, require resource-heavy syntactic analysis and a different parameterized quantum circuit for each syntactic sentence structure, limiting their scalability, flexibility, and practicality, particularly on large-scale real-world datasets.
A team from the Baidu Research Institute for Quantum Computing and the University of Technology Sydney addresses these limitations in their new paper Quantum Self-Attention Neural Networks for Text Classification, proposing a simple yet powerful quantum self-attention neural network (QSANN) architecture that is effective, scales to large real-world datasets, and outperforms both syntactic-analysis-based QNLP methods and classical self-attention networks on text classification tasks.
The research team summarizes their main contributions as follows:
- Our proposal is the first QNLP algorithm based on the self-attention mechanism with a detailed circuit implementation scheme. The method can be implemented on NISQ devices and is more practical on large datasets than previously known QNLP methods based on syntactic analysis.
- In QSANN, we introduce the Gaussian projected quantum self-attention, which can efficiently uncover correlations between words in a high-dimensional quantum feature space. Furthermore, visualization of the self-attention coefficients on text classification tasks confirms its ability to focus on the most relevant words.
- We experimentally demonstrate that QSANN outperforms existing QNLP methods based on syntactic analysis, as well as simple classical self-attention neural networks, on several public datasets for text classification. Numerical results also suggest that QSANN is resilient to quantum noise.
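The Gaussian projected self-attention mentioned above scores word pairs with a Gaussian of the difference between projected query and key values, rather than the usual inner product. The following is a minimal classical sketch of that scoring rule; the scalar query/key/value inputs (standing in for measurement outcomes of parameterized circuits) and the function name are illustrative assumptions, not the paper's exact implementation:

```python
import math

def gaussian_self_attention(qs, ks, vs):
    """Illustrative Gaussian self-attention over scalar query, key, and
    value outcomes: score(s, j) is proportional to exp(-(q_s - k_j)^2)."""
    outputs = []
    for q in qs:
        # Unnormalized Gaussian attention scores against every key.
        scores = [math.exp(-(q - k) ** 2) for k in ks]
        total = sum(scores)
        weights = [s / total for s in scores]  # normalize so weights sum to 1
        # Each output is the attention-weighted sum of the value outcomes.
        outputs.append(sum(w * v for w, v in zip(weights, vs)))
    return outputs

# Toy example: the first query lies closest to the first key, so the
# first value dominates its output, and symmetrically for the second.
out = gaussian_self_attention([0.1, 0.9], [0.0, 1.0], [1.0, -1.0])
```

Because the score depends only on the distance between projected query and key values, words whose projections land close together attend strongly to each other.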
The proposed QSANN architecture comprises quantum self-attention layers (QSALs) together with a loss function and its analytical gradients. To perform text classification, QSANN first encodes the input words into a large quantum Hilbert space, then projects them back to a low-dimensional classical feature space via quantum measurement. In this way, the team leverages the high-dimensional quantum feature space and projected quantum models to discover hidden correlations and features in text that are difficult or even impossible to capture with traditional approaches.
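The encode-then-measure projection described above can be illustrated with a small state-vector simulation. Note the specific choices below (amplitude encoding of a toy word vector, a Pauli-Z expectation on the first qubit as the classical readout) are assumptions for illustration, not necessarily the circuit used in the paper:

```python
import numpy as np

def amplitude_encode(x):
    """Encode a classical vector as a normalized quantum state vector
    (a 2^n-dimensional vector for n qubits)."""
    x = np.asarray(x, dtype=float)
    return x / np.linalg.norm(x)

def measure_z(state):
    """Project the state back to a classical scalar via the expectation
    value of Pauli-Z on the first qubit, <psi| Z (x) I |psi>."""
    half = len(state) // 2
    # Z on the first qubit contributes +1 for the first half of the
    # amplitudes and -1 for the second half.
    return float(np.sum(state[:half] ** 2) - np.sum(state[half:] ** 2))

word_vec = [0.5, 0.5, 0.5, 0.5]     # toy 2-qubit word embedding (assumed)
state = amplitude_encode(word_vec)  # lives in a 4-dimensional Hilbert space
feature = measure_z(state)          # low-dimensional classical feature
```

Measurements like this return expectation values in [-1, 1], so the high-dimensional quantum state is collapsed into a handful of classical features that downstream layers can consume.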
The team compared QSANN’s text classification performance against a syntactic-analysis-based quantum model on simple tasks from the MC (meaning classification) and RP (relative clause evaluation) datasets, and against a classical self-attention network (CSANN) and a naive baseline on the Yelp, IMDb, and Amazon public sentiment analysis datasets. In these evaluations, QSANN achieved 100 percent accuracy on the MC task and outperformed the CSANN benchmark on the Yelp, IMDb, and Amazon datasets.
The researchers also demonstrate QSANN’s ease of implementation on near-term quantum devices and its robustness to low-level quantum noise, validating the potential of combining self-attention and quantum neural networks for complicated real-world tasks.
The paper Quantum Self-Attention Neural Networks for Text Classification is on arXiv.
Author: Hecate He | Editor: Michael Sarazen