Robust Quantum Neural Networks
Variational Quantum Machine Learning (QML) models typically consist of a data embedding followed by a parametrized quantum circuit, often called a Quantum Neural Network (QNN). QNNs have played a significant role in quantum machine learning applications. However, when deployed on physical hardware, especially current Noisy Intermediate-Scale Quantum (NISQ) devices, QNNs suffer from high gate error rates and therefore achieve markedly lower accuracy than in noise-free simulation. Moreover, the same QNN exhibits distinctly different accuracy across hardware platforms, mainly because their gate error rates differ, which severely limits practical performance.
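To make this concrete, below is a minimal sketch of such a variational model in Qiskit. The qubit count, gate layout, and parameter names are illustrative assumptions, not the circuit used in the paper.

    # Minimal variational QML model: data embedding + parametrized circuit (QNN).
    # Hypothetical layout for illustration; not the paper's exact architecture.
    from qiskit import QuantumCircuit
    from qiskit.circuit import Parameter

    n_qubits = 4
    qc = QuantumCircuit(n_qubits)

    # Data embedding: encode classical inputs x_i as rotation angles.
    x = [Parameter(f"x{i}") for i in range(n_qubits)]
    for i, xi in enumerate(x):
        qc.ry(xi, i)

    # Parametrized quantum circuit (QNN): trainable rotations + entanglement.
    theta = [Parameter(f"theta{i}") for i in range(n_qubits)]
    for i, ti in enumerate(theta):
        qc.rz(ti, i)
    for i in range(n_qubits - 1):
        qc.cx(i, i + 1)

    # Measure each qubit to read out classical values.
    qc.measure_all()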
Various noise mitigation techniques have been proposed to reduce the impact of noise when running quantum circuits on NISQ devices. However, these are typically either generic techniques that do not exploit QNN-specific properties and are therefore applicable only at the inference stage, or techniques tailored to other algorithms and applications, such as quantum chemistry. This paper proposes a QNN-specific noise-aware framework called RoQNN (Robust QNN) that optimizes QNN robustness in both the training and inference stages, boosts the intrinsic robustness of QNN parameters, and improves accuracy on real quantum machines.
The QNN architecture considered in this work consists of multiple blocks, each with three components: (i) an encoder that embeds classical values into quantum states using rotation gates, (ii) trainable quantum layers containing parameterized gates that perform the ML task, and (iii) measurement of each qubit to obtain a classical value. The measurement outcomes of one block are passed to the next block, and the framework operates as a three-stage pipeline.

First, the measurement-outcome distribution of each qubit is normalized across input samples, during both training and inference, to compensate for information loss. Second, noise is injected into the QNN training process through error-gate insertion. Injecting noise into neural network training yields a smoother loss landscape and better generalization; by emulating the noisy environment the deployed network will face, noise-injection-based training significantly boosts noise robustness. Concretely, error gates are iteratively sampled, inserted into the QNN, and the weights are updated during training. Finally, post-measurement quantization maps the measurement outcomes to discrete values, which corrects quantum-noise-induced errors by value clamping and thus prevents cascaded error accumulation. Moreover, by sparsifying the parameter space, quantization reduces model complexity and acts as a regularizer that mitigates potential overfitting.
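The sketch below illustrates the three stages on a batch of per-qubit expectation values. The array shapes, the angle-perturbation model standing in for actual error-gate insertion, and the quantization rule are all assumptions made for illustration, not the paper's exact recipe.

    import numpy as np

    rng = np.random.default_rng(0)

    def normalize_measurements(expvals):
        """Stage 1: normalize each qubit's measurement-outcome distribution
        across the batch of input samples (zero mean, unit variance per qubit)."""
        mean = expvals.mean(axis=0, keepdims=True)
        std = expvals.std(axis=0, keepdims=True) + 1e-8
        return (expvals - mean) / std

    def inject_training_noise(angles, error_prob=0.05):
        """Stage 2 (emulated): the paper inserts sampled error gates into the
        circuit during training; here that effect is approximated by randomly
        over-rotating a fraction of the trainable rotation angles."""
        mask = rng.random(angles.shape) < error_prob
        return angles + mask * rng.normal(0.0, 0.1, size=angles.shape)

    def quantize_measurements(expvals, n_levels=4):
        """Stage 3: clamp measurement outcomes to a few discrete levels so that
        small noise-induced deviations snap back to the nearest level."""
        levels = np.linspace(expvals.min(), expvals.max(), n_levels)
        idx = np.abs(expvals[..., None] - levels).argmin(axis=-1)
        return levels[idx]

    # Usage: a batch of 32 samples, 4 qubits, expectation values in [-1, 1].
    batch = rng.uniform(-1.0, 1.0, size=(32, 4))
    out = quantize_measurements(normalize_measurements(batch))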
Experiments were carried out on 8 prototypical ML classification tasks, including MNIST, Fashion-MNIST, and CIFAR.
In terms of hardware, IBMQ quantum computers were used, with compilation done via IBM's Qiskit APIs. All experiments were run with the maximum offered 8192 shots. The experimental results show that RoQNN improves accuracy by 20-40% on typical classification tasks and achieves high accuracy with purely quantum parameters on real quantum hardware. Furthermore, the work compares the noise-free measurement-outcome distributions of 4 qubits with their noisy counterparts for MNIST-4 on two devices, showing that post-measurement normalization reduces the mismatch between the two distributions and improves the signal-to-noise ratio (SNR) for each qubit and each individual measurement outcome.
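As a rough illustration of that comparison, the per-qubit mismatch could be quantified as follows. This SNR definition is an assumption for illustration and may differ from the metric used in the paper.

    import numpy as np

    def per_qubit_snr_db(noise_free, noisy):
        """Treat the noise-free expectation values as signal and the deviation
        of the noisy run from them as noise; return per-qubit SNR in dB.
        (Illustrative definition; the paper's metric may differ.)"""
        signal = np.mean(noise_free ** 2, axis=0)
        noise = np.mean((noisy - noise_free) ** 2, axis=0) + 1e-12
        return 10.0 * np.log10(signal / noise)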
The overall results demonstrate that RoQNN improves accuracy over the considered baseline designs, irrespective of QNN model size and design space. Noise injection and quantization each improve accuracy by 9% on their own, while combining the two techniques delivers a 17% accuracy gain. Another crucial advantage of the proposed framework is scalability: the training cost, post-measurement normalization, and quantization all scale linearly with the number of qubits. The real challenge, however, lies in tuning the level of injected noise: too much noise destabilizes training and hence hurts accuracy, while too little does not improve the model's robustness. Other QML models could adopt this architecture, preceded by an analysis of the noise level to be inserted into the quantum system. Such QML models merit further exploration.