From Membership-Privacy Leakage to Quantum Machine Unlearning
Abstract
Quantum machine learning (QML) has the potential to achieve quantum advantage for specific tasks by combining quantum computation with classical machine learning (ML). In classical ML, a significant challenge is membership-privacy leakage, whereby an attacker can infer from model outputs whether specific data were used in training. When specific data must be withdrawn, their influence must also be removed from the trained model. Machine unlearning (MU) addresses this by enabling the model to forget the withdrawn data, thereby preventing membership-privacy leakage. However, this leakage remains underexplored in QML, which raises two research questions: do QML models leak membership privacy about their training data, and can MU methods efficiently mitigate such leakage? We investigate these questions using two quantum neural network (QNN) architectures, a basic QNN and a hybrid QNN, evaluated in noiseless simulations and in demonstrations on cloud quantum devices. To answer the first question, we analyze how quantum constraints shape membership-privacy leakage in QML and formalize a realistic gray-box threat model accordingly. Building on this model, we design a membership inference attack (MIA) tailored to QNN outputs; our results provide clear evidence of membership leakage in both QNNs. To answer the second question, we propose a quantum machine unlearning (QMU) framework comprising three MU mechanisms. Evaluations on both QNN architectures show that QMU removes the influence of the withdrawn data while preserving accuracy on the retained data. A comparative analysis further characterizes the three mechanisms with respect to data dependence, computational cost, and robustness.
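To make the two ideas named in the abstract concrete, the sketch below shows a standard confidence-threshold MIA on classifier output vectors and a generic gradient-ascent unlearning step, one common MU mechanism. This is an illustrative baseline, not the paper's attack or framework; every function name and signature here is hypothetical.

```python
# Minimal sketch (assumptions, not the paper's implementation):
# (1) a confidence-threshold membership inference attack on model outputs,
# (2) gradient-ascent unlearning on the withdrawn ("forget") data.
import numpy as np

def confidence(probs: np.ndarray) -> np.ndarray:
    """Per-sample confidence: probability assigned to the predicted class.
    probs has shape (n_samples, n_classes); rows sum to 1."""
    return probs.max(axis=1)

def threshold_mia(member_probs: np.ndarray, nonmember_probs: np.ndarray,
                  n_thresholds: int = 101) -> float:
    """Sweep a confidence threshold tau and predict 'member' when
    confidence >= tau. Returns the best balanced attack accuracy;
    0.5 corresponds to no measurable membership leakage."""
    scores = np.concatenate([confidence(member_probs),
                             confidence(nonmember_probs)])
    labels = np.concatenate([np.ones(len(member_probs)),
                             np.zeros(len(nonmember_probs))])
    best = 0.5
    for tau in np.linspace(scores.min(), scores.max(), n_thresholds):
        pred = scores >= tau
        tpr = pred[labels == 1].mean()          # members correctly flagged
        tnr = 1.0 - pred[labels == 0].mean()    # non-members correctly passed
        best = max(best, 0.5 * (tpr + tnr))
    return best

def gradient_ascent_unlearn(params: np.ndarray, forget_grad_fn,
                            lr: float = 0.05, steps: int = 10) -> np.ndarray:
    """One generic MU mechanism: ascend the loss on the withdrawn data so
    the model's fit to those samples degrades. forget_grad_fn(params) is
    assumed to return dLoss/dparams evaluated on the forget set only."""
    for _ in range(steps):
        params = params + lr * forget_grad_fn(params)
    return params
```

In an evaluation of the kind the abstract describes, member_probs and nonmember_probs would be a QNN's output distributions on training and held-out samples; attack accuracy noticeably above 0.5 before unlearning and near 0.5 afterward is the qualitative pattern one would look for.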