Table of Contents
- Introduction
- What Are Adversarial Attacks?
- Motivation for Studying Attacks in QML
- Classical Adversarial Attacks: A Brief Overview
- Unique Vulnerabilities in Quantum Models
- Types of Adversarial Attacks in QML
- Perturbation of Input States
- Parameter Perturbation Attacks
- Attacks on Quantum Feature Maps
- Fidelity-Based Adversarial Examples
- White-Box vs Black-Box Attacks in QML
- Gradient-Based Attacks on VQCs
- Adversarial Examples for Quantum Classifiers
- Transferability of Adversarial Examples in QML
- Robustness of Quantum Kernels
- Defending Quantum Models Against Attacks
- Quantum Regularization Techniques
- Noise as a Double-Edged Sword
- Open Problems and Research Challenges
- Conclusion
1. Introduction
Adversarial attacks pose a significant threat to classical machine learning systems. As quantum machine learning (QML) becomes more widespread, understanding its vulnerabilities to similar attacks becomes critical to building robust and trustworthy quantum AI systems.
2. What Are Adversarial Attacks?
- Deliberate perturbations to input data that mislead a model
- Examples include imperceptible noise added to images or signals
- Goal: fool the model without obvious change to the input
3. Motivation for Studying Attacks in QML
- QML systems may be deployed in high-stakes environments
- Quantum systems are inherently noisy and hard to interpret
- Understanding adversarial risks is key to security and trust
4. Classical Adversarial Attacks: A Brief Overview
- FGSM (Fast Gradient Sign Method; a minimal code sketch follows this list)
- PGD (Projected Gradient Descent)
- CW (Carlini-Wagner) and decision-based attacks
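To ground these methods before moving to the quantum setting, here is a minimal FGSM sketch against a NumPy logistic-regression model; the weights and data are illustrative placeholders, not a trained system:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_gradient(x, y, w, b):
    """Gradient of the binary cross-entropy loss w.r.t. the input x
    for a logistic model p = sigmoid(w . x + b): dL/dx = (p - y) * w."""
    p = sigmoid(w @ x + b)
    return (p - y) * w

def fgsm(x, y, w, b, eps=0.1):
    """One FGSM step: move eps along the sign of the input gradient."""
    return x + eps * np.sign(input_gradient(x, y, w, b))

# Illustrative model and input.
w, b = np.array([1.5, -2.0, 0.5]), 0.1
x, y = np.array([0.2, 0.4, -0.3]), 1.0
x_adv = fgsm(x, y, w, b)
print("clean prob:", sigmoid(w @ x + b), "adversarial prob:", sigmoid(w @ x_adv + b))
```

PGD iterates this step with a projection back into an epsilon-ball around the input; CW instead solves an optimization problem that trades off perturbation size against misclassification confidence.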
5. Unique Vulnerabilities in Quantum Models
- Quantum data representations are fragile
- Superposition and entanglement introduce novel dependencies
- Limited observability (outputs are only sampled measurement statistics) complicates attack detection
6. Types of Adversarial Attacks in QML
- Input-level perturbations to quantum states
- Gate-level or parameter-level attacks on circuits
- Attacks on measurement process or shot-noise exploitation
7. Perturbation of Input States
- Modify amplitude- or angle-encoded states
- Small shifts in encoded features can lead to large output deviations
- Adversarial states may appear nearly identical yet yield different measurement statistics (see the sketch below)
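To make the bullets above concrete, here is a minimal PennyLane sketch (the circuit and values are illustrative, not drawn from the literature) showing how a small shift in angle-encoded features moves the measured score:

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def score(x):
    # Angle-encode two classical features as single-qubit rotations.
    qml.RY(x[0], wires=0)
    qml.RY(x[1], wires=1)
    qml.CNOT(wires=[0, 1])
    # Use one expectation value as an (untrained) decision score.
    return qml.expval(qml.PauliZ(1))

x = np.array([0.4, 1.1])
delta = np.array([0.05, -0.05])   # small structured perturbation
print("clean score:    ", score(x))
print("perturbed score:", score(x + delta))
```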
8. Parameter Perturbation Attacks
- Add noise to the trained gate parameters of a variational quantum circuit (VQC)
- Target high-sensitivity directions in the loss landscape (sketched below)
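A minimal sketch of a gradient-aligned parameter attack, assuming the attacker can read and perturb the trained weights (a white-box setting; all values are illustrative):

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def vqc(weights, x):
    qml.RY(x[0], wires=0)
    qml.RY(x[1], wires=1)
    qml.RX(weights[0], wires=0)
    qml.RX(weights[1], wires=1)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(1))

x = np.array([0.3, 0.8], requires_grad=False)
weights = np.array([0.7, -1.2])   # stand-in for trained parameters

# Align the perturbation with the parameter gradient to hit the
# most sensitive directions, rather than adding isotropic noise.
grad = qml.grad(vqc, argnum=0)(weights, x)
attacked = weights + 0.05 * np.sign(grad)
print("clean output:   ", vqc(weights, x))
print("attacked output:", vqc(attacked, x))
```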
9. Attacks on Quantum Feature Maps
- Exploit vulnerabilities in quantum kernels
- Manipulate classical inputs so that the mapped quantum states become nearly indistinguishable (see the sketch after this list)
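As an illustration, consider a single-qubit RY(x) angle-encoding feature map, whose fidelity kernel has the closed form k(x, x') = cos^2((x - x') / 2); the data values below are made up. Nearby classical inputs map to nearly identical quantum states, which an adversary can exploit to collapse class distinctions:

```python
import numpy as np

def fidelity_kernel(x1, x2):
    """|<phi(x1)|phi(x2)>|^2 for the single-qubit RY angle encoding."""
    return np.cos((x1 - x2) / 2.0) ** 2

x_class_a, x_class_b = 0.30, 2.50   # well-separated inputs
x_attacked = 2.45                   # class-a input nudged toward class b
print("separated kernel value:", fidelity_kernel(x_class_a, x_class_b))  # ~0.21
print("attacked kernel value: ", fidelity_kernel(x_attacked, x_class_b))  # ~1.0
```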
10. Fidelity-Based Adversarial Examples
- Constrain the perturbed quantum state to remain close to the original in fidelity
- Objective: maximize classification error while preserving this quantum closeness (formalized below)
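A common way to formalize this, following the fidelity-constrained formulation used in the QML adversarial literature (the symbols below are our notation), is:

```latex
\max_{\delta}\; \mathcal{L}\big(f(|\psi(x+\delta)\rangle),\, y\big)
\quad \text{subject to} \quad
F\big(|\psi(x)\rangle,\, |\psi(x+\delta)\rangle\big) \ge 1 - \epsilon,
```

where f is the quantum classifier, L its loss, F(psi, phi) = |<psi|phi>|^2 the state fidelity, and epsilon the closeness budget.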
11. White-Box vs Black-Box Attacks in QML
- White-box: attacker has full circuit and parameter access
- Black-box: only circuit outputs are accessible
- Query-based gradient estimation (e.g., the parameter-shift rule applied to encoding angles) enables gradient attacks even in black-box settings (sketched below)
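The practical consequence: with query access alone, an attacker can recover exact gradients of the encoding angles. A minimal sketch, where `query` is a stand-in for a remote model and the cosine model exists only to check the arithmetic:

```python
import numpy as np

def parameter_shift_grad(query, theta, shift=np.pi / 2):
    """Estimate d<O>/d(theta_i) from black-box queries alone; exact
    whenever each angle enters through a Pauli-rotation gate."""
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        plus, minus = theta.copy(), theta.copy()
        plus[i] += shift
        minus[i] -= shift
        grad[i] = (query(plus) - query(minus)) / 2.0
    return grad

# Stand-in remote model: <O>(theta) = sum_i cos(theta_i).
query = lambda t: np.cos(t).sum()
theta = np.array([0.1, 0.7])
print(parameter_shift_grad(query, theta))  # matches -sin(theta)
```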
12. Gradient-Based Attacks on VQCs
- Use parameter-shift gradients to compute adversarial directions
- As in FGSM, step the inputs or encoding angles along the sign of the loss gradient (sketched below)
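Putting the pieces together, a minimal FGSM-style attack on a toy VQC, using PennyLane's autodiff in place of explicit parameter shifts (the circuit and weights are illustrative and untrained):

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def circuit(x, weights):
    qml.AngleEmbedding(x, wires=[0, 1])
    qml.BasicEntanglerLayers(weights, wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

def loss(x, weights, y):
    # Squared error between the expectation value and the label.
    return (circuit(x, weights) - y) ** 2

weights = np.array([[0.4, 0.9]], requires_grad=False)  # untrained stand-in
x = np.array([0.2, -0.5], requires_grad=True)
y = 1.0

grad_x = qml.grad(loss, argnum=0)(x, weights, y)
x_adv = x + 0.1 * np.sign(grad_x)   # FGSM step on the encoding angles
print("clean loss:", loss(x, weights, y))
print("adv loss:  ", loss(x_adv, weights, y))
```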
13. Adversarial Examples for Quantum Classifiers
- Constructed by maximizing a hybrid quantum-classical loss with respect to the input
- Simulated using Qiskit or PennyLane
- Demonstrated on quantum-enhanced image or time series classifiers
14. Transferability of Adversarial Examples in QML
- Do examples crafted for one quantum model fool another?
- Transfer effects studied across kernel-based and variational circuits
15. Robustness of Quantum Kernels
- Some quantum kernels are more robust than others to small input perturbations
- Robustness can be analyzed via the sensitivity of the kernel (Gram) matrix eigenvalues (see the sketch below)
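A sketch of that eigenvalue-sensitivity analysis, reusing the closed-form single-qubit fidelity kernel from Section 9 (the dataset and noise scale are made up); by Weyl's inequality, every eigenvalue shift is bounded by the spectral norm of the kernel difference:

```python
import numpy as np

def kernel_matrix(xs):
    """Gram matrix of the single-qubit RY fidelity kernel."""
    d = xs[:, None] - xs[None, :]
    return np.cos(d / 2.0) ** 2

rng = np.random.default_rng(0)
xs = np.linspace(0.0, 2.0, 8)                 # illustrative 1-D dataset
xs_pert = xs + 0.02 * rng.standard_normal(8)  # small input perturbation

lam = np.linalg.eigvalsh(kernel_matrix(xs))
lam_pert = np.linalg.eigvalsh(kernel_matrix(xs_pert))
print("max eigenvalue shift:", np.max(np.abs(lam - lam_pert)))
print("spectral-norm bound: ",
      np.linalg.norm(kernel_matrix(xs_pert) - kernel_matrix(xs), 2))
```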
16. Defending Quantum Models Against Attacks
- Adversarial training (train on adversarial or noise-injected samples; sketched after this list)
- Gradient masking (limiting or obfuscating gradient access)
- Circuit randomization and dropout
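A minimal adversarial-training loop, shown on a classical logistic model to keep the sketch short (synthetic data; the same recipe applies to a VQC once its gradients come from parameter-shift evaluations):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Synthetic two-class data.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 2)), rng.normal(1.0, 1.0, (100, 2))])
y = np.r_[np.zeros(100), np.ones(100)]

w, b, lr, eps = np.zeros(2), 0.0, 0.5, 0.2
for _ in range(100):
    p = sigmoid(X @ w + b)
    # Inner FGSM step: perturb each input to increase its own loss.
    X_adv = X + eps * np.sign((p - y)[:, None] * w)
    p_adv = sigmoid(X_adv @ w + b)
    # Update on the adversarial batch (clean and adversarial
    # samples can also be mixed).
    w -= lr * X_adv.T @ (p_adv - y) / len(y)
    b -= lr * np.mean(p_adv - y)

p = sigmoid(X @ w + b)
X_adv = X + eps * np.sign((p - y)[:, None] * w)
print("robust accuracy:", np.mean((sigmoid(X_adv @ w + b) > 0.5) == (y == 1)))
```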
17. Quantum Regularization Techniques
- Add penalty terms to the loss function to control input sensitivity (one example form follows)
- Train with noise-injected circuits to improve generalization
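One common penalty of this kind (our notation; a gradient-norm regularizer is one of several options) takes the form:

```latex
\mathcal{L}_{\text{total}}(\theta)
  = \mathcal{L}_{\text{task}}(\theta)
  + \lambda \, \mathbb{E}_{x}\!\left[\, \big\lVert \nabla_{x} f_{\theta}(x) \big\rVert_{2}^{2} \,\right],
```

where lambda trades off accuracy against input sensitivity; for a VQC, the inner gradient can itself be obtained from parameter-shift evaluations of the encoding angles.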
18. Noise as a Double-Edged Sword
- May obscure gradients, making attacks harder
- Also destabilizes learning and increases variance
19. Open Problems and Research Challenges
- Formal adversarial bounds for QML models
- Scalable attack algorithms for large QPU systems
- Security standards for quantum AI applications
20. Conclusion
Adversarial attacks represent an emerging frontier in the security of quantum machine learning. As quantum AI systems mature, building robust, interpretable, and attack-resistant models will be vital to ensuring the reliability of quantum-enhanced decision-making.