Table of Contents
- Project Overview
- Objective and Problem Definition
- Tools and Environment Setup
- Dataset Selection and Preprocessing
- Feature Encoding Strategy
- Quantum Circuit Design
- Model Architecture (Hybrid Classical-Quantum)
- Training and Optimization Loop
- Evaluation Metrics and Validation Strategy
- Running on Real Hardware (Optional)
- Integration with API or UI
- Logging and Experiment Tracking
- Model Deployment Strategy
- Security and Access Management
- Benchmarking Against Classical Models
- Scalability and Performance Tuning
- Documentation and Codebase Structure
- Final Presentation and Demo Plan
- Submission Checklist
- Conclusion
1. Project Overview
This capstone involves designing, training, evaluating, and deploying a fully functional Quantum AI model for a real-world task such as classification, recommendation, or regression using Qiskit or PennyLane.
2. Objective and Problem Definition
- Choose a problem domain: e.g., sentiment analysis, fraud detection
- Define success metrics: accuracy, AUC, latency
- State hypothesis: How will QML improve or complement classical ML?
3. Tools and Environment Setup
- PennyLane / Qiskit / TensorFlow Quantum
- Python 3.9+, Jupyter Notebook, VS Code
- Cloud backends: IBM Quantum, Amazon Braket (optional)
4. Dataset Selection and Preprocessing
- Use UCI, Kaggle, or custom dataset
- Normalize, reduce dimensions to 2–8 features
- Split into train/validation/test sets
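The steps above can be sketched with plain NumPy (the synthetic data, the [0, π] scaling range, and the 60/20/20 split are illustrative assumptions; a real pipeline would load an actual UCI/Kaggle table and use PCA for the dimensionality reduction):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Toy stand-in for a real dataset: 100 samples, 8 raw features.
X = rng.normal(size=(100, 8))
y = rng.integers(0, 2, size=100)

# Min-max normalize each feature into [0, pi] so values fit rotation angles.
X_min, X_max = X.min(axis=0), X.max(axis=0)
X_scaled = np.pi * (X - X_min) / (X_max - X_min)

# Keep the first 4 features (a PCA step would go here in a real pipeline).
X_small = X_scaled[:, :4]

# 60/20/20 train/validation/test split.
n = len(X_small)
i_train, i_val = int(0.6 * n), int(0.8 * n)
X_train, X_val, X_test = X_small[:i_train], X_small[i_train:i_val], X_small[i_val:]
y_train, y_val, y_test = y[:i_train], y[i_train:i_val], y[i_val:]
```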
5. Feature Encoding Strategy
- Angle Encoding
- Amplitude Encoding
- Basis Encoding
- Data re-uploading if needed
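Of the strategies above, angle encoding is the simplest to write out by hand. A minimal NumPy sketch (one feature per qubit, using the fact that RY(x)|0⟩ = cos(x/2)|0⟩ + sin(x/2)|1⟩; amplitude encoding would instead normalize the feature vector directly into the state amplitudes):

```python
import numpy as np

def angle_encode(x):
    """Angle-encode one feature per qubit via RY(x) applied to |0>.

    The full register state is the tensor product of the single-qubit
    states cos(x_i/2)|0> + sin(x_i/2)|1>.
    """
    state = np.array([1.0])
    for xi in x:
        qubit = np.array([np.cos(xi / 2), np.sin(xi / 2)])
        state = np.kron(state, qubit)
    return state

features = np.array([0.3, 1.2])   # two normalized features
psi = angle_encode(features)      # 4-amplitude state for 2 qubits
```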
6. Quantum Circuit Design
- Feature map: ZZFeatureMap, ZFeatureMap, PauliFeatureMap
- Ansatz: TwoLocal, RealAmplitudes, EfficientSU2 (hardware-efficient ansätze)
- Design for shallow circuits and noise resilience
7. Model Architecture (Hybrid Classical-Quantum)
- Classical frontend (dense or CNN)
- Quantum core for decision making
- Classical post-processing for final activation/output
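The three-stage flow above can be illustrated end to end with a toy NumPy forward pass, where the "quantum core" is stood in for by its analytic expectation value ⟨Z⟩ = cos(θ·z) for a single RY rotation (in a real model this call would dispatch to a simulator or QPU):

```python
import numpy as np

def quantum_core(z, theta):
    """Toy 1-qubit quantum core: <Z> after RY(theta*z) on |0>, i.e. cos(theta*z).
    Stands in for a real circuit evaluation."""
    return np.cos(theta * z)

def hybrid_forward(x, W, theta):
    z = np.tanh(W @ x)                 # classical dense frontend
    q = quantum_core(z.mean(), theta)  # quantum core on the pooled activation
    return 1.0 / (1.0 + np.exp(-q))    # classical sigmoid post-processing

x = np.array([0.5, -0.2, 0.1, 0.9])
W = np.eye(4)
p = hybrid_forward(x, W, theta=1.0)    # probability-like output in (0, 1)
```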
8. Training and Optimization Loop
- Optimizers: SPSA, COBYLA, Adam, RMSProp
- Loss function: MSE, cross-entropy
- Use the parameter-shift rule for analytic gradients of circuit parameters
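The parameter-shift rule can be verified on the simplest case: for an observable whose expectation is cos(θ) (one RY rotation measured in Z), two evaluations shifted by ±π/2 recover the exact derivative −sin(θ), with no finite-difference error:

```python
import numpy as np

def expval(theta):
    """Expectation <Z> for RY(theta)|0>: analytically cos(theta)."""
    return np.cos(theta)

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    """Parameter-shift rule: exact gradient from two shifted circuit evaluations."""
    return (f(theta + shift) - f(theta - shift)) / 2

theta = 0.7
grad = parameter_shift_grad(expval, theta)   # equals -sin(0.7) exactly
```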
9. Evaluation Metrics and Validation Strategy
- Accuracy, F1 score, precision, recall
- Use k-fold cross-validation
- Compare with classical SVM/MLP
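The listed metrics are straightforward to compute from a confusion-matrix count; a self-contained NumPy version (in practice sklearn.metrics does the same job):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 for binary labels."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    accuracy = np.mean(y_true == y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 1, 1])
m = binary_metrics(y_true, y_pred)
```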
10. Running on Real Hardware (Optional)
- IBM Quantum (formerly IBM Q Experience): use small backends such as ibmq_quito
- Amazon Braket: IonQ or Rigetti
- Submit transpiled circuits with shot configuration
11. Integration with API or UI
- REST API with FastAPI or Flask
- Optional web frontend using Streamlit or React
- Expose a /predict endpoint for inference
12. Logging and Experiment Tracking
- Use MLflow or Weights & Biases
- Track parameters, metrics, hardware used, and versions
13. Model Deployment Strategy
- Containerize with Docker
- Deploy on Heroku, AWS Lambda, or GCP Cloud Run
- Set up CI/CD using GitHub Actions
14. Security and Access Management
- Protect API with tokens or OAuth
- Use env files to store quantum backend credentials
- Rate-limit usage if public
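Reading credentials from the environment keeps them out of the codebase; a small sketch (the variable name is an illustrative assumption, and the dummy value is set only so the snippet runs standalone; in practice a .env file plus python-dotenv would populate the environment):

```python
import os

def load_backend_token(var="QISKIT_IBM_TOKEN"):
    """Read a quantum-backend credential from the environment instead of
    hard-coding it in source or notebooks."""
    token = os.environ.get(var)
    if token is None:
        raise RuntimeError(f"Set the {var} environment variable (e.g. via a .env file)")
    return token

os.environ["QISKIT_IBM_TOKEN"] = "dummy-token-for-demo"  # demo only
token = load_backend_token()
```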
15. Benchmarking Against Classical Models
- Train equivalent classical models (e.g., SVM, RandomForest)
- Compare performance and latency
- Discuss quantum speedups or tradeoffs
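For the latency half of the comparison, a simple wall-clock harness is enough to put both models on the same footing (the two lambdas are stand-ins for real classical and quantum-simulator inference calls):

```python
import time

def time_inference(fn, inputs, repeats=100):
    """Rough latency benchmark: mean wall-clock seconds per prediction."""
    start = time.perf_counter()
    for _ in range(repeats):
        for x in inputs:
            fn(x)
    return (time.perf_counter() - start) / (repeats * len(inputs))

classical = lambda x: x * 2        # stand-in for e.g. an SVM predict call
quantum_sim = lambda x: x * 2      # stand-in for a simulated circuit call

t_classical = time_inference(classical, [0.1, 0.2, 0.3])
t_quantum = time_inference(quantum_sim, [0.1, 0.2, 0.3])
```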
16. Scalability and Performance Tuning
- Minimize number of qubits
- Batch predictions
- Use cache for encoded inputs
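Caching encoded inputs is a one-decorator change with the standard library, provided the features are passed in a hashable form such as a tuple (the cosine mapping below is just a placeholder for the real encoding step):

```python
from functools import lru_cache
import math

@lru_cache(maxsize=1024)
def encode_features(features):
    """Cache the (potentially expensive) encoding of a feature tuple."""
    return tuple(math.cos(x / 2) for x in features)

a = encode_features((0.3, 1.2))
b = encode_features((0.3, 1.2))   # second call is served from the cache
hits = encode_features.cache_info().hits
```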
17. Documentation and Codebase Structure
- Organize the repository: README.md, requirements.txt, notebooks/, src/, api/, data/
- Include usage examples and training logs
18. Final Presentation and Demo Plan
- Present model architecture and rationale
- Show live or recorded demo of inference
- Visualize circuit and accuracy graphs
19. Submission Checklist
- [ ] Code repository (GitHub/Bitbucket)
- [ ] Final report (PDF or markdown)
- [ ] Trained model (simulated or QPU)
- [ ] Video demo or slide deck
- [ ] Deployment link (if any)
20. Conclusion
This capstone provides hands-on experience building and deploying a quantum AI model using modern hybrid techniques. By the end, students will demonstrate proficiency in QML pipeline design, circuit optimization, API deployment, and cross-model evaluation.