A Mixture-of-Experts Framework for Practical Hybrid-Quantum Models in Credit Card Fraud Detection
A mixture-of-experts hybrid-quantum model achieves an average precision of 0.793 on credit card fraud detection, versus 0.770 for XGBoost, with only a modest increase in inference time.