Byzantine-resilient distributed learning: Towards optimal statistical rates
1 Pith paper cites this work. Polarity classification is still indexing.

Fields: cs.LG · Years: 2026 (1) · Verdicts: UNVERDICTED (1)

Representative citing paper:
Convergence of Byzantine-Resilient Gradient Tracking via Probabilistic Edge Dropout
GT-PD achieves linear convergence to a variance-determined neighborhood in Byzantine settings by clipping messages and using dual-metric probabilistic dropout to preserve doubly stochastic mixing; GT-PD-L adds leaky integration for partial isolation.
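Only this one-line summary of GT-PD is available here, so the following is a purely illustrative sketch, not the paper's actual algorithm. All names (`dropout_mixing`, `gt_pd_step`), the symmetric drop-to-diagonal dropout scheme, and the parameters `alpha`, `tau`, `p` are assumptions; the sketch only shows how clipped messages and symmetric edge dropout can be combined with a gradient-tracking recursion while keeping the mixing matrix doubly stochastic:

```python
import numpy as np

def dropout_mixing(W, p, rng):
    """Zero each edge (i, j) symmetrically with probability p, moving its
    weight onto both diagonals (a hypothetical scheme). Starting from a
    symmetric doubly stochastic W, row and column sums stay exactly 1."""
    Wd = W.copy()
    n = Wd.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            if Wd[i, j] > 0 and rng.random() < p:
                Wd[i, i] += Wd[i, j]
                Wd[j, j] += Wd[i, j]
                Wd[i, j] = Wd[j, i] = 0.0
    return Wd

def clip(v, tau):
    """Norm-clip a message to radius tau (bounds Byzantine influence)."""
    nv = np.linalg.norm(v)
    return v if nv <= tau else v * (tau / nv)

def gt_pd_step(x, y, grads_new, grads_old, W, alpha, tau, p, rng):
    """One gradient-tracking iteration on clipped messages over the
    dropout-perturbed mixing matrix. x, y: (n_agents, dim) arrays of
    iterates and gradient trackers."""
    Wd = dropout_mixing(W, p, rng)
    xc = np.stack([clip(xi, tau) for xi in x])
    yc = np.stack([clip(yi, tau) for yi in y])
    x_next = Wd @ xc - alpha * y              # consensus + descent along tracker
    y_next = Wd @ yc + grads_new - grads_old  # gradient-tracking recursion
    return x_next, y_next
```

The drop-to-diagonal rule is one simple way to preserve double stochasticity under random edge removal; the leaky-integration variant (GT-PD-L) mentioned in the summary would presumably replace the hard dropout with a convex combination of old and new mixing weights, but that detail is not recoverable from this page.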