pith. machine review for the scientific record.

arxiv: 2503.16251 · v2 · submitted 2025-03-20 · 💻 cs.LG · cs.CV · cs.DC · cs.ET

Recognition: unknown

RESFL: An Uncertainty-Aware Framework for Responsible Federated Learning by Balancing Privacy, Fairness and Utility

Authors on Pith: no claims yet
classification 💻 cs.LG · cs.CV · cs.DC · cs.ET
keywords privacy · resfl · adversarial · aggregation · fairness · learning · across · autonomous
0 comments
read the original abstract

Federated Learning (FL) has gained prominence in machine learning applications across critical domains by enabling collaborative model training without centralized data aggregation. However, FL frameworks that protect privacy often sacrifice fairness and reliability. Differential privacy can reduce data leakage, but it may also obscure sensitive attributes needed for bias correction, thereby worsening performance gaps across demographic groups. This work studies the privacy-fairness trade-off in FL-based object detection and introduces RESFL, an integrated framework that jointly improves both objectives. RESFL combines adversarial privacy disentanglement with uncertainty-guided fairness-aware aggregation. The adversarial component uses a gradient reversal layer to suppress sensitive attribute information, reducing privacy risks while preserving fairness-relevant structure. The uncertainty-aware aggregation component uses an evidential neural network to adaptively weight client updates, prioritizing contributions with lower fairness disparities and higher confidence. This produces robust and equitable FL model updates. Experiments in high-stakes autonomous vehicle settings show that RESFL achieves high mAP on FACET and CARLA, reduces membership-inference attack success by 37%, reduces the equality-of-opportunity gap by 17% relative to the FedAvg baseline, and maintains stronger adversarial robustness. Although evaluated in autonomous driving, RESFL is domain-agnostic and readily transfers to other application domains.
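The two mechanisms the abstract names can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the class name `GradientReversal`, the parameter `lam`, and the function `fairness_uncertainty_weighted_avg` are assumed names, and the scalar "uncertainty" here stands in for the confidence an evidential neural network would actually produce in RESFL.

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; scales the gradient by -lam in the
    backward pass, so the shared encoder learns features that confuse a
    sensitive-attribute adversary instead of helping it."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # reversed gradient reaches the encoder


def fairness_uncertainty_weighted_avg(updates, disparities, uncertainties, eps=1e-8):
    """FedAvg-style aggregation that up-weights clients reporting lower
    fairness disparity and lower predictive uncertainty.
    Returns (weighted average update, normalized weights)."""
    scores = 1.0 / (np.asarray(disparities) + np.asarray(uncertainties) + eps)
    weights = scores / scores.sum()
    return np.average(np.stack(updates), axis=0, weights=weights), weights


# Toy round: client 0 is fairer and more confident, so it dominates.
grl = GradientReversal(lam=0.5)
avg, w = fairness_uncertainty_weighted_avg(
    updates=[np.array([1.0, 1.0]), np.array([0.0, 0.0])],
    disparities=[0.1, 0.5],
    uncertainties=[0.1, 0.5],
)
```

In this toy round client 0 receives weight 5/6 and client 1 weight 1/6, so the aggregated update is pulled toward the fairer, more confident contribution, which is the qualitative behavior the abstract claims for the evidential aggregation component.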

This paper has not been read by Pith yet.

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Toward Individual Fairness Without Centralized Data: Selective Counterfactual Consistency for Vertical Federated Learning

    cs.CY 2026-05 unverdicted novelty 7.0

    SCC-VFL reduces individual decision flip rates by up to 98% in vertical federated learning while preserving accuracy through differentially private feature role discovery and selective counterfactual consistency enforcement.