Private federated learning on vertically partitioned data via entity resolution and additively homomorphic encryption
5 Pith papers cite this work.
Citing papers
- Private Vertical Federated Inference for Time-Series
  PPHH-VFL splits the model head into a plaintext public part secured by adversarial training and a small private part evaluated under MPC, yielding up to six orders of magnitude faster inference than end-to-end MPC on models with up to 86M parameters (a sketch of the split follows the list).
- X-NegoBox: An Explainable Privacy-Budget Negotiation Framework for Secure Peer-to-Peer Energy Data Exchange
  X-NegoBox is a proposed explainable framework that negotiates privacy budgets for energy data exchange from trust, sensitivity, and purpose factors; its experiments report reduced leakage and higher acceptance rates (a budget-scoring sketch follows the list).
- FedProxy: Federated Fine-Tuning of LLMs via Proxy SLMs and Heterogeneity-Aware Fusion
  FedProxy replaces weak adapters with a proxy SLM for federated LLM fine-tuning, outperforming prior methods and approaching centralized performance through compression, heterogeneity-aware aggregation, and training-free fusion (a weighted-aggregation sketch follows the list).
- DDP-SA: Scalable Privacy-Preserving Federated Learning via Distributed Differential Privacy and Secure Aggregation
  DDP-SA combines client-side Laplace noise with full-threshold additive secret sharing so that the server can reconstruct only the aggregated noisy gradient, never an individual client's update (sketched after the list).
- Split and Aggregation Learning for Foundation Models Over Mobile Embodied AI Network (MEAN): A Comprehensive Survey
  The paper surveys split and aggregation learning for foundation models in 6G networks, aiming to improve efficiency, resource use, and data privacy in distributed AI.
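
For the PPHH-VFL split-head design, here is a minimal sketch of the core idea, assuming a linear-plus-ReLU public head and a linear private head. The MPC evaluation is only simulated with two-party additive secret sharing over floats, and the adversarial-training defense on the public part is not reproduced; all dimensions and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a 64-d feature vector, a public head mapping 64 -> 16
# in plaintext, and a small private head mapping 16 -> 4.
x = rng.normal(size=64)
W_pub = rng.normal(size=(16, 64))   # public head: runs in the clear
W_priv = rng.normal(size=(4, 16))   # private head: evaluated "under MPC"

# 1) Public part runs in plaintext (fast; hardened by adversarial
#    training in the paper, which this sketch does not reproduce).
h = np.maximum(W_pub @ x, 0.0)      # ReLU activation

# 2) Private part: simulate two-party additive secret sharing of h.
#    Each party holds one random-looking share; h = share0 + share1.
share0 = rng.normal(size=h.shape)
share1 = h - share0

# A linear layer commutes with additive sharing, so each party applies
# W_priv to its share locally; summing the results reconstructs only
# the final output, never h itself.
y0, y1 = W_priv @ share0, W_priv @ share1
y = y0 + y1

assert np.allclose(y, W_priv @ h)   # matches plaintext evaluation
```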
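How X-NegoBox actually combines trust, sensitivity, and purpose is not specified here, so the following is a purely illustrative scoring rule mapping the three factors to a differential-privacy budget; `negotiate_epsilon`, its linear form, and all constants are assumptions, not the paper's method.

```python
def negotiate_epsilon(trust: float, sensitivity: float, purpose_weight: float,
                      eps_min: float = 0.1, eps_max: float = 2.0) -> float:
    """Map trust in the requester (0..1), data sensitivity (0..1), and a
    purpose weight (0..1, e.g. billing vs. marketing) to a DP budget.

    Higher trust and a more legitimate purpose earn a larger epsilon
    (less noise); higher sensitivity pulls the budget back down.
    The linear form and constants are illustrative assumptions.
    """
    score = trust * purpose_weight * (1.0 - sensitivity)
    return eps_min + (eps_max - eps_min) * score

# A trusted utility requesting data for billing vs. an unknown third party:
print(negotiate_epsilon(trust=0.9, sensitivity=0.3, purpose_weight=1.0))  # ~1.297
print(negotiate_epsilon(trust=0.2, sensitivity=0.8, purpose_weight=0.4))  # ~0.130
```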
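"Heterogeneity-aware aggregation" admits many designs; the sketch below shows one plausible reading: FedAvg-style data-size weighting, damped by each client's divergence from the weighted mean update. The weighting rule is an assumption for illustration, not FedProxy's published method.

```python
import numpy as np

def heterogeneity_aware_aggregate(updates, num_samples):
    """Aggregate per-client updates (one plausible reading): start from
    FedAvg's data-size weights, then down-weight clients whose proxy-SLM
    updates diverge strongly from the weighted mean (outlier clients)."""
    updates = np.stack(updates)                     # (clients, params)
    w = np.asarray(num_samples, dtype=float)
    w /= w.sum()                                    # FedAvg weights
    mean = w @ updates                              # weighted mean update
    dist = np.linalg.norm(updates - mean, axis=1)   # per-client divergence
    w *= 1.0 / (1.0 + dist)                         # damp heterogeneous clients
    w /= w.sum()
    return w @ updates

rng = np.random.default_rng(1)
updates = [rng.normal(size=8) for _ in range(4)]
print(heterogeneity_aware_aggregate(updates, num_samples=[100, 400, 250, 50]))
```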
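The DDP-SA summary does pin down a concrete pipeline: clip and perturb each gradient with Laplace noise on the client, then additively secret-share the noisy gradient across all parties (full threshold, so every share is needed), letting the server recover only the sum. A minimal NumPy simulation, with illustrative constants:

```python
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 4, 6
eps, clip = 1.0, 1.0                     # DP budget and L1 clipping bound

# Each client clips its gradient and adds Laplace noise locally
# (distributed DP: noise is applied before anything leaves the client).
grads = [rng.normal(size=dim) for _ in range(n_clients)]
noisy = []
for g in grads:
    g = g / max(1.0, np.abs(g).sum() / clip)             # L1 clip
    noisy.append(g + rng.laplace(scale=clip / eps, size=dim))

# Full-threshold additive secret sharing: client i splits its noisy
# gradient into n_clients shares that sum to it; share j goes to peer j.
shares = np.empty((n_clients, n_clients, dim))           # [owner, holder, :]
for i, v in enumerate(noisy):
    parts = rng.normal(size=(n_clients - 1, dim))        # random masks
    shares[i, :-1] = parts
    shares[i, -1] = v - parts.sum(axis=0)                # last share completes v

# Each holder sums the shares it received (one per owner) and sends only
# that partial sum to the server; any single partial sum looks random.
partials = shares.sum(axis=0)                            # sum over owners

# The server adds all partial sums and recovers only the aggregate of
# the noisy gradients -- never any individual client's update.
aggregate = partials.sum(axis=0)
assert np.allclose(aggregate, np.sum(noisy, axis=0))
```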