pith. machine review for the scientific record.

arxiv: 2605.00011 · v1 · submitted 2026-03-11 · 💻 cs.LG · cs.AI · cs.DC

Recognition: no theorem link

FedACT: Concurrent Federated Intelligence across Heterogeneous Data Sources

Authors on Pith: no claims yet

Pith reviewed 2026-05-15 12:28 UTC · model grok-4.3

classification 💻 cs.LG cs.AI cs.DC

keywords federated learning · device scheduling · heterogeneous resources · concurrent jobs · job completion time · resource allocation · participation fairness

The pith

FedACT schedules heterogeneous devices across multiple concurrent federated learning jobs using alignment scores to minimize average job completion time.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces FedACT as a scheduling method for running several federated learning tasks simultaneously on shared devices with varying resources. It evaluates how well each device's available compute, memory, and bandwidth match a job's demands through an alignment score, then assigns devices accordingly while enforcing balanced participation across jobs. This dual focus on compatibility and fairness aims to reduce the time until all jobs finish their training rounds. Experiments on benchmark datasets report reductions in average job completion time of up to 8.3× and accuracy gains of up to 44.5% over prior approaches that treat jobs in isolation. If the method generalizes, shared device pools could support many learning tasks at once without one job starving the others of suitable hardware.

Core claim

FedACT formulates an optimal scheduling plan that prioritizes devices with higher alignment scores between their available resources and each job's resource demands, while participation-fairness constraints balance device contributions across concurrent FL jobs. The claimed result is lower average job completion time and higher global model accuracy than the baselines.
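
For orientation, here is the objective in a minimal formalization; the notation (J, C_j, x, F, ε) is shorthand introduced for this review, not symbols taken from the paper.

    % Average job completion time (JCT) over J concurrent FL jobs,
    % where C_j(x) is the wall-clock time at which job j finishes
    % under device-to-job assignment x.
    \mathrm{JCT}_{\mathrm{avg}}(x) = \frac{1}{J} \sum_{j=1}^{J} C_j(x)

    % Schematic form of the scheduling problem the claim describes:
    % minimize average JCT subject to a participation-fairness
    % constraint F on per-device contributions.
    \min_{x} \; \mathrm{JCT}_{\mathrm{avg}}(x)
    \quad \text{subject to} \quad F(x) \le \epsilon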

What carries the argument

The alignment scoring mechanism that evaluates compatibility between a device's available resources and a job's resource demands.
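
The paper's scoring formula (its Equation 3, per the rebuttal below) is not reproduced in this review, so the following is only a plausible sketch, assuming a min-ratio bottleneck measure over compute, memory, and bandwidth; the function name, resource keys, and capping rule are all hypothetical.

    def alignment_score(device: dict, job: dict) -> float:
        """Score how well a device's available resources cover a job's demands.

        Hypothetical min-ratio form: for each resource dimension, take the
        supply-to-demand ratio, capped at 1.0 so surplus capacity is not
        over-rewarded, then return the worst dimension as the bottleneck
        score. The paper's actual Equation 3 may differ.
        """
        ratios = []
        for key in ("compute", "memory", "bandwidth"):
            demand = job[key]
            if demand <= 0:  # job does not need this resource
                continue
            ratios.append(min(device[key] / demand, 1.0))
        return min(ratios) if ratios else 0.0

    # Example: ample compute but tight bandwidth makes bandwidth the bottleneck.
    device = {"compute": 8.0, "memory": 4.0, "bandwidth": 1.0}
    job = {"compute": 2.0, "memory": 2.0, "bandwidth": 2.0}
    print(alignment_score(device, job))  # 0.5

On this reading, a score of 1.0 means the device fully covers every demanded resource, and anything lower identifies the limiting resource, which is what would make score-ordered assignment meaningful.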

Load-bearing premise

The alignment scoring mechanism accurately captures compatibility between dynamic device resources and job demands, and enforcing participation fairness does not materially increase overall job completion time.

What would settle it

A controlled test with real heterogeneous devices running several simultaneous FL jobs that measures whether the reported reductions in average job completion time and accuracy gains persist against the same baselines.

Figures

Figures reproduced from arXiv: 2605.00011 by Isabelle G Chapman, Klara Nahrstedt, Li Chen, Md Sirajul Islam, Nian-Feng Tzeng, N I Md Ashafuddula, Xu Yuan.

Figure 1. An example of Multi-Job Federated Learning: smart […]

Figure 2. An overview of the training procedure within the […]

Figure 3. Test accuracy versus elapsed wall-clock time for different jobs in Group A with the IID distribution.

Figure 4. Test accuracy versus elapsed wall-clock time for different jobs in Group B with the IID distribution.

Figure 5. Test accuracy versus elapsed wall-clock time for different jobs in Group A with the Non-IID distribution.

Figure 6. Test accuracy versus elapsed wall-clock time for different jobs in Group B with the Non-IID distribution.

Figure 7. Elapsed wall-clock training time required for each job of Group A to achieve the target convergence accuracy under […]

Figure 8. Elapsed wall-clock training time required for each job of Group B to achieve the target convergence accuracy under […]
Original abstract

Federated Learning (FL) enables collaborative intelligence across decentralized data source devices in a privacy-preserving way. While substantial research attention has been drawn to optimizing the learning process for an individual task, real-world applications increasingly require multiple machine learning tasks simultaneously training their models across a shared pool of devices. Naively applying single-FL optimization techniques in multi-FL systems results in suboptimal system performance, particularly due to device heterogeneity and resource inefficiency. To address such a critical open challenge, we introduce FedACT, a novel resource heterogeneity-aware device scheduling approach designed to efficiently schedule heterogeneous devices across multiple concurrent FL jobs, with the goal of minimizing their average job completion time (JCT). FedACT dynamically assigns devices to FL jobs based on an alignment scoring mechanism that evaluates the compatibility between available resources of devices and resource demands of jobs. Additionally, it incorporates participation fairness to ensure balanced contributions from devices across jobs, further enhancing the accuracy levels of learned global models. An optimal scheduling plan is formulated in FedACT by prioritizing devices with higher alignment scores, while ensuring fair participation across jobs. To evaluate the effectiveness of the proposed scheduling algorithm, we carried out comprehensive experiments using diverse FL jobs and benchmark datasets. Experimental results demonstrate that FedACT reduces the average JCT by up to 8.3× and improves model accuracy by up to 44.5%, compared to the state-of-the-art baselines.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper introduces FedACT, a resource heterogeneity-aware device scheduling approach for concurrent federated learning jobs across heterogeneous devices. It uses an alignment scoring mechanism to evaluate compatibility between device resources and job demands, incorporates a participation fairness term, and formulates an optimal plan by prioritizing high-score devices. Experiments on diverse FL jobs and benchmark datasets claim reductions in average job completion time by up to 8.3× and model accuracy improvements by up to 44.5% versus state-of-the-art baselines.

Significance. If the results hold under rigorous verification, FedACT addresses a practical challenge in multi-task FL by improving efficiency and model quality in resource-heterogeneous settings. The combination of dynamic scoring and fairness is a useful systems contribution, though the absence of approximation guarantees for the greedy scheduler limits broader impact.

major comments (2)
  1. [Abstract] The abstract asserts large empirical gains (up to 8.3× JCT reduction and 44.5% accuracy improvement) but supplies no description of the alignment scoring formula, experimental setup, baseline implementations, statistical tests, or potential post-hoc choices. This omission is load-bearing because it prevents any assessment of whether the data actually support the central claims.
  2. [Scheduling algorithm] The method formulates an optimal plan but solves it via greedy selection ordered by alignment score plus fairness term, with no proof or approximation bound showing the result is within a constant factor of optimality. This is load-bearing for the JCT minimization claim, as greedy choice properties can fail under correlated job demands or high device churn, turning reported speedups into potentially setup-specific outcomes.
minor comments (2)
  1. [Abstract] Raw LaTeX commands such as {\em FedACT} appear in the text; these should be rendered as proper formatting in the final manuscript.
  2. [Method] Throughout: Provide explicit mathematical definitions or pseudocode for the alignment score and the fairness constraint to improve reproducibility and clarity; one possible form is sketched below.
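
To make the requested pseudocode concrete, here is a minimal sketch of the kind of greedy, fairness-adjusted assignment the report describes; the penalty form lam * participation[d], the per-job device quota, and all names are our assumptions, not the paper's algorithm.

    from collections import defaultdict

    def greedy_schedule(devices, jobs, score, participation, per_job=2, lam=0.5):
        """One round of greedy, fairness-adjusted device-to-job assignment.

        Ranks each free device for a job by alignment score minus a penalty
        proportional to the rounds the device has already served, so
        well-matched devices are preferred without being monopolized.
        A sketch under assumed definitions, not the paper's method.
        """
        assignment = {j: [] for j in jobs}
        free = set(devices)
        for j in jobs:
            for _ in range(per_job):
                if not free:
                    return assignment  # device pool exhausted this round
                best = max(free, key=lambda d: score(d, j) - lam * participation[d])
                assignment[j].append(best)
                participation[best] += 1
                free.remove(best)
        return assignment

    # Usage across rounds (device and job handles must be hashable): the
    # participation counter persists, so the penalty rotates devices over time.
    participation = defaultdict(int)
    # assignment = greedy_schedule(device_ids, job_ids, score_fn, participation)

Note that this greedy order is exactly where the approximation-bound concern in major comment 2 bites: nothing prevents an early high-score pick from blocking an assignment that would have finished all jobs sooner.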

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments on our manuscript. We address each major point below and indicate where revisions will be incorporated.

Point-by-point responses
  1. Referee: [Abstract] The abstract asserts large empirical gains (up to 8.3× JCT reduction and 44.5% accuracy improvement) but supplies no description of the alignment scoring formula, experimental setup, baseline implementations, statistical tests, or potential post-hoc choices. This omission is load-bearing because it prevents any assessment of whether the data actually support the central claims.

    Authors: We acknowledge that the abstract is high-level and omits key details on the alignment scoring formula, experimental setup, baselines, and statistical procedures. These elements are fully described in Sections 3.2 (alignment scoring in Equation 3), 4.1 (setup, datasets, and baseline implementations), and 4.2 (results with averages over 5 independent runs and standard deviations). To address the concern directly, we will revise the abstract to include a concise sentence on the alignment scoring mechanism and fairness term while noting that full experimental details appear in the body. This is a partial revision due to abstract length limits. revision: partial

  2. Referee: [Scheduling algorithm] The method formulates an optimal plan but solves it via greedy selection ordered by alignment score plus fairness term, with no proof or approximation bound showing the result is within a constant factor of optimality. This is load-bearing for the JCT minimization claim, as greedy choice properties can fail under correlated job demands or high device churn, turning reported speedups into potentially setup-specific outcomes.

    Authors: The referee is correct that we formulate the scheduling objective as an optimization problem but solve it via greedy selection on alignment scores plus the fairness term, without providing an approximation guarantee. The underlying assignment problem is NP-hard under general resource constraints and dynamic arrivals, which precludes a simple constant-factor proof. We will revise Section 3.3 to explicitly label the scheduler as a greedy heuristic, discuss its potential limitations under correlated demands or high churn, and add new experiments simulating those conditions to demonstrate that the reported gains remain consistent. This strengthens the presentation without overstating optimality. revision: partial
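
The promised stress test need not be elaborate; a hypothetical generator like the one below captures the two named conditions, with a shared load factor rho driving correlated demands and a drop probability emulating churn. Every name and parameter here is an assumption about what such a simulation could look like, not the authors' planned setup.

    import random

    def correlated_demands(n_jobs, base=2.0, rho=0.9):
        """Draw per-job resource demands that share a common load factor.

        With rho near 1, all jobs spike on the same resources at once --
        the regime where greedy, locally optimal picks can conflict.
        """
        common = random.gauss(0, 1)
        demands = []
        for _ in range(n_jobs):
            shock = rho * common + (1 - rho) * random.gauss(0, 1)
            demands.append({k: max(0.1, base + shock)
                            for k in ("compute", "memory", "bandwidth")})
        return demands

    def churn(devices, p_drop=0.2):
        """Randomly drop devices between rounds to emulate availability churn."""
        return [d for d in devices if random.random() > p_drop]

Sweeping rho and p_drop while tracking average JCT against an exact baseline on small instances would show directly where, if anywhere, the greedy heuristic degrades.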

Circularity Check

0 steps flagged

No significant circularity detected in derivation chain

Full rationale

The paper introduces FedACT as a new algorithmic scheduling method based on an alignment scoring mechanism for device-job compatibility plus a fairness term. No equations, fitted parameters, or derivation steps appear in the provided text that reduce by construction to prior inputs, self-citations, or renamed empirical patterns. The central claims rest on experimental comparisons rather than a closed mathematical loop; the greedy prioritization of an 'optimal plan' is presented as a constructive heuristic without self-referential definitions or load-bearing self-citations that would force the reported JCT gains. This is a standard case of an independent algorithmic proposal evaluated empirically.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

Review performed on abstract only; the central claim rests on an unspecified alignment scoring function and an unstated fairness constraint whose precise definitions and parameterizations are not provided.

axioms (2)
  • domain assumption: Device resource availability can be meaningfully scored for compatibility with job resource demands.
    Invoked by the alignment scoring mechanism described in the abstract.
  • domain assumption: Enforcing balanced device participation across jobs improves, or at least does not harm, global model accuracy.
    Stated as part of the design goal in the abstract.

pith-pipeline@v0.9.0 · 5582 in / 1285 out tokens · 36824 ms · 2026-05-15T12:28:20.837817+00:00 · methodology


Reference graph

Works this paper leans on

50 extracted references · 50 canonical work pages · 4 internal anchors

  1. [1]

    EU. 2018. European Union’s General Data Protection Regulation (GDPR). European Union. Accessed 2024-04. [Online]. Available: https://eugdpr.org/

  2. [2]

    Communication-efficient learning of deep networks from decentralized data

    B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-efficient learning of deep networks from decentralized data,” in Artificial intelligence and statistics. PMLR, 2017, pp. 1273–1282

  3. [4]

    Falcon: Addressing stragglers in heterogeneous parameter server via multiple parallelism

    Q. Zhou, S. Guo, H. Lu, L. Li, M. Guo, Y. Sun, and K. Wang, “Falcon: Addressing stragglers in heterogeneous parameter server via multiple parallelism,” IEEE Transactions on Computers, vol. 70, no. 1, pp. 139–155, 2020

  4. [5]

    Clusterfl: a similarity-aware federated learning system for human activity recognition

    X. Ouyang, Z. Xie, J. Zhou, J. Huang, and G. Xing, “Clusterfl: a similarity-aware federated learning system for human activity recognition,” in Proceedings of the 19th Annual International Conference on Mobile Systems, Applications, and Services, 2021, pp. 54–66

  5. [6]

    Federated Learning for Mobile Keyboard Prediction

    A. Hard, K. Rao, R. Mathews, S. Ramaswamy, F. Beaufays, S. Augenstein, H. Eichner, C. Kiddon, and D. Ramage, “Federated learning for mobile keyboard prediction,” arXiv preprint arXiv:1811.03604, 2018

  6. [7]

    Oort: Efficient federated learning via guided participant selection

    F. Lai, X. Zhu, H. V. Madhyastha, and M. Chowdhury, “Oort: Efficient federated learning via guided participant selection,” in OSDI, 2021, pp. 19–35

  7. [8]

    Pyramidfl: A fine-grained client selection framework for efficient federated learning

    C. Li, X. Zeng, M. Zhang, and Z. Cao, “Pyramidfl: A fine-grained client selection framework for efficient federated learning,” in Proceedings of the 28th annual international conference on mobile computing and networking, 2022, pp. 158–171

  8. [9]

    Fedfair^3: Unlocking threefold fairness in federated learning

    S. Javaherian, S. Panta, S. Williams, M. S. Islam, and L. Chen, “Fedfair^3: Unlocking threefold fairness in federated learning,” in Proceedings of IEEE International Conference on Communications (ICC), pp. 1–7, 2024

  9. [10]

    Fedclust: Tackling data heterogeneity in federated learning through weight-driven client clustering

    M. S. Islam, S. Javaherian, F. Xu, X. Yuan, L. Chen, and N.-F. Tzeng, “Fedclust: Tackling data heterogeneity in federated learning through weight-driven client clustering,” in Proceedings of the 53rd International Conference on Parallel Processing, 2024, pp. 474–483

  10. [11]

    Communication-efficient federated learning via knowledge distillation

    C. Wu, F. Wu, L. Lyu, Y. Huang, and X. Xie, “Communication-efficient federated learning via knowledge distillation,” Nature communications, vol. 13, no. 1, p. 2032, 2022

  11. [12]

    Fedboost: A communication-efficient algorithm for federated learning

    J. Hamer, M. Mohri, and A. T. Suresh, “Fedboost: A communication-efficient algorithm for federated learning,” in International Conference on Machine Learning. PMLR, 2020, pp. 3973–3983

  12. [13]

    Ditto: Fair and robust federated learning through personalization

    T. Li, S. Hu, A. Beirami, and V. Smith, “Ditto: Fair and robust federated learning through personalization,” in International Conference on Machine Learning. PMLR, 2021, pp. 6357–6368

  13. [14]

    Pgfed: Personalize each client’s global objective for federated learning

    J. Luo, M. Mendieta, C. Chen, and S. Wu, “Pgfed: Personalize each client’s global objective for federated learning,” International Conference on Computer Vision, 2023

  14. [15]

    Fedala: Adaptive local aggregation for personalized federated learning

    J. Zhang, Y. Hua, H. Wang, T. Song, Z. Xue, R. Ma, and H. Guan, “Fedala: Adaptive local aggregation for personalized federated learning,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, no. 9, 2023, pp. 11237–11244

  15. [16]

    Asynchronous federated optimization

    C. Xie, S. Koyejo, and I. Gupta, “Asynchronous federated optimization,” arXiv preprint arXiv:1903.03934, 2019

  16. [17]

    Federated learning with buffered asynchronous aggregation

    J. Nguyen, K. Malik, H. Zhan, A. Yousefpour, M. Rabbat, M. Malek, and D. Huba, “Federated learning with buffered asynchronous aggregation,” in International Conference on Artificial Intelligence and Statistics. PMLR, 2022, pp. 3581–3607

  17. [18]

    Fedcore: Straggler-free federated learning with distributed coresets

    H. Guo, H. Gu, X. Wang, B. Chen, E. K. Lee, T. Eilam, D. Chen, and K. Nahrstedt, “Fedcore: Straggler-free federated learning with distributed coresets,” in ICC 2024-IEEE International Conference on Communications. IEEE, 2024, pp. 280–286

  18. [19]

    Seafl: Enhancing efficiency in semi-asynchronous federated learning through adaptive aggregation and selective training

    M. S. Islam, S. Panta, F. Xu, X. Yuan, L. Chen, and N.-F. Tzeng, “Seafl: Enhancing efficiency in semi-asynchronous federated learning through adaptive aggregation and selective training,” in 2025 IEEE International Parallel and Distributed Processing Symposium (IPDPS), 2025, pp. 509–519

  19. [20]

    Efficient device scheduling with multi-job federated learning

    C. Zhou, J. Liu, J. Jia, J. Zhou, Y. Zhou, H. Dai, and D. Dou, “Efficient device scheduling with multi-job federated learning,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, no. 9, 2022, pp. 9971–9979

  20. [21]

    Fedast: federated asynchronous simultaneous training

    B. Askin, P. Sharma, C. Joe-Wong, and G. Joshi, “Fedast: federated asynchronous simultaneous training,” arXiv preprint arXiv:2406.00302, 2024

  21. [22]

    Learning multiple layers of features from tiny images

    A. Krizhevsky, G. Hinton et al., “Learning multiple layers of features from tiny images,” 2009

  22. [23]

    Gradient-based learning applied to document recognition

    Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998

  23. [24]

    Emnist: Extending mnist to handwritten letters

    G. Cohen, S. Afshar, J. Tapson, and A. Van Schaik, “Emnist: Extending mnist to handwritten letters,” in 2017 international joint conference on neural networks (IJCNN). IEEE, 2017, pp. 2921–2926

  24. [25]

    Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms

    H. Xiao, K. Rasul, and R. Vollgraf, “Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms,” arXiv preprint arXiv:1708.07747, 2017

  25. [26]

    Federated learning: Challenges, methods, and future directions

    T. Li, A. K. Sahu, A. Talwalkar, and V. Smith, “Federated learning: Challenges, methods, and future directions,” IEEE signal processing magazine, vol. 37, no. 3, pp. 50–60, 2020

  26. [27]

    Scaffold: Stochastic controlled averaging for federated learning

    S. P. Karimireddy, S. Kale, M. Mohri, S. Reddi, S. Stich, and A. T. Suresh, “Scaffold: Stochastic controlled averaging for federated learning,” in International Conference on Machine Learning. PMLR, 2020, pp. 5132–5143

  27. [28]

    Federated optimization in heterogeneous networks

    T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith, “Federated optimization in heterogeneous networks,” Proceedings of Machine learning and systems, vol. 2, pp. 429–450, 2020

  28. [29]

    Public-key cryptosystems based on composite degree residuosity classes

    P. Paillier, “Public-key cryptosystems based on composite degree residuosity classes,” in International conference on the theory and applications of cryptographic techniques. Springer, 1999, pp. 223–238

  29. [30]

    Practical Secure Aggregation for Federated Learning on User-Held Data

    K. Bonawitz, V. Ivanov, B. Kreuter, A. Marcedone, H. B. McMahan, S. Patel, D. Ramage, A. Segal, and K. Seth, “Practical secure aggregation for federated learning on user-held data,” arXiv preprint arXiv:1611.04482, 2016

  30. [31]

    Differential privacy: A survey of results

    C. Dwork, “Differential privacy: A survey of results,” in International conference on theory and applications of models of computation. Springer, 2008, pp. 1–19

  31. [32]

    Joint device scheduling and resource allocation for latency constrained wireless federated learning

    W. Shi, S. Zhou, Z. Niu, M. Jiang, and L. Geng, “Joint device scheduling and resource allocation for latency constrained wireless federated learning,” IEEE Transactions on Wireless Communications, vol. 20, no. 1, pp. 453–467, 2020

  32. [33]

    Speeding up distributed machine learning using codes

    K. Lee, M. Lam, R. Pedarsani, D. Papailiopoulos, and K. Ramchandran, “Speeding up distributed machine learning using codes,” IEEE Transactions on Information Theory, vol. 64, no. 3, pp. 1514–1529, 2017

  33. [34]

    Load distribution fairness in p2p data management systems

    T. Pitoura and P. Triantafillou, “Load distribution fairness in p2p data management systems,” in 2007 IEEE 23rd International Conference on Data Engineering. IEEE, 2006, pp. 396–405

  34. [35]

    “Fairness analysis” in requirements assignments

    A. Finkelstein, M. Harman, S. A. Mansouri, J. Ren, and Y. Zhang, ““Fairness analysis” in requirements assignments,” in 2008 16th IEEE International Requirements Engineering Conference. IEEE, 2008, pp. 115–124

  35. [36]

    A multi-agent q-learning-based framework for achieving fairness in http adaptive streaming

    S. Petrangeli, M. Claeys, S. Latré, J. Famaey, and F. De Turck, “A multi-agent q-learning-based framework for achieving fairness in http adaptive streaming,” in 2014 IEEE Network Operations and Management Symposium (NOMS). IEEE, 2014, pp. 1–9

  36. [37]

    A survey on bias and fairness in machine learning

    N. Mehrabi, F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan, “A survey on bias and fairness in machine learning,” ACM computing surveys (CSUR), vol. 54, no. 6, pp. 1–35, 2021

  37. [38]

    Very Deep Convolutional Networks for Large-Scale Image Recognition

    K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014

  38. [39]

    Imagenet classification with deep convolutional neural networks

    A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” Advances in neural information processing systems, vol. 25, 2012

  39. [40]

    Deep residual learning for image recognition

    K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778

  40. [41]

    Cellular traffic load prediction with lstm and gaussian process regression

    W. Wang, C. Zhou, H. He, W. Wu, W. Zhuang, and X. Shen, “Cellular traffic load prediction with lstm and gaussian process regression,” in ICC 2020-2020 IEEE international conference on communications (ICC). IEEE, 2020, pp. 1–6

  41. [42]

    Scheduling algorithms for efficient execution of stream workflow applications in multicloud environments

    M. Barika, S. Garg, A. Chan, and R. N. Calheiros, “Scheduling algorithms for efficient execution of stream workflow applications in multicloud environments,” IEEE transactions on services computing, vol. 15, no. 2, pp. 860–875, 2019

  42. [43]

    Towards federated learning at scale: System design

    K. Bonawitz, H. Eichner, W. Grieskamp, D. Huba, A. Ingerman, V. Ivanov, C. Kiddon, J. Konečný, S. Mazzocchi, B. McMahan et al., “Towards federated learning at scale: System design,” Proceedings of machine learning and systems, vol. 1, p...

  43. [44]

    Fedvision: An online visual object detection platform powered by federated learning

    Y. Liu, A. Huang, Y. Luo, H. Huang, Y. Liu, Y. Chen, L. Feng, T. Chen, H. Yu, and Q. Yang, “Fedvision: An online visual object detection platform powered by federated learning,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 08, 2020, pp. 13172–13179

  44. [45]

    To talk or to work: Flexible communication compression for energy efficient federated learning over heterogeneous mobile edge devices

    L. Li, D. Shi, R. Hou, H. Li, M. Pan, and Z. Han, “To talk or to work: Flexible communication compression for energy efficient federated learning over heterogeneous mobile edge devices,” in IEEE INFOCOM 2021-IEEE Conference on Computer Communications. IEEE, 2021, pp. 1–10

  45. [46]

    Enabling privacy-preserving incentives for mobile crowd sensing systems

    H. Jin, L. Su, B. Ding, K. Nahrstedt, and N. Borisov, “Enabling privacy-preserving incentives for mobile crowd sensing systems,” in 2016 IEEE 36th International Conference on Distributed Computing Systems (ICDCS). IEEE, 2016, pp. 344–353

  46. [47]

    Multi-job intelligent scheduling with cross-device federated learning

    J. Liu, J. Jia, B. Ma, C. Zhou, J. Zhou, Y. Zhou, H. Dai, and D. Dou, “Multi-job intelligent scheduling with cross-device federated learning,” IEEE Transactions on Parallel and Distributed Systems, vol. 34, no. 2, pp. 535–551, 2022

  47. [48]

    Joint participant selection and learning scheduling for multi-model federated edge learning

    X. Wei, J. Liu, and Y. Wang, “Joint participant selection and learning scheduling for multi-model federated edge learning,” in 2022 IEEE 19th International Conference on Mobile Ad Hoc and Smart Systems (MASS). IEEE, 2022, pp. 537–545

  48. [49]

    Multi-model federated learning

    N. Bhuyan and S. Moharir, “Multi-model federated learning,” in 2022 14th International Conference on COMmunication Systems & NETworkS (COMSNETS). IEEE, 2022, pp. 779–783

  49. [50]

    Fair training of multiple federated learning models on resource constrained network devices

    M. Siew, S. Arunasalam, Y. Ruan, Z. Zhu, L. Su, S. Ioannidis, E. Yeh, and C. Joe-Wong, “Fair training of multiple federated learning models on resource constrained network devices,” in Proceedings of the 22nd International Conference on Information Processing in Sensor Networks, 2023, pp. 330–331

  50. [51]

    Venn: Resource management across federated learning jobs

    J. Liu, F. Lai, D. Ding, Y. Zhang, and M. Chowdhury, “Venn: Resource management across federated learning jobs,” arXiv preprint arXiv:2312.08298, 2023