pith. machine review for the scientific record.

arxiv: 2604.24824 · v2 · submitted 2026-04-27 · 💻 cs.LG

Recognition: no theorem link

Negative Ontology of True Target for Machine Learning: Towards Evaluation and Learning under Democratic Supervision

Yongquan Yang


Pith reviewed 2026-05-12 03:13 UTC · model grok-4.3

classification 💻 cs.LG
keywords: true target · negative ontology · democratic supervision · machine learning · multiple inaccurate true targets · evaluation framework · predictive modeling

The pith

Machine learning should replace the assumption of one objective true target with multiple inaccurate targets under democratic supervision.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper examines how the assumption that a true target exists or does not exist shapes machine learning methods for prediction. It adopts the position that no objective true target exists in the real world and uses that stance to define democratic supervision. Multiple inaccurate true targets serve as the concrete way to put democratic supervision into practice at the level of individual examples. From those targets the author derives rules for creating and judging them, a way to evaluate models against them, and a learning approach that treats the target as undefinable, then assembles these pieces into the EL-MIATTs framework. A demonstration in an education setting shows how the framework can guide practical predictive modeling.

Core claim

Grounded in the non-existence of an objective true target, democratic supervision for machine learning is defined and realized at the instance level through multiple inaccurate true targets; principles for their logic-driven generation and assessment, a logical assessment formulation for evaluation, and undefinable true target learning for model training are derived, yielding the EL-MIATTs framework for predictive modeling.

What carries the argument

Multiple Inaccurate True Targets (MIATTs), the instance-level mechanism that carries democratic supervision by supplying several approximate targets in place of any single objective one.
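One way to picture this mechanism concretely, under the simplifying assumption (made only for this sketch) that each target can be reduced to a finite set of "semantic facts" about the underlying true target, is a direct check of the two conditions the paper's excerpts place on a MIATTs set: each target is a proper subset of the true target's facts, and their union stays within them.

```python
def is_valid_miatts(miatts, true_facts):
    """Illustrative check of the two excerpted MIATTs conditions,
    treating SF(.) as a plain finite set of semantic facts
    (a simplification made for this sketch, not the paper's formalism).

    Partial representation: every SF(t_n*) is a proper subset of SF(t*).
    Collective coverage: the union of all SF(t_n*) is contained in SF(t*),
    with equality allowed.
    """
    if not miatts:
        return False
    partial = all(t < true_facts for t in miatts)  # proper subsets only
    union = set().union(*miatts)
    return partial and union <= true_facts

# Hypothetical facts, loosely echoing the bicycle-lane case study
true_facts = {"lane surface", "lane marking", "curb boundary"}
annotator_targets = [{"lane surface"}, {"lane marking", "curb boundary"}]
print(is_valid_miatts(annotator_targets, true_facts))  # prints True
```

Note that a single target equal to the full fact set fails the check: partial representation requires each target to be strictly incomplete on its own.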

If this is right

  • Logic-driven principles govern the generation and assessment of multiple inaccurate true targets.
  • Evaluation proceeds via a logical assessment formulation that operates on those targets.
  • Learning proceeds through undefinable true target learning that does not require a single fixed target.
  • The resulting EL-MIATTs framework supports predictive modeling that aligns with democratic supervision.
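As a hedged sketch of what the last two bullets could look like operationally (an equal-weight average is a placeholder choice here, not the paper's logical assessment formulation or its undefinable true target learning), one can score a single prediction against every available inaccurate target for an instance rather than against one fixed label:

```python
import math

def multi_target_loss(probs, targets):
    """Average cross-entropy of one predicted distribution against
    several candidate one-hot targets for the same instance.
    Equal weighting across targets is a placeholder choice for this sketch.
    """
    losses = []
    for target in targets:
        # cross-entropy of this candidate target vs. the shared prediction
        losses.append(-sum(t * math.log(max(p, 1e-12))
                           for t, p in zip(target, probs)))
    return sum(losses) / len(losses)

# One instance, three annotators: two vote class 0, one votes class 1.
disagreeing = [[1, 0], [1, 0], [0, 1]]
print(round(multi_target_loss([0.7, 0.3], disagreeing), 4))  # prints 0.6391
```

Under this average, a prediction that agrees with the majority of targets scores lower than one that agrees with the minority, which is one crude reading of supervision "voted" at the instance level.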

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Practitioners might systematically collect several human annotations per example rather than forcing a single consensus label.
  • The approach could extend naturally to tasks where outcomes are inherently contested, such as risk scoring or content labeling.
  • Models trained under this framework may prove more stable when tested on new data that carries similar ambiguity.

Load-bearing premise

No objective true target exists for machine learning tasks in the real world.

What would settle it

The discovery of even one predictive task in which a single target label can be fixed objectively and verified without reference to any human judgment or modeling choice would falsify the non-existence premise.

Original abstract

This article philosophically examines how shifts in assumptions regarding the existence and non-existence of the true target (TT) give rise to new perspectives and insights for machine learning (ML)-based predictive modeling and, correspondingly, proposes a knowledge system for evaluation and learning under Democratic Supervision. By systematically analysing the existence assumption of the TT in current mainstream ML paradigms, we explicitly adopt a negative ontology perspective, positing that the TT does not objectively exist in the real world, and, grounded in this non-existence assumption, define Democratic Supervision for ML. We further present Multiple Inaccurate True Targets (MIATTs) as an instance-level realization of Democratic Supervision. Building upon MIATTs, we derive principles for the logic-driven generation and assessment of MIATTs, a logical assessment formulation for evaluation with MIATTs, and undefinable true target learning for learning with MIATTs. Based on these components, we establish the evaluation and learning with MIATTs (EL-MIATTs) framework for ML-based predictive modelling. A real-world application demonstrates the potential of the proposed EL-MIATTs framework in supporting education and professional development for individuals, aligning with prior discussions of Democratic Supervision in the fields of education and professional development.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated author's rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

1 major / 2 minor

Summary. The paper philosophically examines shifts in assumptions about the existence of a true target (TT) in ML-based predictive modeling. Adopting a negative ontology that the TT does not objectively exist in the real world, it defines Democratic Supervision for ML, presents Multiple Inaccurate True Targets (MIATTs) as an instance-level realization, derives principles for logic-driven generation and assessment of MIATTs along with a logical assessment formulation and undefinable true target learning, and assembles these into the EL-MIATTs framework for evaluation and learning. A real-world application in education and professional development is used to illustrate the framework.

Significance. If coherent and adopted, the work offers a conceptual reframing that could inform ML applications involving subjective or contested labels, such as in education, by prioritizing democratic processes over objective TT assumptions. It explicitly builds on prior discussions of Democratic Supervision in education and professional development. However, the absence of mathematical formalizations, algorithms, empirical results, or comparisons to existing methods (e.g., crowdsourced labeling or ensemble techniques) restricts its technical significance within core ML research.

major comments (1)
  1. Abstract: The central claims of the EL-MIATTs framework, including the derived principles for MIATTs generation/assessment and undefinable true target learning, are presented as following directly from the non-existence assumption of the TT. This renders the construction circular, as each component (Democratic Supervision, MIATTs, EL-MIATTs) is defined in terms of the input premise without independent grounding, validation, or a concrete test that could falsify the framework.
minor comments (2)
  1. The manuscript would benefit from explicit comparisons to related concepts in ML such as crowdsourcing, weak supervision, or multi-label learning to clarify novelty and avoid overlap.
  2. Given the conceptual focus, adding at least one worked example with concrete MIATTs instances and how the logical assessment formulation applies would improve accessibility for technical readers.

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for their review and for highlighting this important point about the logical structure of our framework. We respond to the major comment below.

Point-by-point responses
  1. Referee: Abstract: The central claims of the EL-MIATTs framework, including the derived principles for MIATTs generation/assessment and undefinable true target learning, are presented as following directly from the non-existence assumption of the TT. This renders the construction circular, as each component (Democratic Supervision, MIATTs, EL-MIATTs) is defined in terms of the input premise without independent grounding, validation, or a concrete test that could falsify the framework.

    Authors: We respectfully disagree that the construction is circular. The negative ontology is adopted as an explicit foundational premise after systematic analysis of mainstream ML assumptions; Democratic Supervision is then defined as a direct consequence of this premise. MIATTs are introduced as a distinct instance-level operationalization, and the principles for logic-driven generation/assessment, the logical assessment formulation, and undefinable true target learning are derived through further step-by-step logical reasoning rather than by redefinition. The real-world application in education and professional development provides an illustrative grounding that aligns with prior literature on Democratic Supervision in those fields. Nevertheless, we acknowledge that the abstract could more clearly separate the premise from the subsequent derivations and will revise it to emphasize the logical progression and the illustrative (rather than falsifying) role of the application. revision: partial

Circularity Check

0 steps flagged

No significant circularity identified

Full rationale

The paper is explicitly philosophical and definitional: it begins with an explicit premise (negative ontology positing non-existence of an objective true target), then defines Democratic Supervision, MIATTs, and the EL-MIATTs framework as grounded realizations of that premise. No equations, algorithms, fitted parameters, or technical derivations appear in the provided text that could reduce by construction to the inputs. The structure is a standard conceptual development from stated assumptions rather than a self-referential loop or renamed fit; validity is framed as coherence and usefulness, not falsifiable technical steps.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 3 invented entities

The contribution consists almost entirely of new definitions and a framework constructed from a single foundational assumption without external benchmarks or evidence.

axioms (1)
  • domain assumption The true target (TT) does not objectively exist in the real world
    This negative ontology assumption is explicitly adopted as the grounding for defining Democratic Supervision and all subsequent components.
invented entities (3)
  • Democratic Supervision no independent evidence
    purpose: Supervision paradigm for ML based on non-existence of TT
    Newly defined concept to replace traditional supervision under the ontology shift.
  • Multiple Inaccurate True Targets (MIATTs) no independent evidence
    purpose: Instance-level realization of Democratic Supervision
    Invented operationalization of the supervision idea.
  • EL-MIATTs framework no independent evidence
    purpose: Integrated system for evaluation and learning with MIATTs
    Proposed overarching framework built from the new concepts.

pith-pipeline@v0.9.0 · 5513 in / 1543 out tokens · 68668 ms · 2026-05-12T03:13:18.135035+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

91 extracted references · 91 canonical work pages · 2 internal anchors

  1. [1]

    a more radical stance

Introduction The true target (TT), which is a computationally equivalent transformation of the ground-truth, serves as a fundamental concept in the formulation and deployment of ML paradigms [1]. Assumptions regarding the TT are therefore crucial, as they implicitly define what is being learned, how supervision is interpreted, and how models are expected...

  2. [2]

    solution implementation

Machine Learning The objective of machine learning (ML) is to construct a predictive model with data collected for a specific prediction task based on efficient computing resources [45, 46]. This section introduces fundamental terminologies in ML, clarifies the interrelations among them, and discusses corresponding implications in shaping higher-level meth...

  3. [3]

    Existence Assumptions of True Target in Current Mainstream Machine Learning Paradigms Prior works [9, 13] have systematically examined the existence assumptions about TT underlying current major evaluation and learning paradigms. The evaluation paradigms considered include those based on accurate true targets (ATTs) [47–51] and those based on inaccurate t...

  4. [4]

    Democratic Supervision

Explicitly Posited Non-Existence Assumption of True Target and Defined Democratic Supervision for Machine Learning We explicitly posit, in Assumption 1, the non-existence of the true target for ML from the perspective of negative ontology [9]. Assumption 1 (Negative Ontology of True Target for ML): The true target does not objectively exist in the real w...

  5. [5]

    This section presents a component for operationalizing Democratic Supervision at the instance level through Multiple Inaccurate True Targets (MIATTs)

Presented Multiple Inaccurate True Targets as an Instance-Level Realization of Democratic Supervision Grounded in the non-existence assumption of TT, Democratic Supervision enables a more inclusive research landscape, thereby extending ML research toward evaluation and learning under such a paradigm. This section presents a component for operationaliz...

  6. [6]

Partial representation: SF(t_n*) ⊂ SF(t*), i.e., each t_n* encodes only a subset of the underlying true target's semantic facts

  7. [7]

In other words, no single t_n* fully specifies t*, but together the MIATTs set captures one or more of its essential aspects

Collective coverage: ⋃_{n=1}^{N} SF(t_n*) ⊆ SF(t*), with the possibility that ⋃_{n=1}^{N} SF(t_n*) = SF(t*). In other words, no single t_n* fully specifies t*, but together the MIATTs set captures one or more of its essential aspects. Building on this foundation, MIATTs is an instance-level realization of Democratic Supervision, grounding the abstract paradigm in a con...

  8. [8]

Proposed EL-MIATTs: Evaluation and Learning with Multiple Inaccurate True Targets Building upon MIATTs, in this section, we propose the EL-MIATTs framework for evaluation and learning with MIATTs [43]. The framework is grounded in logic-driven MIATTs generation and assessment [42], logical assessment formula (LAF) for evaluation with MIATTs [13], and unde...

  9. [9]

In this application, we treated ourselves as the non-expert at identifying bicycle lane in street images (i.e

    Conducted Application of EL-MIATTs for Supporting Education and Professional Development for Individuals Based on prior works [9, 13, 42, 43, 79], EL-MIATTs has been applied in bicycle lane segmentation task [44]. In this application, we treated ourselves as the non-expert at identifying bicycle lane in street images (i.e. assuming the TT of bicycle lan...

  10. [10]

From the negative ontology perspective, we explicitly posited that the TT does not objectively exist in the real world for ML

    Conclusion In this article, philosophically examining the shifts in assumptions regarding the existence and non-existence of the TT, we have shown that relaxing the existence assumption of the TT to the non-existence assumption gives rise to a fundamentally different understanding of supervision. From the negative ontology perspective, we explicitly pos...

  11. [11]

    Moderately supervised learning: definition, framework and generality

    Yang Y. Moderately supervised learning: definition, framework and generality. Artif Intell Rev. 2024;57:37. https://doi.org/10.1007/s10462-023-10654-6

  12. [12]

    Crowdsourcing as a Model for Problem Solving: An Introduction and Cases

    Brabham DC. Crowdsourcing as a Model for Problem Solving: An Introduction and Cases. Convergence: The International Journal of Research into New Media Technologies. 2008;14:75–90. https://doi.org/10.1177/1354856507084420

  13. [13]

    Learning from crowds

    Raykar VC, Yu S, Zhao LH, Valadez GH, Florin C, Bogoni L, et al. Learning from crowds. Journal of Machine Learning Research. 2010;11

  14. [14]

    Deep Learning from Crowds

    Rodrigues F, Pereira F. Deep Learning from Crowds. AAAI. 2018;32. https://doi.org/10.1609/aaai.v32i1.11506

  15. [16]

    Deep Learning From Multiple Noisy Annotators as A Union

Wei H, Xie R, Feng L, Han B, An B. Deep Learning From Multiple Noisy Annotators as A Union. IEEE Trans Neural Netw Learning Syst. 2023;34:10552–62. https://doi.org/10.1109/TNNLS.2022.3168696

  16. [17]

    Learning with noisy labels

    Natarajan N, Dhillon IS, Ravikumar PK, Tewari A. Learning with noisy labels. Advances in Neural Information Processing Systems. 2013;26

  17. [18]

    Learning From Noisy Labels With Deep Neural Networks: A Survey

Song H, Kim M, Park D, Shin Y, Lee J-G. Learning From Noisy Labels With Deep Neural Networks: A Survey. IEEE Trans Neural Netw Learning Syst. 2023;34:8135–53. https://doi.org/10.1109/TNNLS.2022.3152527

  18. [19]

    Undefinable True Target Learning: Towards Learning with Democratic Supervision

    Yang Y. Undefinable True Target Learning: Towards Learning with Democratic Supervision. 2025. https://doi.org/10.32388/KBK3P8.5

  19. [20]

    Detecting helicobacter pylori in whole slide images via weakly supervised multi-task learning

    Yang Y, Yang Y, Yuan Y, Zheng J, Zhongxi Z. Detecting helicobacter pylori in whole slide images via weakly supervised multi-task learning. Multimed Tools Appl. 2020;79:26787–

  20. [21]

    https://doi.org/10.1007/s11042-020-09185-x

  21. [22]

    Handling noisy labels via one-step abductive multi-target learning and its application to helicobacter pylori segmentation

    Yang Y, Yang Y, Chen J, Zheng J, Zheng Z. Handling noisy labels via one-step abductive multi-target learning and its application to helicobacter pylori segmentation. Multimed Tools Appl. 2024. https://doi.org/10.1007/s11042-023-17743-2

  22. [23]

    One-step abductive multi-target learning with diverse noisy samples and its application to tumour segmentation for breast cancer

    Yang Y, Li F, Wei Y, Chen J, Chen N, Alobaidi MH, et al. One-step abductive multi-target learning with diverse noisy samples and its application to tumour segmentation for breast cancer. Expert Systems with Applications. 2024;251:123923. https://doi.org/10.1016/j.eswa.2024.123923

  23. [24]

    Logical assessment formula and its principles for evaluations with inaccurate ground-truth labels

    Yang Y. Logical assessment formula and its principles for evaluations with inaccurate ground-truth labels. Knowl Inf Syst. 2024. https://doi.org/10.1007/s10115-023-02047-6

  24. [25]

    Validation of the practicability of logical assessment formula for evaluations with inaccurate ground-truth labels: An application study on tumour segmentation for breast cancer

    Yang Y, Bu H. Validation of the practicability of logical assessment formula for evaluations with inaccurate ground-truth labels: An application study on tumour segmentation for breast cancer. Comput Artif Intell. 2024;2:1443. https://doi.org/10.59400/cai.v2i2.1443

  25. [26]

    Learn2Agree: Fitting with Multiple Annotators Without Objective Ground Truth

    Wang C, Gao Y, Fan C, Hu J, Lam TL, Lane ND, et al. Learn2Agree: Fitting with Multiple Annotators Without Objective Ground Truth. In: Chen H, Luo L, editors. Trustworthy Machine Learning for Healthcare, vol. 13932. Cham: Springer Nature Switzerland; 2023. pp. 147–62. https://doi.org/10.1007/978-3-031-39539-0_13

  26. [27]

    Learning from multiple annotators for medical image segmentation

    Zhang L, Tanno R, Xu M, Huang Y, Bronik K, Jin C, et al. Learning from multiple annotators for medical image segmentation. Pattern Recognition. 2023;138:109400. https://doi.org/10.1016/j.patcog.2023.109400

  27. [28]

    Capturing Perspectives of Crowdsourced Annotators in Subjective Learning Tasks

    Mokhberian N, Marmarelis MG, Hopp FR, Basile V, Morstatter F, Lerman K. Capturing Perspectives of Crowdsourced Annotators in Subjective Learning Tasks. 2023. https://doi.org/10.48550/ARXIV.2311.09743

  28. [29]

    Beyond confusion matrix: learning from multiple annotators with awareness of instance features

Li J, Sun H, Li J. Beyond confusion matrix: learning from multiple annotators with awareness of instance features. Mach Learn. 2023;112:1053–75. https://doi.org/10.1007/s10994-022-06211-x

  29. [30]

    Learning From Crowdsourced Noisy Labels: A signal processing perspective

Ibrahim S, Traganitis PA, Fu X, Giannakis GB. Learning From Crowdsourced Noisy Labels: A signal processing perspective. IEEE Signal Process Mag. 2025;42:84–106. https://doi.org/10.1109/MSP.2025.3572636

  30. [31]

Cheap and fast–but is it good? Evaluating non-expert annotations for natural language tasks

    Snow R, O'Connor B, Jurafsky D, Ng AY. Cheap and fast–but is it good? Evaluating non-expert annotations for natural language tasks. Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing. 2008. pp. 254–63

  31. [32]

    Evaluating Crowdsourcing Participants in the Absence of Ground-Truth

    Subramanian R, Rosales R, Fung G, Dy J. Evaluating Crowdsourcing Participants in the Absence of Ground-Truth. 2016. https://doi.org/10.48550/ARXIV.1605.09432

  32. [33]

    Crowdsourcing in the Absence of Ground Truth–A Case Study

Srinivasan R, Chander A. Crowdsourcing in the Absence of Ground Truth–A Case Study. arXiv preprint arXiv:1906.07254. 2019

  33. [34]

    The multidimensional wisdom of crowds

    Welinder P, Branson S, Perona P, Belongie S. The multidimensional wisdom of crowds. Advances in Neural Information Processing Systems. 2010;23

  34. [35]

    Maximum Likelihood Estimation of Observer Error-Rates Using the EM Algorithm

    Dawid AP, Skene AM. Maximum Likelihood Estimation of Observer Error-Rates Using the EM Algorithm. Applied Statistics. 1979;28:20. https://doi.org/10.2307/2346806

  35. [36]

    Whose vote should count more: Optimal integration of labels from labelers of unknown expertise

    Whitehill J, Wu T, Bergsma J, Movellan J, Ruvolo P. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. Advances in Neural Information Processing Systems. 2009;22

  36. [37]

    Making deep neural networks robust to label noise: A loss correction approach

    Patrini G, Rozza A, Krishna Menon A, Nock R, Qu L. Making deep neural networks robust to label noise: A loss correction approach. Proceedings of the IEEE conference on computer vision and pattern recognition. 2017. pp. 1944–52

  37. [38]

    Learning from noisy examples

Angluin D, Laird P. Learning from noisy examples. Mach Learn. 1988;2:343–70. https://doi.org/10.1007/BF00116829

  38. [39]

    Training deep neural networks on noisy labels with bootstrapping

Reed S, Lee H, Anguelov D, Szegedy C, Erhan D, Rabinovich A. Training deep neural networks on noisy labels with bootstrapping. arXiv preprint arXiv:1412.6596. 2014

  39. [40]

    Co-teaching: Robust training of deep neural networks with extremely noisy labels

    Han B, Yao Q, Yu X, Niu G, Xu M, Hu W, et al. Co-teaching: Robust training of deep neural networks with extremely noisy labels. Advances in Neural Information Processing Systems. 2018;31

  40. [41]

    EchoAlign: Bridging Generative and Discriminative Learning under Noisy Labels

    Zheng Y, Han Z, Yin Y, Gao X, Liu T. Can We Treat Noisy Labels as Accurate? 2024. https://doi.org/10.48550/ARXIV.2405.12969

  41. [42]

    The’Problem’of Human Label Variation: On Ground Truth in Data, Modeling and Evaluation

    Plank B. The’Problem’of Human Label Variation: On Ground Truth in Data, Modeling and Evaluation. arXiv Preprint arXiv:221102570. 2022

  42. [43]

Classification in the Presence of Label Noise: A Survey

    Frenay B, Verleysen M. Classification in the Presence of Label Noise: A Survey. IEEE Trans Neural Netw Learning Syst. 2014;25:845–69. https://doi.org/10.1109/TNNLS.2013.2292894

  43. [44]

    A Framework for Cluster and Classifier Evaluation in the Absence of Reference Labels

Joyce RJ, Raff E, Nicholas C. A Framework for Cluster and Classifier Evaluation in the Absence of Reference Labels. Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security. New York, NY, USA: ACM; 2021. pp. 73–84. https://doi.org/10.1145/3474369.3486867

  44. [45]

    Simultaneous truth and performance level estimation (STAPLE): An algorithm for the validation of image segmentation

    Warfield SK, Zou KH, Wells WM. Simultaneous truth and performance level estimation (STAPLE): An algorithm for the validation of image segmentation. IEEE Transactions on Medical Imaging. 2004. https://doi.org/10.1109/TMI.2004.828354

  45. [46]

    Two Methods for Validating Brain Tissue Classifiers

    Martin-Fernandez M, Bouix S, Ungar L, McCarley RW, Shenton ME. Two Methods for Validating Brain Tissue Classifiers. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 2005. pp. 515–22. https://doi.org/10.1007/11566465_64

  46. [47]

    On evaluating brain tissue classifiers without a ground truth

    Bouix S, Martin-Fernandez M, Ungar L, Nakamura M, Koo MS, McCarley RW, et al. On evaluating brain tissue classifiers without a ground truth. NeuroImage. 2007. https://doi.org/10.1016/j.neuroimage.2007.04.031

  47. [48]

    Standards of democratic supervision

    Waite D. Standards of democratic supervision. Standards for instructional supervision. Routledge; 2020. pp. 33–48

  48. [49]

    Democratic supervision

    Lyons AF. Democratic supervision. The High School Journal. 1957;41:22–4

  49. [50]

    Democratic Supervision and Creative Supervision: Are They Possible Misnomers?

    Helwig C. Democratic Supervision and Creative Supervision: Are They Possible Misnomers?. 1968

  50. [51]

    Some Suggestions for a Program of Democratic Supervision

    Thayer V. Some Suggestions for a Program of Democratic Supervision. Educational Research Bulletin. 1927;177–82

  51. [52]

    Professional Development through Democratic Supervision

    Jones NB. Professional Development through Democratic Supervision. 1995

  52. [53]

Bridging Theory and Practice in Implementing EL-MIATTs: Logic-Driven Algorithms for MIATTs Generation and Assessment

    Yang Y. Bridging Theory and Practice in Implementing EL-MIATTs: Logic-Driven Algorithms for MIATTs Generation and Assessment. 2025. https://doi.org/10.32388/0UD1AN

  53. [54]

    EL-MIATTs: Evaluation and Learning with Multiple Inaccurate True Targets

    Yang Y. EL-MIATTs: Evaluation and Learning with Multiple Inaccurate True Targets. 2026. https://doi.org/10.32388/UMHEFG.4

  54. [55]

    From Theory to Practice: A Case Study on EL-MIATTs Framework for Bicycle Lane Segmentation in Street Images

    Yang Y. From Theory to Practice: A Case Study on EL-MIATTs Framework for Bicycle Lane Segmentation in Street Images. Qeios. 2025. https://doi.org/10.32388/EZWLSN

  55. [56]

Machine learning research: Four current directions

    Dietterich TG. Machine learning research: Four current directions. AI Magazine. 1997;18(4):97–136

  56. [57]

    Machine learning: Trends, perspectives, and prospects

    Jordan MI, Mitchell TM. Machine learning: Trends, perspectives, and prospects. Science. 2015;349:255–60. https://doi.org/10.1126/science.aaa8415

  57. [58]

    Evaluating Classifiers Without Expert Labels

    Jung HJ, Lease M. Evaluating Classifiers Without Expert Labels. 2012. https://doi.org/10.48550/arxiv.1212.0960

  58. [59]

    Are Labels Always Necessary for Classifier Accuracy Evaluation? Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

    Deng W, Zheng L. Are Labels Always Necessary for Classifier Accuracy Evaluation? Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2021. pp. 15069–78

  59. [60]

    Performance measure characterization for evaluating neuroimage segmentation algorithms

    Chang HH, Zhuang AH, Valentino DJ, Chu WC. Performance measure characterization for evaluating neuroimage segmentation algorithms. NeuroImage. 2009. https://doi.org/10.1016/j.neuroimage.2009.03.068

  60. [61]

    Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool

Taha AA, Hanbury A. Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool. BMC Medical Imaging. 2015;15:29. https://doi.org/10.1186/s12880-015-0068-x

  61. [62]

    A Review on Evaluation Metrics for Data Classification Evaluations

Hossin M, Sulaiman MN. A Review on Evaluation Metrics for Data Classification Evaluations. International Journal of Data Mining & Knowledge Management Process. 2015;5:01–11. https://doi.org/10.5121/ijdkp.2015.5201

  62. [63]

    Statistical Learning Theory: Models, Concepts, and Results

Luxburg UV, Schölkopf B. Statistical Learning Theory: Models, Concepts, and Results. Handbook of the History of Logic, vol. 10. Elsevier; 2011. pp. 651–706. https://doi.org/10.1016/B978-0-444-52936-7.50016-1

  63. [64]

    Unsupervised Learning

James G, Witten D, Hastie T, Tibshirani R, Taylor J. Unsupervised Learning. An Introduction to Statistical Learning. Cham: Springer International Publishing; 2023. pp. 503–56. https://doi.org/10.1007/978-3-031-38747-0_12

  64. [65]

    Unsupervised learning: foundations of neural computation

    Hinton G, Sejnowski TJ. Unsupervised learning: foundations of neural computation. MIT press; 1999

  65. [66]

    Supervised Learning

    Cunningham P, Cord M, Delany SJ. Supervised Learning. In: Cord M, Cunningham P, editors. Machine Learning Techniques for Multimedia. Berlin, Heidelberg: Springer Berlin Heidelberg; 2008. pp. 21–49. https://doi.org/10.1007/978-3-540-75171-7_2

  66. [67]

    Supervised learning in DNA neural networks

Cherry KM, Qian L. Supervised learning in DNA neural networks. Nature. 2025;645:639–

  67. [68]

    https://doi.org/10.1038/s41586-025-09479-w

  68. [69]

    Optical Remote Sensing Image Understanding With Weak Supervision: Concepts, methods, and perspectives

    Yue J, Fang L, Ghamisi P, Xie W, Li J, Chanussot J, et al. Optical Remote Sensing Image Understanding With Weak Supervision: Concepts, methods, and perspectives. IEEE Geosci Remote Sens Mag. 2022;10:250–69. https://doi.org/10.1109/MGRS.2022.3161377

  69. [70]

    Learning from Incomplete and Inaccurate Supervision

Zhang Z-Y, Zhao P, Jiang Y, Zhou Z-H. Learning from Incomplete and Inaccurate Supervision. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. Anchorage AK USA: ACM; 2019. pp. 1017–25. https://doi.org/10.1145/3292500.3330902

  70. [71]

A brief introduction to weakly supervised learning

    Zhou Z-H. A brief introduction to weakly supervised learning. National Science Review. 2018;5:44–53. https://doi.org/10.1093/nsr/nwx106

  71. [72]

    Weakly supervised machine learning

    Ren Z, Wang S, Zhang Y. Weakly supervised machine learning. CAAI Trans on Intel Tech. 2023;8:549–80. https://doi.org/10.1049/cit2.12216

  72. [73]

Q-learning

    Watkins CJCH, Dayan P. Q-learning. Mach Learn. 1992;8:279–92. https://doi.org/10.1007/BF00992698

  73. [74]

    Reinforcement learning: An introduction

    Sutton RS, Barto AG, others. Reinforcement learning: An introduction. vol. 1. MIT press Cambridge; 1998

  74. [75]

    Deep reinforcement learning from human preferences

    Christiano PF, Leike J, Brown T, Martic M, Legg S, Amodei D. Deep reinforcement learning from human preferences. Advances in Neural Information Processing Systems. 2017;30

  75. [76]

    Reinforcement Learning for Sequential Decision and Optimal Control

    Li SE. Reinforcement Learning for Sequential Decision and Optimal Control. Singapore: Springer Nature Singapore; 2023. https://doi.org/10.1007/978-981-19-7784-8

  76. [77]

A survey on semi-supervised learning

    Van Engelen JE, Hoos HH. A survey on semi-supervised learning. Mach Learn. 2020;109:373–440. https://doi.org/10.1007/s10994-019-05855-6

  77. [78]

A Survey on Deep Semi-Supervised Learning

    Yang X, Song Z, King I, Xu Z. A Survey on Deep Semi-Supervised Learning. IEEE Trans Knowl Data Eng. 2023;35:8934–54. https://doi.org/10.1109/TKDE.2022.3220219

  78. [79]

Automated machine learning for positive-unlabelled learning

    Saunders JD, Freitas AA. Automated machine learning for positive-unlabelled learning. Appl Intell. 2025;55:875. https://doi.org/10.1007/s10489-025-06706-9

  79. [80]

    Learning from positive and unlabeled data: a survey

    Bekker J, Davis J. Learning from positive and unlabeled data: a survey. Mach Learn. 2020;109:719–60. https://doi.org/10.1007/s10994-020-05877-5

  80. [81]

A Survey of Deep Active Learning

    Ren P, Xiao Y, Chang X, Huang P-Y, Li Z, Gupta BB, et al. A Survey of Deep Active Learning. ACM Comput Surv. 2022;54:1–40. https://doi.org/10.1145/3472291

Showing first 80 references.