Negative Ontology of True Target for Machine Learning: Towards Evaluation and Learning under Democratic Supervision
Pith reviewed 2026-05-12 03:13 UTC · model grok-4.3
The pith
Machine learning should replace the assumption of one objective true target with multiple inaccurate targets under democratic supervision.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Grounded in the non-existence of an objective true target, democratic supervision for machine learning is defined and realized at the instance level through multiple inaccurate true targets. From this premise the paper derives principles for their logic-driven generation and assessment, a logical assessment formulation for evaluation, and undefinable true target learning for model training, yielding the EL-MIATTs framework for predictive modeling.
What carries the argument
Multiple Inaccurate True Targets (MIATTs), the instance-level mechanism that carries democratic supervision by supplying several approximate targets in place of any single objective one.
If this is right
- Logic-driven principles govern the generation and assessment of multiple inaccurate true targets.
- Evaluation proceeds via a logical assessment formulation that operates on those targets.
- Learning proceeds through undefinable true target learning that does not require a single fixed target.
- The resulting EL-MIATTs framework supports predictive modeling that aligns with democratic supervision.
Where Pith is reading between the lines
- Practitioners might systematically collect several human annotations per example rather than forcing a single consensus label.
- The approach could extend naturally to tasks where outcomes are inherently contested, such as risk scoring or content labeling.
- Models trained under this framework may prove more stable when tested on new data that carries similar ambiguity.
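To make the practice concrete, here is a minimal sketch of what instance-level evaluation against several inaccurate targets could look like. This is not the paper's logical assessment formulation (which is not specified in the excerpts); it is a hypothetical illustration of reporting an accuracy interval instead of a single score when no consensus label is forced. All names are illustrative.

```python
# Hypothetical sketch: score predictions against several inaccurate targets
# per instance, reporting an interval rather than one accuracy number.

def accuracy_interval(predictions, target_sets):
    """predictions: one label per instance; target_sets: one set of
    candidate (inaccurate) targets per instance."""
    assert len(predictions) == len(target_sets)
    n = len(predictions)
    # Optimistic bound: a prediction counts as correct if it matches ANY target.
    optimistic = sum(p in ts for p, ts in zip(predictions, target_sets)) / n
    # Pessimistic bound: correct only if ALL targets agree with it.
    pessimistic = sum(all(p == t for t in ts)
                      for p, ts in zip(predictions, target_sets)) / n
    return pessimistic, optimistic

preds = ["cat", "dog", "cat", "bird"]
targets = [{"cat"}, {"dog", "wolf"}, {"dog"}, {"bird"}]
print(accuracy_interval(preds, targets))  # (0.5, 0.75)
```

The width of the interval quantifies how much annotator disagreement matters for the model under evaluation.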
Load-bearing premise
No objective true target exists for machine learning tasks in the real world.
What would settle it
The discovery of even one predictive task in which a single target label can be fixed objectively and verified without reference to any human judgment or modeling choice would falsify the non-existence premise.
read the original abstract
This article philosophically examines how shifts in assumptions regarding the existence and non-existence of the true target (TT) give rise to new perspectives and insights for machine learning (ML)-based predictive modeling and, correspondingly, proposes a knowledge system for evaluation and learning under Democratic Supervision. By systematically analysing the existence assumption of the TT in current mainstream ML paradigms, we explicitly adopt a negative ontology perspective, positing that the TT does not objectively exist in the real world, and, grounded in this non-existence assumption, define Democratic Supervision for ML. We further present Multiple Inaccurate True Targets (MIATTs) as an instance-level realization of Democratic Supervision. Building upon MIATTs, we derive principles for the logic-driven generation and assessment of MIATTs, a logical assessment formulation for evaluation with MIATTs, and undefinable true target learning for learning with MIATTs. Based on these components, we establish the evaluation and learning with MIATTs (EL-MIATTs) framework for ML-based predictive modelling. A real-world application demonstrates the potential of the proposed EL-MIATTs framework in supporting education and professional development for individuals, aligning with prior discussions of Democratic Supervision in the fields of education and professional development.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper philosophically examines shifts in assumptions about the existence of a true target (TT) in ML-based predictive modeling. Adopting a negative ontology that the TT does not objectively exist in the real world, it defines Democratic Supervision for ML, presents Multiple Inaccurate True Targets (MIATTs) as an instance-level realization, derives principles for logic-driven generation and assessment of MIATTs along with a logical assessment formulation and undefinable true target learning, and assembles these into the EL-MIATTs framework for evaluation and learning. A real-world application in education and professional development is used to illustrate the framework.
Significance. If coherent and adopted, the work offers a conceptual reframing that could inform ML applications involving subjective or contested labels, such as in education, by prioritizing democratic processes over objective TT assumptions. It explicitly builds on prior discussions of Democratic Supervision in education and professional development. However, the absence of mathematical formalizations, algorithms, empirical results, or comparisons to existing methods (e.g., crowdsourced labeling or ensemble techniques) restricts its technical significance within core ML research.
major comments (1)
- Abstract: The central claims of the EL-MIATTs framework, including the derived principles for MIATTs generation/assessment and undefinable true target learning, are presented as following directly from the non-existence assumption of the TT. This renders the construction circular, as each component (Democratic Supervision, MIATTs, EL-MIATTs) is defined in terms of the input premise without independent grounding, validation, or a concrete test that could falsify the framework.
minor comments (2)
- The manuscript would benefit from explicit comparisons to related concepts in ML such as crowdsourcing, weak supervision, or multi-label learning to clarify novelty and avoid overlap.
- Given the conceptual focus, adding at least one worked example with concrete MIATTs instances and how the logical assessment formulation applies would improve accessibility for technical readers.
Simulated Author's Rebuttal
We thank the referee for their review and for highlighting this important point about the logical structure of our framework. We respond to the major comment below.
read point-by-point responses
- Referee: Abstract: The central claims of the EL-MIATTs framework, including the derived principles for MIATTs generation/assessment and undefinable true target learning, are presented as following directly from the non-existence assumption of the TT. This renders the construction circular, as each component (Democratic Supervision, MIATTs, EL-MIATTs) is defined in terms of the input premise without independent grounding, validation, or a concrete test that could falsify the framework.
  Authors: We respectfully disagree that the construction is circular. The negative ontology is adopted as an explicit foundational premise after systematic analysis of mainstream ML assumptions; Democratic Supervision is then defined as a direct consequence of this premise. MIATTs are introduced as a distinct instance-level operationalization, and the principles for logic-driven generation/assessment, the logical assessment formulation, and undefinable true target learning are derived through further step-by-step logical reasoning rather than by redefinition. The real-world application in education and professional development provides an illustrative grounding that aligns with prior literature on Democratic Supervision in those fields. Nevertheless, we acknowledge that the abstract could more clearly separate the premise from the subsequent derivations and will revise it to emphasize the logical progression and the illustrative (rather than falsifying) role of the application.
  Revision: partial
Circularity Check
No significant circularity identified
full rationale
The paper is explicitly philosophical and definitional: it begins with an explicit premise (negative ontology positing non-existence of an objective true target), then defines Democratic Supervision, MIATTs, and the EL-MIATTs framework as grounded realizations of that premise. No equations, algorithms, fitted parameters, or technical derivations appear in the provided text that could reduce by construction to the inputs. The structure is a standard conceptual development from stated assumptions rather than a self-referential loop or renamed fit; validity is framed as coherence and usefulness, not falsifiable technical steps.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption: The true target (TT) does not objectively exist in the real world
invented entities (3)
- Democratic Supervision (no independent evidence)
- Multiple Inaccurate True Targets (MIATTs) (no independent evidence)
- EL-MIATTs framework (no independent evidence)
Reference graph
Works this paper leans on
-
[1]
Introduction The true target (TT), which is a computationally equivalent transformation of the ground-truth, serves as a fundamental concept in the formulation and deployment of ML paradigms [1]. Assumptions regarding the TT are therefore crucial, as they implicitly define what is being learned, how supervision is interpreted, and how models are expected...
-
[2]
Machine Learning The objective of machine learning (ML) is to construct a predictive model with data collected for a specific prediction task based on efficient computing resources [45, 46]. This section introduces fundamental terminologies in ML, clarifies the interrelations among them, and discusses corresponding implications in shaping higher-level meth...
-
[3]
Existence Assumptions of True Target in Current Mainstream Machine Learning Paradigms Prior works [9, 13] have systematically examined the existence assumptions about TT underlying current major evaluation and learning paradigms. The evaluation paradigms considered include those based on accurate true targets (ATTs) [47–51] and those based on inaccurate t...
-
[4]
Explicitly Posited Non-Existence Assumption of True Target and Defined Democratic Supervision for Machine Learning We explicitly posit, in Assumption 1, the non-existence of the true target for ML from the perspective of negative ontology [9]. Assumption 1 (Negative Ontology of True Target for ML): The true target does not objectively exist in the real w...
-
[5]
Presented Multiple Inaccurate True Targets as an Instance-Level Realization of Democratic Supervision Grounded in the non-existence assumption of TT, Democratic Supervision enables a more inclusive research landscape, thereby extending ML research toward evaluation and learning under such a paradigm. This section presents a component for operationaliz...
-
[6]
Partial representation: SF(t_n*) ⊂ SF(t*), i.e., each t_n* encodes only a subset of the underlying true target's semantic facts
-
[7]
Collective coverage: ⋃_{n=1}^{N} SF(t_n*) ⊆ SF(t*), with the possibility that ⋃_{n=1}^{N} SF(t_n*) = SF(t*). In other words, no single t_n* fully specifies t*, but together the MIATTs set captures one or more of its essential aspects. Building on this foundation, MIATTs is an instance-level realization of Democratic Supervision, grounding the abstract paradigm in a con...
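The two set-theoretic conditions quoted above translate directly into set operations. A minimal sketch, assuming each target's semantic facts are represented as a Python set; the fact names are illustrative, not from the paper:

```python
# Check the two MIATTs conditions over semantic-fact sets.
# sf_true stands for SF(t*); each element of sf_targets for one SF(t_n*).

def is_valid_miatts(sf_true, sf_targets):
    # Partial representation: each SF(t_n*) is a proper subset of SF(t*).
    partial = all(sf < sf_true for sf in sf_targets)
    # Collective coverage: the union of all SF(t_n*) lies within SF(t*),
    # with equality allowed (together they may recover all of SF(t*)).
    coverage = set().union(*sf_targets) <= sf_true
    return partial and coverage

sf_true = {"lane_marking", "surface_color", "adjacent_curb"}
miatts = [{"lane_marking"}, {"surface_color", "adjacent_curb"}]
print(is_valid_miatts(sf_true, miatts))  # True
```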
-
[8]
Proposed EL-MIATTs: Evaluation and Learning with Multiple Inaccurate True Targets Building upon MIATTs, in this section, we propose the EL-MIATTs framework for evaluation and learning with MIATTs [43]. The framework is grounded in logic-driven MIATTs generation and assessment [42], logical assessment formula (LAF) for evaluation with MIATTs [13], and unde...
-
[9]
Conducted Application of EL-MIATTs for Supporting Education and Professional Development for Individuals Based on prior works [9, 13, 42, 43, 79], EL-MIATTs has been applied in a bicycle lane segmentation task [44]. In this application, we treated ourselves as the non-expert at identifying bicycle lanes in street images (i.e., assuming the TT of bicycle lan...
-
[10]
Conclusion In this article, philosophically examining the shifts in assumptions regarding the existence and non-existence of the TT, we have shown that relaxing the existence assumption of the TT to the non-existence assumption gives rise to a fundamentally different understanding of supervision. From the negative ontology perspective, we explicitly pos...
-
[11]
Moderately supervised learning: definition, framework and generality
Yang Y. Moderately supervised learning: definition, framework and generality. Artif Intell Rev. 2024;57:37. https://doi.org/10.1007/s10462-023-10654-6
-
[12]
Crowdsourcing as a Model for Problem Solving: An Introduction and Cases
Brabham DC. Crowdsourcing as a Model for Problem Solving: An Introduction and Cases. Convergence: The International Journal of Research into New Media Technologies. 2008;14:75–90. https://doi.org/10.1177/1354856507084420
-
[13]
Raykar VC, Yu S, Zhao LH, Valadez GH, Florin C, Bogoni L, et al. Learning from crowds. Journal of Machine Learning Research. 2010;11
-
[14]
Rodrigues F, Pereira F. Deep Learning from Crowds. AAAI. 2018;32. https://doi.org/10.1609/aaai.v32i1.11506
-
[16]
Deep Learning From Multiple Noisy Annotators as A Union
Wei H, Xie R, Feng L, Han B, An B. Deep Learning From Multiple Noisy Annotators as A Union. IEEE Trans Neural Netw Learning Syst. 2023;34:10552–62. https://doi.org/10.1109/TNNLS.2022.3168696
-
[17]
Natarajan N, Dhillon IS, Ravikumar PK, Tewari A. Learning with noisy labels. Advances in Neural Information Processing Systems. 2013;26
-
[18]
Learning From Noisy Labels With Deep Neural Networks: A Survey
Song H, Kim M, Park D, Shin Y, Lee J-G. Learning From Noisy Labels With Deep Neural Networks: A Survey. IEEE Trans Neural Netw Learning Syst. 2023;34:8135–53. https://doi.org/10.1109/TNNLS.2022.3152527
-
[19]
Undefinable True Target Learning: Towards Learning with Democratic Supervision
Yang Y. Undefinable True Target Learning: Towards Learning with Democratic Supervision. 2025. https://doi.org/10.32388/KBK3P8.5
-
[20]
Detecting helicobacter pylori in whole slide images via weakly supervised multi-task learning
Yang Y, Yang Y, Yuan Y, Zheng J, Zhongxi Z. Detecting helicobacter pylori in whole slide images via weakly supervised multi-task learning. Multimed Tools Appl. 2020;79:26787–. https://doi.org/10.1007/s11042-020-09185-x
-
[22]
Yang Y, Yang Y, Chen J, Zheng J, Zheng Z. Handling noisy labels via one-step abductive multi-target learning and its application to helicobacter pylori segmentation. Multimed Tools Appl. 2024. https://doi.org/10.1007/s11042-023-17743-2
-
[23]
Yang Y, Li F, Wei Y, Chen J, Chen N, Alobaidi MH, et al. One-step abductive multi-target learning with diverse noisy samples and its application to tumour segmentation for breast cancer. Expert Systems with Applications. 2024;251:123923. https://doi.org/10.1016/j.eswa.2024.123923
-
[24]
Logical assessment formula and its principles for evaluations with inaccurate ground-truth labels
Yang Y. Logical assessment formula and its principles for evaluations with inaccurate ground-truth labels. Knowl Inf Syst. 2024. https://doi.org/10.1007/s10115-023-02047-6
-
[25]
Yang Y, Bu H. Validation of the practicability of logical assessment formula for evaluations with inaccurate ground-truth labels: An application study on tumour segmentation for breast cancer. Comput Artif Intell. 2024;2:1443. https://doi.org/10.59400/cai.v2i2.1443
-
[26]
Learn2Agree: Fitting with Multiple Annotators Without Objective Ground Truth
Wang C, Gao Y, Fan C, Hu J, Lam TL, Lane ND, et al. Learn2Agree: Fitting with Multiple Annotators Without Objective Ground Truth. In: Chen H, Luo L, editors. Trustworthy Machine Learning for Healthcare, vol. 13932. Cham: Springer Nature Switzerland; 2023. pp. 147–62. https://doi.org/10.1007/978-3-031-39539-0_13
-
[27]
Learning from multiple annotators for medical image segmentation
Zhang L, Tanno R, Xu M, Huang Y, Bronik K, Jin C, et al. Learning from multiple annotators for medical image segmentation. Pattern Recognition. 2023;138:109400. https://doi.org/10.1016/j.patcog.2023.109400
-
[28]
Capturing Perspectives of Crowdsourced Annotators in Subjective Learning Tasks
Mokhberian N, Marmarelis MG, Hopp FR, Basile V, Morstatter F, Lerman K. Capturing Perspectives of Crowdsourced Annotators in Subjective Learning Tasks. 2023. https://doi.org/10.48550/ARXIV.2311.09743
-
[29]
Beyond confusion matrix: learning from multiple annotators with awareness of instance features
Li J, Sun H, Li J. Beyond confusion matrix: learning from multiple annotators with awareness of instance features. Mach Learn. 2023;112:1053–75. https://doi.org/10.1007/s10994-022-06211-x
-
[30]
Learning From Crowdsourced Noisy Labels: A signal processing perspective
Ibrahim S, Traganitis PA, Fu X, Giannakis GB. Learning From Crowdsourced Noisy Labels: A signal processing perspective. IEEE Signal Process Mag. 2025;42:84–106. https://doi.org/10.1109/MSP.2025.3572636
-
[31]
Cheap and fast–but is it good? evaluating non-expert annotations for natural language tasks
Snow R, O'connor B, Jurafsky D, Ng AY. Cheap and fast–but is it good? evaluating non-expert annotations for natural language tasks. Proceedings of the 2008 conference on empirical methods in natural language processing. 2008. pp. 254–63
-
[32]
Evaluating Crowdsourcing Participants in the Absence of Ground-Truth
Subramanian R, Rosales R, Fung G, Dy J. Evaluating Crowdsourcing Participants in the Absence of Ground-Truth. 2016. https://doi.org/10.48550/ARXIV.1605.09432
-
[33]
Crowdsourcing in the Absence of Ground Truth–A Case Study
Srinivasan R, Chander A. Crowdsourcing in the Absence of Ground Truth–A Case Study. arXiv Preprint arXiv:1906.07254. 2019
-
[34]
The multidimensional wisdom of crowds
Welinder P, Branson S, Perona P, Belongie S. The multidimensional wisdom of crowds. Advances in Neural Information Processing Systems. 2010;23
-
[35]
Maximum Likelihood Estimation of Observer Error-Rates Using the EM Algorithm
Dawid AP, Skene AM. Maximum Likelihood Estimation of Observer Error-Rates Using the EM Algorithm. Applied Statistics. 1979;28:20. https://doi.org/10.2307/2346806
-
[36]
Whose vote should count more: Optimal integration of labels from labelers of unknown expertise
Whitehill J, Wu T, Bergsma J, Movellan J, Ruvolo P. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. Advances in Neural Information Processing Systems. 2009;22
-
[37]
Making deep neural networks robust to label noise: A loss correction approach
Patrini G, Rozza A, Krishna Menon A, Nock R, Qu L. Making deep neural networks robust to label noise: A loss correction approach. Proceedings of the IEEE conference on computer vision and pattern recognition. 2017. pp. 1944–52
-
[38]
Angluin D, Laird P. Learning from noisy examples. Mach Learn. 1988;2:343 –70. https://doi.org/10.1007/BF00116829
-
[39]
Training deep neural networks on noisy labels with bootstrapping
Reed S, Lee H, Anguelov D, Szegedy C, Erhan D, Rabinovich A. Training deep neural networks on noisy labels with bootstrapping. arXiv Preprint arXiv:1412.6596. 2014
-
[40]
Co-teaching: Robust training of deep neural networks with extremely noisy labels
Han B, Yao Q, Yu X, Niu G, Xu M, Hu W, et al. Co-teaching: Robust training of deep neural networks with extremely noisy labels. Advances in Neural Information Processing Systems. 2018;31
-
[41]
EchoAlign: Bridging Generative and Discriminative Learning under Noisy Labels
Zheng Y, Han Z, Yin Y, Gao X, Liu T. Can We Treat Noisy Labels as Accurate? 2024. https://doi.org/10.48550/ARXIV.2405.12969
-
[42]
The’Problem’of Human Label Variation: On Ground Truth in Data, Modeling and Evaluation
Plank B. The’Problem’of Human Label Variation: On Ground Truth in Data, Modeling and Evaluation. arXiv Preprint arXiv:221102570. 2022
-
[43]
Frenay B, Verleysen M. Classification in the Presence of Label Noise: A Survey. IEEE Trans Neural Netw Learning Syst. 2014;25:845–69. https://doi.org/10.1109/TNNLS.2013.2292894
-
[44]
A Framework for Cluster and Classifier Evaluation in the Absence of Reference Labels
Joyce RJ, Raff E, Nicholas C. A Framework for Cluster and Classifier Evaluation in the Absence of Reference Labels. Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security. New York, NY, USA: ACM; 2021. pp. 73–84. https://doi.org/10.1145/3474369.3486867
-
[45]
Warfield SK, Zou KH, Wells WM. Simultaneous truth and performance level estimation (STAPLE): An algorithm for the validation of image segmentation. IEEE Transactions on Medical Imaging. 2004. https://doi.org/10.1109/TMI.2004.828354
-
[46]
Two Methods for Validating Brain Tissue Classifiers
Martin-Fernandez M, Bouix S, Ungar L, McCarley RW, Shenton ME. Two Methods for Validating Brain Tissue Classifiers. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 2005. pp. 515–22. https://doi.org/10.1007/11566465_64
-
[47]
On evaluating brain tissue classifiers without a ground truth
Bouix S, Martin-Fernandez M, Ungar L, Nakamura M, Koo MS, McCarley RW, et al. On evaluating brain tissue classifiers without a ground truth. NeuroImage. 2007. https://doi.org/10.1016/j.neuroimage.2007.04.031
-
[48]
Standards of democratic supervision
Waite D. Standards of democratic supervision. Standards for instructional supervision. Routledge; 2020. pp. 33–48
-
[49]
Lyons AF. Democratic supervision. The High School Journal. 1957;41:22–4
-
[50]
Democratic Supervision and Creative Supervision: Are They Possible Misnomers?
Helwig C. Democratic Supervision and Creative Supervision: Are They Possible Misnomers?. 1968
-
[51]
Some Suggestions for a Program of Democratic Supervision
Thayer V. Some Suggestions for a Program of Democratic Supervision. Educational Research Bulletin. 1927;177–82
-
[52]
Professional Development through Democratic Supervision
Jones NB. Professional Development through Democratic Supervision. 1995
-
[53]
Yang Y. Bridging Theory and Practice in Implementing EL-MIATTs: Logic-Driven Algorithms for MIATTs Generation and Assessment. 2025. https://doi.org/10.32388/0UD1AN
-
[54]
EL-MIATTs: Evaluation and Learning with Multiple Inaccurate True Targets
Yang Y. EL-MIATTs: Evaluation and Learning with Multiple Inaccurate True Targets. 2026. https://doi.org/10.32388/UMHEFG.4
-
[55]
Yang Y. From Theory to Practice: A Case Study on EL-MIATTs Framework for Bicycle Lane Segmentation in Street Images. Qeios. 2025. https://doi.org/10.32388/EZWLSN
-
[56]
Machine learning research: four current directions
Dietterich T. Machine learning research: four current directions. AI Magazine. 1997;18(4):97–136
-
[57]
Machine learning: Trends, perspectives, and prospects
Jordan MI, Mitchell TM. Machine learning: Trends, perspectives, and prospects. Science. 2015;349:255–60. https://doi.org/10.1126/science.aaa8415
-
[58]
Evaluating Classifiers Without Expert Labels
Jung HJ, Lease M. Evaluating Classifiers Without Expert Labels. 2012. https://doi.org/10.48550/arxiv.1212.0960
-
[59]
Deng W, Zheng L. Are Labels Always Necessary for Classifier Accuracy Evaluation? Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2021. pp. 15069–78
-
[60]
Performance measure characterization for evaluating neuroimage segmentation algorithms
Chang HH, Zhuang AH, Valentino DJ, Chu WC. Performance measure characterization for evaluating neuroimage segmentation algorithms. NeuroImage. 2009. https://doi.org/10.1016/j.neuroimage.2009.03.068
-
[61]
Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool
Taha AA, Hanbury A. Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool. BMC Medical Imaging. 2015;15:29. https://doi.org/10.1186/s12880-015-0068-x
-
[62]
A Review on Evaluation Metrics for Data Classification Evaluations
M H, M.N S. A Review on Evaluation Metrics for Data Classification Evaluations. International Journal of Data Mining & Knowledge Management Process. 2015;5:01–11. https://doi.org/10.5121/ijdkp.2015.5201
-
[63]
Statistical Learning Theory: Models, Concepts, and Results
Luxburg UV, Schölkopf B. Statistical Learning Theory: Models, Concepts, and Results. Handbook of the History of Logic, vol. 10. Elsevier; 2011. pp. 651–706. https://doi.org/10.1016/B978-0-444-52936-7.50016-1
-
[64]
James G, Witten D, Hastie T, Tibshirani R, Taylor J. Unsupervised Learning. An Introduction to Statistical Learning. Cham: Springer International Publishing; 2023. pp. 503–56. https://doi.org/10.1007/978-3-031-38747-0_12
-
[65]
Unsupervised learning: foundations of neural computation
Hinton G, Sejnowski TJ. Unsupervised learning: foundations of neural computation. MIT press; 1999
-
[66]
Cunningham P, Cord M, Delany SJ. Supervised Learning. In: Cord M, Cunningham P, editors. Machine Learning Techniques for Multimedia. Berlin, Heidelberg: Springer Berlin Heidelberg; 2008. pp. 21–49. https://doi.org/10.1007/978-3-540-75171-7_2
-
[67]
Supervised learning in DNA neural networks
Cherry KM, Qian L. Supervised learning in DNA neural networks. Nature. 2025;645:639–. https://doi.org/10.1038/s41586-025-09479-w
-
[69]
Yue J, Fang L, Ghamisi P, Xie W, Li J, Chanussot J, et al. Optical Remote Sensing Image Understanding With Weak Supervision: Concepts, methods, and perspectives. IEEE Geosci Remote Sens Mag. 2022;10:250–69. https://doi.org/10.1109/MGRS.2022.3161377
-
[70]
Learning from Incomplete and Inaccurate Supervision
Zhang Z-Y, Zhao P, Jiang Y, Zhou Z-H. Learning from Incomplete and Inaccurate Supervision. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. Anchorage AK USA: ACM; 2019. pp. 1017–25. https://doi.org/10.1145/3292500.3330902
-
[71]
A brief introduction to weakly supervised learning
Zhou Z-H. A brief introduction to weakly supervised learning. National Science Review. 2018;5:44–53. https://doi.org/10.1093/nsr/nwx106
-
[72]
Weakly supervised machine learning
Ren Z, Wang S, Zhang Y. Weakly supervised machine learning. CAAI Trans on Intel Tech. 2023;8:549–80. https://doi.org/10.1049/cit2.12216
-
[73]
Watkins CJCH, Dayan P. Q-learning. Mach Learn. 1992;8:279–92. https://doi.org/10.1007/BF00992698
-
[74]
Reinforcement learning: An introduction
Sutton RS, Barto AG, others. Reinforcement learning: An introduction. vol. 1. MIT press Cambridge; 1998
-
[75]
Deep reinforcement learning from human preferences
Christiano PF, Leike J, Brown T, Martic M, Legg S, Amodei D. Deep reinforcement learning from human preferences. Advances in Neural Information Processing Systems. 2017;30
-
[76]
Reinforcement Learning for Sequential Decision and Optimal Control
Li SE. Reinforcement Learning for Sequential Decision and Optimal Control. Singapore: Springer Nature Singapore; 2023. https://doi.org/10.1007/978-981-19-7784-8
-
[77]
A survey on semi-supervised learning
Van Engelen JE, Hoos HH. A survey on semi-supervised learning. Mach Learn. 2020;109:373–440. https://doi.org/10.1007/s10994-019-05855-6
-
[78]
A Survey on Deep Semi-Supervised Learning
Yang X, Song Z, King I, Xu Z. A Survey on Deep Semi-Supervised Learning. IEEE Trans Knowl Data Eng. 2023;35:8934–54. https://doi.org/10.1109/TKDE.2022.3220219
-
[79]
Automated machine learning for positive-unlabelled learning
Saunders JD, Freitas AA. Automated machine learning for positive-unlabelled learning. Appl Intell. 2025;55:875. https://doi.org/10.1007/s10489-025-06706-9
-
[80]
Learning from positive and unlabeled data: a survey
Bekker J, Davis J. Learning from positive and unlabeled data: a survey. Mach Learn. 2020;109:719–60. https://doi.org/10.1007/s10994-020-05877-5
-
[81]
A Survey of Deep Active Learning
Ren P, Xiao Y, Chang X, Huang P-Y, Li Z, Gupta BB, et al. A Survey of Deep Active Learning. ACM Comput Surv. 2022;54:1–40. https://doi.org/10.1145/3472291