pith. machine review for the scientific record.

arxiv: 2604.13456 · v1 · submitted 2026-04-15 · 💻 cs.LG · cs.CV

Recognition: unknown

MyoVision: A Mobile Research Tool and NEATBoost-Attention Ensemble Framework for Real Time Chicken Breast Myopathy Detection

Authors on Pith: no claims yet

Pith reviewed 2026-05-10 14:05 UTC · model grok-4.3

classification 💻 cs.LG cs.CV
keywords myopathy detection · chicken breast · smartphone imaging · transillumination · NEAT neuroevolution · ensemble learning · poultry quality · texture descriptors

The pith

Smartphone transillumination and a NEAT-tuned ensemble classify chicken breast myopathies at 82.4% accuracy.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces MyoVision, a framework that uses ordinary smartphones to capture transilluminated images of chicken fillets and extract texture features that reveal internal myopathy defects. It pairs this acquisition method with a NEATBoost-Attention Ensemble that automatically evolves an optimal fusion of gradient boosting and attention-based neural models. On 336 commercial samples the system reaches 82.4% accuracy and an F1 of 0.83, matching the performance of far more expensive hyperspectral imagers while remaining portable and low-cost. The work shows that consumer-grade RGB-D hardware plus neuroevolution can support scalable, non-destructive meat-quality inspection without laboratory equipment.

Core claim

MyoVision captures 14-bit RAW transillumination images on consumer smartphones, derives structural texture descriptors from them, and classifies the fillets into Normal, Woody Breast, or Spaghetti Meat using a NEAT-optimized weighted ensemble of LightGBM and attention-based MLP models; on a 336-fillet commercial dataset this pipeline attains 82.4% test accuracy (F1 = 0.83) while outperforming standard machine-learning and deep-learning baselines.
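The exact descriptor set is not enumerated in this summary, so as a hedged illustration only, here is the kind of generic texture statistic such a pipeline might compute from a grayscale transillumination patch. The function name, the chosen statistics, the smoothness formula, and the toy patch are all illustrative assumptions, not the authors' feature set:

```python
import numpy as np

def texture_descriptors(img):
    """A few generic texture statistics of a grayscale image; stand-ins for
    the paper's (unspecified here) structural descriptors."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)            # finite-difference gradients
    grad_mag = np.hypot(gx, gy)          # per-pixel gradient magnitude
    return {
        "mean": img.mean(),
        "std": img.std(),
        "grad_energy": (grad_mag ** 2).mean(),        # edge/texture strength
        "smoothness": 1.0 - 1.0 / (1.0 + img.var()),  # 0 for flat regions
    }

# Toy 4x4 "transillumination" patch with a bright vertical band.
patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
feats = texture_descriptors(patch)
```

A real pipeline would compute many such statistics per fillet and feed the resulting tabular vector to the classifier stage.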

What carries the argument

NEATBoost-Attention Ensemble: a neuroevolution-optimized weighted fusion of LightGBM and attention-based MLP classifiers whose hyperparameters and architecture are discovered automatically by NEAT to handle small tabular texture-descriptor datasets.
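A minimal sketch of what such a weighted fusion looks like, assuming soft-voting with a single scalar weight. The grid search below merely stands in for NEAT's evolutionary search, and all probabilities are toy values, not outputs of the paper's models:

```python
import numpy as np

def fuse_predictions(p_gbm, p_mlp, w):
    """Weighted soft-vote of two class-probability matrices (n_samples, n_classes)."""
    return w * p_gbm + (1.0 - w) * p_mlp

def evolve_weight(p_gbm, p_mlp, y, candidates):
    """Pick the fusion weight maximizing accuracy; a crude stand-in for NEAT's search."""
    best_w, best_acc = None, -1.0
    for w in candidates:
        preds = fuse_predictions(p_gbm, p_mlp, w).argmax(axis=1)
        acc = float((preds == y).mean())
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc

# Toy probabilities for 4 fillets over 3 classes (Normal, WB, SM).
p_gbm = np.array([[0.7, 0.2, 0.1], [0.3, 0.6, 0.1], [0.2, 0.3, 0.5], [0.5, 0.4, 0.1]])
p_mlp = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.1, 0.2, 0.7], [0.3, 0.6, 0.1]])
y = np.array([0, 1, 2, 0])
w, acc = evolve_weight(p_gbm, p_mlp, y, candidates=np.linspace(0, 1, 11))
```

The actual framework evolves architectures and hyperparameters jointly, not just one weight, but the fusion step reduces to this shape at inference time.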

If this is right

  • Enables portable, non-destructive multi-class myopathy screening at commercial scale without hyperspectral hardware.
  • Replaces subjective manual palpation with an objective, reproducible image-based metric.
  • Supplies a documented mobile RGB-D acquisition pipeline that other meat-quality researchers can replicate.
  • Demonstrates that neuroevolution can automate ensemble design for small tabular feature sets without manual hyperparameter search.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The approach could be extended to real-time inspection on moving processing lines by mounting the smartphone rig above the conveyor.
  • Similar transillumination plus neuroevolution pipelines might apply to defect detection in other translucent foods such as fish fillets or fruits.
  • If the texture features prove robust, the method could reduce reliance on destructive sampling and lower overall quality-control costs in poultry plants.

Load-bearing premise

Texture descriptors extracted from smartphone transillumination images reliably signal internal myopathy abnormalities across varying fillet thicknesses, lighting conditions, and processing variations, and the 336-sample dataset from one facility generalizes to broader commercial populations.

What would settle it

Accuracy falling below 75% when the same pipeline is tested on an independent set of at least 500 fillets collected from multiple processing plants under uncontrolled factory lighting.

Figures

Figures reproduced from arXiv: 2604.13456 by Chaitanya Pallerla, Dongyi Wang, Siavash Mahmoudi.

Figure 1. Overview of the proposed NEATBoost-Attention ensemble framework. Transillumination images are converted into handcrafted …
Figure 2. MyoVision Application Interface: Multi-Modal Acquisition and Analysis Pipeline.
Figure 3. Transilluminated images of chicken breast fillets show …
Figure 4. LDA projection of backlighting features (top) and Ran…
Figure 5. Row-normalized confusion matrices of the NEATBoost…
read the original abstract

Woody Breast (WB) and Spaghetti Meat (SM) myopathies significantly impact poultry meat quality, yet current detection methods rely either on subjective manual evaluation or costly laboratory-grade imaging systems. We address the problem of low-cost, non-destructive multi-class myopathy classification using consumer smartphones. MyoVision is introduced as a mobile transillumination imaging framework in which 14-bit RAW images are captured and structural texture descriptors indicative of internal tissue abnormalities are extracted. To classify three categories (Normal, Woody Breast, Spaghetti Meat), we propose a NEATBoost-Attention Ensemble model, which is a neuroevolution-optimized weighted fusion of LightGBM and attention-based MLP models. Hyperparameters are automatically discovered using NeuroEvolution of Augmenting Topologies (NEAT), eliminating manual tuning and enabling architecture diversity for small tabular datasets. On a dataset of 336 fillets collected from a commercial processing facility, our method achieves 82.4% test accuracy (F1 = 0.83), outperforming conventional machine learning and deep learning baselines and matching performance reported by hyperspectral imaging systems costing orders of magnitude more. Beyond classification performance, MyoVision establishes a reproducible mobile RGB-D acquisition pipeline for multimodal meat quality research, demonstrating that consumer-grade imaging can support scalable internal tissue assessment.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The manuscript introduces MyoVision, a smartphone-based transillumination imaging framework for non-destructive, multi-class classification of chicken breast myopathies (Normal, Woody Breast, Spaghetti Meat). It extracts structural texture descriptors from 14-bit RAW images and proposes a NEATBoost-Attention Ensemble that fuses LightGBM and attention-based MLP models, with hyperparameters and architecture discovered via NeuroEvolution of Augmenting Topologies (NEAT). On a 336-fillet dataset from one commercial facility, the method reports 82.4% test accuracy (F1=0.83), outperforming standard ML and DL baselines while matching performance of far more expensive hyperspectral systems; it also positions the mobile pipeline as a reproducible research tool.

Significance. If the performance and generalization claims are substantiated with rigorous validation, the work could enable scalable, low-cost myopathy screening in poultry processing plants using consumer hardware, reducing reliance on subjective manual inspection or laboratory-grade equipment. The NEAT-driven ensemble optimization for small tabular texture data represents a practical application of neuroevolution that may interest the ML community working on resource-constrained or data-limited classification tasks.

major comments (3)
  1. [Results section] The central claim of 82.4% test accuracy (F1=0.83) and outperformance over baselines is reported on 336 samples without any description of the train-test split procedure (random vs. stratified vs. by-bird/batch to avoid leakage), cross-validation scheme, or confirmation that NEAT evolution was confined to training data only. For a small single-facility dataset this information is load-bearing for assessing overfitting risk and the reliability of the generalization statement.
  2. [Experimental setup / Methods] No details are given on baseline implementations (which conventional ML and DL models were used, how their hyperparameters were tuned), statistical significance testing of the performance difference, or error analysis (confusion matrix, per-class precision/recall, or failure cases under varying fillet thickness/lighting). These omissions prevent verification of the claim that the ensemble matches hyperspectral performance.
  3. [Dataset and evaluation] The manuscript does not report class balance, label validation against histology or destructive testing, or any external test set from additional facilities. The weakest assumption—that smartphone transillumination texture descriptors reliably proxy internal myopathy across processing variations—therefore remains untested, directly undermining the practical utility claim.
minor comments (2)
  1. [Abstract] The statement that the method 'outperforms conventional machine learning and deep learning baselines' should be accompanied by the specific baseline names and quantitative deltas rather than a qualitative claim.
  2. [Throughout] Notation and reproducibility: ensure all acronyms (NEAT, WB, SM, RAW) are defined at first use; provide a clear description of the 14-bit RAW capture pipeline and the exact texture descriptors extracted so that the mobile acquisition protocol can be reproduced.
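For reference, the significance test the referee asks for can be run on paired predictions alone. A hedged sketch of an exact (binomial) McNemar test follows; the counts in the example are invented, not the paper's outcomes:

```python
from math import comb

def mcnemar_exact(y_true, pred_a, pred_b):
    """Exact two-sided McNemar test: do two classifiers' error rates differ?

    Only discordant pairs matter: b = A right / B wrong, c = A wrong / B right.
    Under the null, the discordant outcomes follow Binomial(b + c, 0.5).
    """
    b = sum(a == t and o != t for t, a, o in zip(y_true, pred_a, pred_b))
    c = sum(a != t and o == t for t, a, o in zip(y_true, pred_a, pred_b))
    n = b + c
    if n == 0:
        return 1.0
    tail = sum(comb(n, i) for i in range(min(b, c) + 1)) * 0.5 ** n
    return min(1.0, 2.0 * tail)

# Invented outcomes: model A fixes 14 of baseline B's errors and loses 3.
y_true = [0] * 20
pred_a = [0] * 14 + [1] * 3 + [0] * 3
pred_b = [1] * 14 + [0] * 3 + [0] * 3
p_value = mcnemar_exact(y_true, pred_a, pred_b)
```

The exact form is appropriate here because a 336-sample test set leaves few discordant pairs, where the chi-square approximation is unreliable.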

Simulated Author's Rebuttal

3 responses · 1 unresolved

We thank the referee for the constructive and detailed feedback, which has helped us strengthen the manuscript's rigor and clarity. We have revised the paper to incorporate additional experimental details, evaluation metrics, and discussions of limitations. Below we provide point-by-point responses to the major comments.

read point-by-point responses
  1. Referee: [Results section] The central claim of 82.4% test accuracy (F1=0.83) and outperformance over baselines is reported on 336 samples without any description of the train-test split procedure (random vs. stratified vs. by-bird/batch to avoid leakage), cross-validation scheme, or confirmation that NEAT evolution was confined to training data only. For a small single-facility dataset this information is load-bearing for assessing overfitting risk and the reliability of the generalization statement.

    Authors: We agree this information is critical and apologize for the initial omission. In the revised Results section, we now explicitly describe the evaluation protocol: a stratified 70/15/15 train/validation/test split was used to preserve class proportions and avoid leakage. Additionally, we report 5-fold stratified cross-validation results for robustness. The NEAT evolution (architecture search, hyperparameter optimization, and fitness evaluation) was performed exclusively on the training data, with the validation set reserved for model selection and early stopping. These changes directly address overfitting concerns for the small dataset. revision: yes

  2. Referee: [Experimental setup / Methods] No details are given on baseline implementations (which conventional ML and DL models were used, how their hyperparameters were tuned), statistical significance testing of the performance difference, or error analysis (confusion matrix, per-class precision/recall, or failure cases under varying fillet thickness/lighting). These omissions prevent verification of the claim that the ensemble matches hyperspectral performance.

    Authors: We have substantially expanded the Methods and Results sections. Baseline models are now detailed: conventional ML includes SVM, Random Forest, and XGBoost (tuned via grid search); DL baselines include a custom CNN and ResNet-18 (tuned via Bayesian optimization on the same features). Statistical significance is assessed using McNemar's test, confirming the ensemble's outperformance (p<0.05). We added the full confusion matrix, per-class precision/recall/F1, and an error analysis subsection examining misclassifications linked to fillet thickness and lighting variations, with robustness checks under simulated conditions. revision: yes

  3. Referee: [Dataset and evaluation] The manuscript does not report class balance, label validation against histology or destructive testing, or any external test set from additional facilities. The weakest assumption—that smartphone transillumination texture descriptors reliably proxy internal myopathy across processing variations—therefore remains untested, directly undermining the practical utility claim.

    Authors: We have added class balance details to the Dataset section (42% Normal, 33% Woody Breast, 25% Spaghetti Meat). Labels were assigned by facility experts using standard visual and tactile protocols, consistent with industry practice; histology validation was not feasible given the non-destructive study design. We now include a dedicated Limitations section acknowledging the single-facility dataset and lack of external validation as a scope limitation, while noting that the data incorporates natural variations in processing parameters. We also added experiments demonstrating descriptor stability under thickness and lighting perturbations to support the proxy assumption within the reported conditions. revision: partial
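The evaluation protocol described in the responses above (stratified 70/15/15 split over 336 fillets with the stated 42/33/25 class balance) can be sketched as follows. The per-class counts 141/111/84 are illustrative roundings of the stated percentages, and this is not the authors' code:

```python
import numpy as np

def stratified_split(y, fracs=(0.70, 0.15, 0.15), seed=0):
    """Index split preserving class proportions in each of train/val/test."""
    rng = np.random.default_rng(seed)
    splits = ([], [], [])
    for cls in np.unique(y):
        idx = rng.permutation(np.flatnonzero(y == cls))
        n_tr = round(fracs[0] * len(idx))
        n_va = round(fracs[1] * len(idx))
        splits[0].extend(idx[:n_tr])
        splits[1].extend(idx[n_tr:n_tr + n_va])
        splits[2].extend(idx[n_tr + n_va:])
    return tuple(np.sort(s) for s in splits)

# Illustrative labels: 141 Normal (0), 111 Woody Breast (1), 84 Spaghetti Meat (2).
y = np.repeat([0, 1, 2], [141, 111, 84])
train, val, test = stratified_split(y)
```

Splitting per class before concatenating is what keeps each partition's class mix close to the full dataset's, which matters at this sample size.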

standing simulated objections not resolved
  • Lack of an external test set from additional facilities (current dataset limited to one commercial source)

Circularity Check

0 steps flagged

No circularity: empirical test accuracy is independently measured, not derived by construction

full rationale

The paper's central result is an empirical test accuracy (82.4%, F1=0.83) measured on a held-out portion of the 336-fillet dataset after training a NEAT-optimized ensemble of LightGBM and attention MLP models. No equations, self-definitional loops, or load-bearing self-citations are present that would make the reported performance equivalent to its inputs by construction. Hyperparameter discovery via NEAT is a standard optimization step whose output (the final model) is then evaluated on separate test data; the accuracy figure is not a fitted quantity renamed as a prediction. The headline number therefore rests on an independent empirical measurement rather than a circular derivation chain.

Axiom & Free-Parameter Ledger

2 free parameters · 2 axioms · 0 invented entities

The central claim rests on the imaging technique capturing relevant internal features and on the ML model generalizing from a modest single-facility dataset; no new physical entities are postulated.

free parameters (2)
  • NEAT evolution parameters
    Population size, generations, and mutation rates are either preset or evolved but constitute tunable elements that affect the final ensemble architecture.
  • Ensemble fusion weights
    Weights combining LightGBM and attention MLP outputs are optimized during training and directly influence the reported accuracy.
axioms (2)
  • domain assumption Transillumination images yield structural texture descriptors that indicate internal tissue abnormalities
    Invoked in the framework description as the basis for feature extraction.
  • domain assumption The 336-fillet dataset is representative for training and evaluating generalization
    Implicit in reporting test accuracy as evidence of real-world utility.

pith-pipeline@v0.9.0 · 5542 in / 1621 out tokens · 73245 ms · 2026-05-10T14:05:28.331849+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

62 extracted references · 39 canonical work pages · 5 internal anchors

  1. [1] Vivek A. Kuttappan, Billy M. Hargis, and Casey M. Owens. White striping and woody breast myopathies in the modern poultry industry: A review. Poultry Science, 95(11):2724–2733, 2016. doi: 10.3382/ps/pew216.

  2. [2] Francesca Soglia, Samer Mudalal, Elena Babini, Mattia Di Nunzio, Maurizio Mazzoni, Federico Sirri, Claudio Cavani, and Massimiliano Petracci. Histology, composition, and quality traits of chicken Pectoralis major muscle affected by wooden breast abnormality. Poultry Science, 95(3):651–659, 2016. doi: 10.3382/ps/pev353.

  3. [3] V. V. Tijare, F. L. Yang, V. A. Kuttappan, C. Z. Alvarado, C. N. Coon, and C. M. Owens. Meat quality of broiler breast fillets with White Striping and Woody Breast muscle myopathies. Poultry Science, 95(9):2167–2173, 2016. doi: 10.3382/ps/pew129.

  4. [4] Chaitanya Pallerla, Yihong Feng, Casey M. Owens, and Dongyi Wang. Neural network architecture search enabled wide-deep learning (NAS-WD) for spatially heterogenous property awared chicken woody breast classification and hardness regression. Artificial Intelligence in Agriculture, 14:73–85, 2024. doi: 10.1016/j.aiia.2024.11.003.

  5. [5] Jens Petter Wold, Ingrid Måge, Atle Løvland, Karen Wahlstrøm Sanden, and Ragni Ofstad. Near-infrared spectroscopy detects woody breast syndrome in chicken fillets by markers related to meat quality parameters. Poultry Science, 98(1):480–490, 2019. doi: 10.3382/ps/pey351.

  6. [6–7] Seung Chul Yoon, Brian C. Bowker, and Hong Zhuang. Development of imaging system for online detection of chicken meat with wooden breast condition. Sensors, 22(3):1036, 2022. doi: 10.3390/s22031036.

  8. [8] USDA NASS. Poultry slaughter: 2022 summary. Technical report, United States Department of Agriculture, National Agricultural Statistics Service, 2023. URL: https://downloads.usda.library.cornell.edu/usda-esmis/files/pg15bd88s/m613p944x/ht24xx05j/pslaan23.pdf.

  9. [9] Tianyu Han, Yi Xiong, Amin Engarnevis, and Jingwen Li. Non-destructively qualitative and quantitative inspection methods based on THz spectroscopy and imaging. Optical Engineering, 63(2):023101, 2024. doi: 10.1117/1.OE.63.2.023101.

  10. [10] José Blasco, Nuria Aleixos, and Enrique Moltó. Machine vision system for automatic quality grading of fruit. Biosystems Engineering, 85(4):415–423, 2003. doi: 10.1016/S1537-5110(03)00088-6.

  11. [11] Wenqian Huang, Jiangbo Li, Qingyan Wang, and Liping Chen. Development of a multispectral imaging system for online detection of bruises on apples. Journal of Food Engineering, 146:62–71, 2015. doi: 10.1016/j.jfoodeng.2014.09.002.

  12. [12] Athanasios D. Zacharopoulos, Martin Schweiger, Ville Kolehmainen, and Simon R. Arridge. 3D shape-based reconstruction of experimental data in diffuse optical tomography. Optics Express, 17(21):18940–18956, 2009. doi: 10.1364/OE.17.018940.

  13. [13] Brian C. Bowker, Hong Zhuang, Nader Ekramirad, and Seung-Chul Yoon. Nondestructive assessment of woody breast myopathy in chicken fillets using optical coherence tomography imaging with machine learning. Journal of Food Engineering, 2024. doi: 10.1016/j.jfoodeng.2024.111XXX.

  14. [14] Nader Ekramirad, Seung-Chul Yoon, Brian C. Bowker, and Hong Zhuang. Nondestructive assessment of woody breast myopathy in chicken fillets using optical coherence tomography imaging with machine learning. Food and Bioprocess Technology, 17(11):4053–4070, 2024. doi: 10.1007/s11947-024-03369-1.

  15. [15] Dipsikha Chatterjee, Hong Zhuang, Brian C. Bowker, Gerardo Sanchez-Brambila, and Arnulfo M. Rincon. Instrumental texture characteristics of broiler pectoralis major with the woody breast condition. Poultry Science, 95(10):2449–2454, 2016. doi: 10.3382/ps/pew204.

  16. [16] Bruna Caroline Geronimo, Saulo Martiello Mastelini, Rafael Humberto Carvalho, Sylvio Barbon Jr., Douglas Fernandes Barbin, Massami Shimokomaki, and Elza Iouko Ida. Computer vision system and near-infrared spectroscopy for identification and classification of chicken with wooden breast, and physicochemical and technological characterization. Infrared Ph…

  17. [17] Míriam Muñoz-Lapeira, Jens Petter Wold, Anna Jofré, Maria Font-i Furnols, Susana Sayavera, and Cristina Zomeño. Visible near-infrared hyperspectral imaging as a tool to characterise chicken breasts with myopathies and their durability. Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy, 335:125954, 2025. doi: 10.1016/j.saa.2025.…

  18. [18] James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13:281–305, 2012. URL: https://jmlr.org/papers/v13/bergstra12a.html.

  19. [19] Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages 2960–2968, 2012.

  20. [20] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. URL: https://www.deeplearningbook.org.

  21. [21] Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. On the difficulty of training recurrent neural networks. In Proceedings of the 30th International Conference on Machine Learning (ICML), pages 1310–1318, 2013. URL: https://proceedings.mlr.press/v28/pascanu13.html.

  22. [22] Kenneth O. Stanley and Risto Miikkulainen. Evolving neural networks through augmenting topologies. Evolutionary Computation, 10(2):99–127, 2002. doi: 10.1162/106365602320169811.

  23. [23] Kenneth O. Stanley, Jeff Clune, Joel Lehman, and Risto Miikkulainen. Designing neural networks through neuroevolution. Nature Machine Intelligence, 1(1):24–35, 2019. doi: 10.1038/s42256-018-0006-z.

  24. [24] Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V. Le. Regularized evolution for image classifier architecture search. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 4780–4789, 2019. doi: 10.1609/aaai.v33i01.33014780.

  25. [25] Michael Kazhdan and Hugues Hoppe. Screened Poisson surface reconstruction. ACM Transactions on Graphics, 32(3):29:1–29:13, 2013. doi: 10.1145/2487228.2487237.

  26. [26] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, and Ross Girshick. Segment anything. arXiv preprint arXiv:2304.02643, 2023.

  27. [27] OpenAI. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023.

  28. [28] Riccardo Dainelli, Antonio Bruno, Massimo Martinelli, Davide Moroni, Leandro Rocchi, Silvia Morelli, Emilio Ferrari, Marco Silvestri, and Piero Toscano. GranoScan: An AI-powered mobile app for in-field identification of biotic threats of wheat. Frontiers in Plant Science, 15, 2024. doi: 10.3389/fpls.2024.1298791.

  29. [29] Mariam Reda, Rawan Suwwan, Seba Alkafri, Yara Rashed, and Tamer Shanableh. AgroAId: A mobile app system for visual classification of plant species and diseases using deep learning and TensorFlow Lite. Informatics, 9(3):55, 2022. doi: 10.3390/informatics9030055.

  30. [30] Mudassir Iftikhar, Irfan Ali Kandhro, Neha Kausar, Asadullah Kehar, Mueen Uddin, and Abdulhalim Dandoush. Plant disease management: A fine-tuned enhanced CNN approach with mobile app integration for early detection and classification. Artificial Intelligence Review, 57:167, 2024. doi: 10.1007/s10462-024-10809-z.

  31. [31–32] Sejal Rahul Trivedi and Neha Sharma. CropLeafNet: A dynamic deep learning framework for real-time multi-plant, multi-disease detection under diverse environmental conditions. International Journal of Information Technology. doi: 10.1007/s41870-025-02969-0.

  33. [33] I. Avci, M. Koca, and Y. Z. Khan. A lightweight mobile deep learning framework for real-time plant disease detection in smart agriculture. In Proceedings of the International Symposium on Innovative Approaches in Smart Technologies (ISAS), 2025. doi: 10.1109/isas66241.2025.11101803.

  34. [34] Awais Amir Niaz, Rehan Ashraf, Toqeer Mahmood, C. M. Nadeem Faisal, and Muhammad Mobeen Abid. An efficient smart phone application for wheat crop diseases detection using advanced machine learning. PLOS ONE, 20(1):e0312768, 2025. doi: 10.1371/journal.pone.0312768.

  35. [35] Denish Goklani. Real-time plant disease detection using mobile device with TensorFlow Lite and Flutter. SSRN Electronic Journal, 2024. doi: 10.2139/ssrn.4921827.

  36. [36–37] Guilherme L. Menezes, Dante T. Valente Junior, Rafael E. P. Ferreira, Dario A. B. Oliveira, Julcimara A. Araujo, Marcio Duarte, and Joao R. R. Dorea. Empowering informed choices: How computer vision can assist consumers in making decisions about meat quality. Meat Science, 219:109675. doi: 10.1016/j.meatsci.2024.109675.

  38. [38]

    Smartphone-based sensing system for identifying artificially marbled beef using texture and color analysis to enhance food safety.Sensors, 25(14):4440, 2025

    Hong-Dar Lin, Yi-Ting Hsieh, and Chou-Hsien Lin. Smartphone-based sensing system for identifying artificially marbled beef using texture and color analysis to enhance food safety.Sensors, 25(14):4440, 2025. doi: 10.3390/ s25144440. URLhttps : / / doi . org / 10 . 3390 / s25144440. 2

  39. [39]

    Smartphone-based detection and classification of poultry diseases from chicken fecal images using deep learning tech- niques.Smart Agricultural Technology, 4:100221, 2023

    Mizanu Zelalem Degu and Gizeaddis Lamesgin Simegn. Smartphone-based detection and classification of poultry diseases from chicken fecal images using deep learning tech- niques.Smart Agricultural Technology, 4:100221, 2023. doi: 10.1016/j.atech.2023.100221. URLhttps://doi.org/ 10.1016/j.atech.2023.100221. 2

  40. [40]

    Wani, Yasir Afzal Beigh, and Majid Shafi

    Arnab Jyoti Kalita, Mirash Subba, Sheikh Adil, Manzoor A. Wani, Yasir Afzal Beigh, and Majid Shafi. Application of artificial intelligence and machine learning in poultry dis- ease detection and diagnosis: A review.Letters in Animal Biology, 5(1), 2025. doi: 10.62310/liab.v5i1.155. URL https://doi.org/10.62310/liab.v5i1.155. 2

  41. [41]

    A review on computer vision systems in monitoring of poultry: A welfare perspective.Ar- tificial Intelligence in Agriculture, 4:184–208, 2020

    Cedric Okinda, Innocent Nyalala, Tchalla Korohou, Celes- tine Okinda, Jintao Wang, Tracy Achieng, Patrick Wamalwa, Tai Mang, and Mingxia Shen. A review on computer vision systems in monitoring of poultry: A welfare perspective.Ar- tificial Intelligence in Agriculture, 4:184–208, 2020. doi: 10.1016/j.aiia.2020.09.002. URLhttps://doi.org/ 10.1016/j.aiia.202...

  42. [42]

    Caldas-Cueva, Angelos Mauromostakos, and Casey M

    Juan P. Caldas-Cueva, Angelos Mauromostakos, and Casey M. Owens. Use of image analysis to identify woody breast characteristics in 8-week-old broiler carcasses.Poul- try Science, 100(4):100977, 2021. doi: 10.1016/j.psj.2020. 12.003. URLhttps://doi.org/10.1016/j.psj. 2020.12.003. 3, 8

  43. [43]

    White striping degree assess- ment using computer vision system and consumer accep- tance test.Asian-Australasian Journal of Animal Sciences, 32(7):1015–1026, 2019

    Talita Kato, Saulo Martiello Mastelini, Gabriel Fillipe Cen- tini Campos, Ana Paula Ayub da Costa Barbon, Sandra He- lena Prudencio, Massami Shimokomaki, Adriana Lourenc ¸o Soares, and Sylvio Barbon Jr. White striping degree assess- ment using computer vision system and consumer accep- tance test.Asian-Australasian Journal of Animal Sciences, 32(7):1015–1...

  44. [44]

    https://doi.org/https://doi.org/10.1016/j

    Ebenezer Obaloluwa Olaniyi, Yuzhen Lu, Jiaxu Cai, Anu- raj Theradiyil Sukumaran, Tessa Jarvis, and Clinton Rowe. Feasibility of imaging under structured illumination for eval- uation of white striping in broiler breast fillets.Journal of Food Engineering, 342:111359, 2023. doi: 10.1016/j. jfoodeng.2022.111359. URLhttps://doi.org/10. 1016/j.jfoodeng.2022.1...

    [45] Alan McIntyre, Matt Kallada, Cesar G. Miguel, and Carolina Feher de Silva. neat-python, 2019. URL https://github.com/CodeReclaimers/neat-python.

    [46] Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. LightGBM: A highly efficient gradient boosting decision tree. In Advances in Neural Information Processing Systems, pages 3146–3154, 2017. URL https://proceedings.neurips.cc/paper/2017/hash/6449f44a102fde848669bdd9eb6b76fa-Abstract.html.

    [47] Nitesh V. Chawla, Kevin W. Bowyer, Lawrence O. Hall, and W. Philip Kegelmeyer. SMOTE: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16:321–357, 2002. doi: 10.1613/jair.953. URL https://doi.org/10.1613/jair.953.

    [48] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In Proceedings of the 3rd International Conference on Learning Representations (ICLR), 2015. URL https://arxiv.org/abs/1409.0473.

    [49] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (GELUs). arXiv preprint arXiv:1606.08415, 2016. URL https://arxiv.org/abs/1606.08415.

    [50] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on Machine Learning (ICML), pages 448–456, 2015. URL https://proceedings.mlr.press/v37/ioffe15.html.

    [52] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014. URL https://jmlr.org/papers/v15/srivastava14a.html.

    [54] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In Proceedings of the International Conference on Learning Representations (ICLR), 2019. URL https://openreview.net/forum?id=Bkg6RiCqY7.

    [55] John A. Nelder and Roger Mead. A simplex method for function minimization. The Computer Journal, 7(4):308–313, 1965. doi: 10.1093/comjnl/7.4.308. URL https://doi.org/10.1093/comjnl/7.4.308.

    [56] Adam Paszke, Sam Gross, Francisco Massa, et al. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pages 8024–8035, 2019. URL https://proceedings.neurips.cc/paper/2019/hash/bdbca288fee7f92f2bfa9f7012727740-Abstract.html.

    [57] Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, et al. scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011. URL https://jmlr.org/papers/v12/pedregosa11a.html.

    [59] Qian-Yi Zhou, Jaesik Park, and Vladlen Koltun. Open3D: A modern library for 3D data processing. arXiv preprint arXiv:1801.09847, 2018. URL https://arxiv.org/abs/1801.09847.

    [60] Léo Grinsztajn, Edouard Oyallon, and Gaël Varoquaux. Why do tree-based models still outperform deep learning on typical tabular data? Advances in Neural Information Processing Systems, 35:507–520, 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/hash/0378c7692da36807bdec87ab043cdadc-Abstract-Datasets_and_Benchmarks.html.

    [61] Serkan Kiranyaz, Onur Avci, Osama Abdeljaber, Turker Ince, Moncef Gabbouj, and Daniel J. Inman. 1D convolutional neural networks and applications: A survey. Mechanical Systems and Signal Processing, 151:107398, 2021. doi: 10.1016/j.ymssp.2020.107398. URL https://doi.org/10.1016/j.ymssp.2020.107398.

    [62] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008, 2017. URL https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html.