Investigation of cardinality classification for bacterial colony counting using explainable artificial intelligence
Pith reviewed 2026-05-10 02:08 UTC · model grok-4.3
The pith
Explainable AI shows that high visual similarity between colony classes blocks further gains in bacterial counting accuracy.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Applying XAI techniques to MicrobiaNet demonstrates that high visual similarity across cardinality classes in the colony images is the dominant factor preventing accurate classification of groups with three or more individuals, rather than shortcomings in the network or training procedure; this revises prior assertions that the model itself was the primary obstacle.
What carries the argument
Explainable AI analysis of the MicrobiaNet cardinality classifier to isolate the role of visual similarity in classification errors
If this is right
- Models that directly incorporate measures of visual similarity between classes should yield higher accuracy on high-cardinality colony images.
- Density estimation methods may outperform direct cardinality classification when objects within an image are visually similar.
- The same visual-similarity bottleneck likely affects other neural-network classifiers trained on imbalanced image datasets.
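The density-estimation alternative mentioned above can be made concrete: instead of classifying a crop into a count class, a model regresses a per-pixel density map whose integral is the count. The sketch below is not from the paper — it is a plain-NumPy illustration with hypothetical annotated centres — showing how the ground-truth density map such methods train against is built: a normalised Gaussian at each object centre, so even touching objects still sum to the right count.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()  # normalised so each object adds exactly 1 to the integral

def density_map(shape, centers, size=15, sigma=3.0):
    """Ground-truth density map: one normalised Gaussian per annotated centre."""
    dmap = np.zeros(shape)
    k = gaussian_kernel(size, sigma)
    half = size // 2
    for r, c in centers:
        # clip the kernel at the image borders
        r0, r1 = max(r - half, 0), min(r + half + 1, shape[0])
        c0, c1 = max(c - half, 0), min(c + half + 1, shape[1])
        dmap[r0:r1, c0:c1] += k[r0 - (r - half):r1 - (r - half),
                                c0 - (c - half):c1 - (c - half)]
    return dmap

# three hypothetical colony centres, two of them touching
centers = [(20, 20), (22, 24), (40, 40)]
dmap = density_map((64, 64), centers)
count = dmap.sum()  # integrating the density recovers the count, overlap and all
```

Because counting becomes an integral rather than a hard class decision, visually similar counts (3 vs. 4 touching colonies) no longer compete as discrete labels.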
Where Pith is reading between the lines
- Testing similarity-aware architectures on existing colony datasets would provide a direct check on whether addressing visual overlap lifts performance.
- The finding may extend to other biological counting tasks where objects overlap or share textures, such as cell or particle enumeration.
- Running the same XAI pipeline on alternative colony-counting networks could test whether visual similarity remains the limiting factor across architectures.
Load-bearing premise
That the explanations produced by the chosen XAI method correctly identify visual similarity as the true cause of errors instead of reflecting artifacts of the XAI technique or dataset.
What would settle it
Train a new classifier that explicitly encodes visual similarity between cardinality classes and measure whether its accuracy on colonies of three or more improves substantially over MicrobiaNet on the same test images.
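One way such an experiment could "explicitly encode visual similarity" is to soften the training targets so probability mass leaks to visually confusable neighbouring counts. The sketch below is an illustration of that idea, not the paper's method; the geometric-decay weighting and the six-class setup are assumptions.

```python
import numpy as np

def soft_targets(true_class, n_classes, sim):
    """Soften one-hot labels toward visually similar (adjacent-count) classes.

    `sim` controls how much probability mass leaks to neighbouring
    cardinalities; sim=0.0 recovers ordinary one-hot targets.
    """
    classes = np.arange(n_classes)
    weights = sim ** np.abs(classes - true_class)  # geometric decay with count distance
    return weights / weights.sum()

def cross_entropy(probs, targets, eps=1e-12):
    return -np.sum(targets * np.log(probs + eps))

# hypothetical 6-class setup: counts 1..6, true count = 4 (index 3)
t_hard = soft_targets(3, 6, sim=0.0)  # one-hot
t_soft = soft_targets(3, 6, sim=0.3)  # mass shared with the adjacent counts
```

Training against `t_soft` penalises a 4-vs-5 confusion less than a 4-vs-1 confusion, which is one concrete sense in which a classifier can be made similarity-aware.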
Original abstract
Automatic bacterial colony counting is a highly sought-after technology in modern biological laboratories because it eliminates manual counting effort. Previous work has observed that MicrobiaNet, currently the best-performing cardinality classification model for colony counting, has difficulty distinguishing colonies of three or more individuals. However, it is unclear whether this is due to properties of the data or to inherent characteristics of the MicrobiaNet model. By analysing MicrobiaNet with explainable artificial intelligence (XAI), we demonstrate that XAI can provide insights into how data properties constrain cardinality classification performance in colony counting. Our results show that high visual similarity across classes is the key issue hindering further performance improvement, revising prior assertions about MicrobiaNet. These findings suggest future work should focus on models that explicitly incorporate visual similarity or explore density estimation approaches, with broader implications for neural network classifiers trained on imbalanced datasets.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper applies explainable AI (XAI) techniques to analyze the MicrobiaNet model for bacterial colony cardinality classification. It concludes that high visual similarity across cardinality classes (especially for counts of three or more) is the primary performance bottleneck, revising earlier interpretations of MicrobiaNet's limitations, and recommends future models that explicitly handle similarity or shift to density estimation.
Significance. If the XAI analysis is rigorously validated, the work offers a concrete case study of using post-hoc explanations to diagnose data-driven constraints on CNN performance in imbalanced visual classification tasks. This could inform better practices for colony counting automation and analogous problems in medical imaging or object counting where visual similarity and class imbalance coexist.
Major comments (2)
- [Abstract and Results] The central claim that XAI demonstrates high visual similarity as the key limiter (revising prior MicrobiaNet assertions) lacks reported quantitative validation or controls for XAI artifacts. No fidelity metrics, counterfactual tests, or comparisons across XAI methods (e.g., gradient-based vs. perturbation-based) are described to establish that attributions reflect true data properties rather than method-specific biases or dataset collection artifacts.
- [Discussion] The assumption that XAI explanations reliably isolate visual similarity as the causal factor for errors on cardinality ≥3 is load-bearing but unsupported by explicit tests. Without ablation on similarity-reduced data, performance gains after targeted interventions, or human evaluation of explanations, the conclusion risks conflating correlation in saliency maps with causation.
Minor comments (1)
- [Abstract] The abstract would benefit from naming the specific XAI technique(s) employed and at least one quantitative result (e.g., overlap scores or error correlation) to allow readers to gauge the strength of the visual-similarity finding immediately.
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed review. The comments highlight important aspects of rigor in XAI validation that we will address in the revision. Below we respond point by point to the major comments.
Point-by-point responses
-
Referee: [Abstract and Results] The central claim that XAI demonstrates high visual similarity as the key limiter (revising prior MicrobiaNet assertions) lacks reported quantitative validation or controls for XAI artifacts. No fidelity metrics, counterfactual tests, or comparisons across XAI methods (e.g., gradient-based vs. perturbation-based) are described to establish that attributions reflect true data properties rather than method-specific biases or dataset collection artifacts.
Authors: We agree that the original manuscript relies primarily on qualitative interpretation of XAI attributions without explicit quantitative controls. The analysis used established post-hoc methods to reveal consistent patterns of visual similarity across cardinality classes, which revises earlier model-centric interpretations. To strengthen this, we will add fidelity metrics (e.g., insertion/deletion scores), cross-method comparisons, and controls for potential artifacts in the revised manuscript. revision: yes
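The insertion/deletion fidelity scores promised here can be sketched in a few lines: delete the most-attributed pixels first and track the model score; a faithful attribution drives the score down quickly, giving a low area under the deletion curve. The toy "model" and attribution maps below are illustrative assumptions, not MicrobiaNet or its actual attributions.

```python
import numpy as np

def deletion_auc(model, image, attribution, steps=20):
    """Deletion fidelity: zero out the most-attributed pixels first and
    record the model score after each chunk. Returns a Riemann (mean)
    approximation of the area under the score curve — lower is better."""
    order = np.argsort(attribution.ravel())[::-1]  # most important first
    flat = image.ravel().copy()
    scores = [model(flat.reshape(image.shape))]
    chunk = max(len(order) // steps, 1)
    for i in range(0, len(order), chunk):
        flat[order[i:i + chunk]] = 0.0
        scores.append(model(flat.reshape(image.shape)))
    return float(np.mean(scores))

# toy stand-in model: its "confidence" is the mean intensity of one bright region
mask = np.zeros((8, 8))
mask[2:5, 2:5] = 1.0
model = lambda img: float((img * mask).sum() / mask.sum())
image = mask.copy()

good_attr = mask        # highlights exactly the pixels the model relies on
bad_attr = 1.0 - mask   # highlights irrelevant background first
auc_good = deletion_auc(model, image, good_attr)
auc_bad = deletion_auc(model, image, bad_attr)
```

A faithful map (`good_attr`) collapses the score within a few chunks, while the unfaithful one barely moves it until the end — the gap between the two AUCs is the fidelity signal.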
-
Referee: [Discussion] The assumption that XAI explanations reliably isolate visual similarity as the causal factor for errors on cardinality ≥3 is load-bearing but unsupported by explicit tests. Without ablation on similarity-reduced data, performance gains after targeted interventions, or human evaluation of explanations, the conclusion risks conflating correlation in saliency maps with causation.
Authors: The XAI results demonstrate a strong correlation between highlighted visual features and classification errors for higher cardinalities, supporting the revision of prior assertions. We acknowledge the absence of explicit causal tests such as ablations on similarity-reduced data. Generating such a dataset would require substantial new experimental effort beyond the current scope. We will expand the discussion to clarify the correlational nature of the findings, add suggestions for targeted interventions as future work, and note the value of human evaluation where feasible. revision: partial
Circularity Check
No circularity: XAI analysis applies external methods to pre-existing model and data
Full rationale
The paper applies standard XAI techniques (e.g., saliency or attribution methods) to the existing MicrobiaNet model and colony-counting dataset to interpret why performance drops for cardinality classes ≥3. The central claim—that high visual similarity across classes is the key limiter—is an empirical observation drawn from the resulting attributions rather than a quantity fitted to the data or presupposed by definition. No equations reduce the result to its inputs by construction, no parameters are renamed as predictions, and the analysis does not depend on load-bearing self-citations or uniqueness theorems from the authors' prior work. The derivation chain is therefore self-contained as an interpretive study using independent tools.
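A minimal way to turn "high visual similarity across classes" from an attribution-level impression into a number — an assumption about how one might check it, not the paper's procedure — is to compare per-class mean images directly. The toy data below is synthetic.

```python
import numpy as np

def class_overlap_scores(images, labels):
    """Cosine similarity between per-class mean images: a crude proxy for
    the between-class visual overlap an XAI analysis would surface."""
    classes = sorted(set(labels))
    labels = np.asarray(labels)
    means = np.stack([images[labels == c].mean(axis=0).ravel() for c in classes])
    unit = means / np.linalg.norm(means, axis=1, keepdims=True)
    return classes, unit @ unit.T  # symmetric class-by-class similarity matrix

# synthetic stand-in: "3-colony" and "4-colony" crops sharing almost all texture
rng = np.random.default_rng(0)
base = rng.random((16, 16))
imgs = np.stack([base + 0.05 * rng.random((16, 16)) for _ in range(20)])
labels = [3] * 10 + [4] * 10
classes, sim = class_overlap_scores(imgs, labels)
```

An off-diagonal entry near 1.0 quantifies exactly the kind of class overlap the review identifies as the bottleneck, independently of any particular attribution method.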