AMO-ENE: Attention-based Multi-Omics Fusion Model for Outcome Prediction in Extra Nodal Extension and HPV-associated Oropharyngeal Cancer
Pith reviewed 2026-05-10 17:13 UTC · model grok-4.3
The pith
An attention-based fusion model combines CT-derived nodal features with clinical data to predict recurrence and survival in HPV-positive oropharyngeal cancer.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The authors describe an end-to-end system whose core is a hierarchical semi-supervised 3D segmentation stage that detects and delineates imaging-detected extranodal extension (iENE) despite low contrast and annotation variability. Radiomics and learned features extracted from these segmentations feed an ENE grading classifier. These nodal descriptors are then combined with primary tumor characteristics inside an attention-based multi-omics fusion model that produces dynamic outcome predictions, with the entire pipeline evaluated both for the prognostic value of the ENE label and for superiority over baseline staging criteria on an internal patient cohort.
What carries the argument
The attention-based multi-omics fusion model, which uses learned attention weights to integrate radiomics and deep features from segmented nodal extension with clinical variables for outcome forecasting.
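The paper's actual fusion architecture is not specified in the text available here. As an illustration only, the load-bearing idea — learned attention weights that decide how much each modality's embedding contributes to a fused representation — can be sketched as scaled dot-product attention over per-modality feature vectors (all names and dimensions below are hypothetical):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_fuse(modalities, query):
    """Fuse per-modality embeddings via attention weights.

    modalities: dict of name -> 1-D embedding (all the same dimension)
    query: 1-D vector standing in for a learned query
    Returns the fused embedding and the per-modality weights.
    """
    names = list(modalities)
    feats = np.stack([modalities[n] for n in names])   # (M, d)
    scores = feats @ query / np.sqrt(feats.shape[1])   # scaled dot-product
    weights = softmax(scores)                          # (M,), sums to 1
    fused = weights @ feats                            # (d,) weighted combination
    return fused, dict(zip(names, weights))

rng = np.random.default_rng(0)
d = 8  # illustrative embedding size
mods = {
    "nodal_radiomics": rng.normal(size=d),
    "nodal_deep": rng.normal(size=d),
    "clinical": rng.normal(size=d),
}
query = rng.normal(size=d)
fused, weights = attention_fuse(mods, query)
```

The attention weights are nonnegative and sum to one, so they can be read off directly as a rough indicator of how much each modality drives a given prediction — one reason attention-based fusion is attractive for clinical models.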
If this is right
- The automated ENE grading adds measurable prognostic information beyond current staging systems for HPV-positive cases.
- The pipeline removes the need for time-consuming manual contouring of nodal extension on planning CTs.
- Fusing nodal imaging features with primary tumor and clinical data yields stronger forecasts of metastatic recurrence and survival than either source alone.
- The approach lowers barriers to incorporating imaging-detected ENE into routine clinical decision-making for radiation planning.
Where Pith is reading between the lines
- The same fusion structure could be retrained on other head-and-neck sites or imaging modalities to test whether attention-based integration generalizes.
- If the model output is used to adjust radiation fields or chemotherapy intensity, follow-up studies could measure whether those adjustments change actual patient survival.
- Extending the segmentation stage to include primary tumor margins might further tighten the link between automated imaging and treatment personalization.
Load-bearing premise
The semi-supervised segmentation model can reliably trace low-contrast extranodal extension boundaries even when manual labels are inconsistent, and results from this single internal cohort will generalize to new patients without external or prospective checks.
What would settle it
An external validation set or prospective cohort in which the model's predicted ENE status shows poor agreement with expert review or its outcome forecasts no longer outperform standard clinical staging variables.
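The paper does not say which agreement statistic such an external check would use; a common, chance-corrected choice for binary ENE calls versus expert review is Cohen's kappa, sketched here on toy data:

```python
import numpy as np

def cohens_kappa(a, b):
    """Chance-corrected agreement between two binary raters.

    po is raw agreement; pe is the agreement expected by chance
    given each rater's marginal positive rate.
    """
    a, b = np.asarray(a), np.asarray(b)
    po = (a == b).mean()
    pe = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())
    return (po - pe) / (1 - pe)

# hypothetical ENE calls: model vs expert reviewer
model  = np.array([1, 1, 0, 0, 1, 0, 1, 0])
expert = np.array([1, 1, 0, 0, 0, 0, 1, 1])
k = cohens_kappa(model, expert)  # 6/8 raw agreement, kappa = 0.5
```

Raw agreement of 75% shrinks to kappa = 0.5 once chance agreement is removed, which is why kappa (not raw accuracy) is the usual bar for "agreement with expert review."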
Original abstract
Extranodal extension (ENE) is an emerging prognostic factor in human papillomavirus (HPV)-associated oropharyngeal cancer (OPC), although it is currently omitted as a clinical staging criteria. Recent works have advocated for the inclusion of iENE as a prognostic marker in HPV-positive OPC staging. However, several practical limitations continue to hinder its clinical integration, including inconsistencies in segmentation, low contrast in the periphery of metastatic lymph nodes on CT imaging, and laborious manual annotations. To address these limitations, we propose a fully automated end-to-end pipeline that uses computed tomography (CT) images with clinical data to assess the status of nodal ENE and predict treatment outcomes. Our approach includes a hierarchical 3D semi-supervised segmentation model designed to detect and delineate relevant iENE from radiotherapy planning CT scans. From these segmentations, a set of radiomics and deep features are extracted to train an imaging-detected ENE grading classifier. The predicted ENE status is then evaluated for its prognostic value and compared with existing staging criteria. Furthermore, we integrate these nodal features with primary tumor characteristics in a multimodal, attention-based outcome prediction model, providing a dynamic framework for outcome prediction. Our method is validated in an internal cohort of 397 HPV-positive OPC patients treated with radiation therapy or chemoradiotherapy between 2009 and 2020. For outcome prediction at the 2-year mark, our pipeline surpassed baseline models with 88.2% (4.8) in AUC for metastatic recurrence, 79.2% (7.4) for overall survival, and 78.1% (8.6) for disease-free survival. We also obtain a concordance index of 83.3% (6.5) for metastatic recurrence, 71.3% (8.9) for overall survival, and 70.0% (8.1) for disease-free survival, making it feasible for clinical decision making.
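The abstract reports concordance indices alongside AUCs. The paper's exact estimator is not given in this excerpt; the standard definition (Harrell's C-index, which handles right-censoring by only comparing pairs where the earlier time is an observed event) can be sketched as:

```python
import numpy as np

def concordance_index(times, events, risk):
    """Harrell's C-index: fraction of comparable patient pairs in which
    the higher-risk patient fails earlier.

    A pair (i, j) is comparable when times[i] < times[j] and patient i's
    failure is observed (events[i] == 1). Risk ties count as 0.5.
    """
    times, events, risk = map(np.asarray, (times, events, risk))
    concordant = comparable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# toy cohort: risks perfectly ordered against survival time, last patient censored
c = concordance_index(times=[1, 2, 3, 4],
                      events=[1, 1, 1, 0],
                      risk=[4.0, 3.0, 2.0, 1.0])  # C = 1.0
```

A C-index of 0.5 is chance-level ranking; the reported 83.3% for metastatic recurrence would mean the model correctly orders about five of every six comparable patient pairs.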
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript proposes AMO-ENE, a fully automated end-to-end pipeline for HPV-associated oropharyngeal cancer that employs a hierarchical 3D semi-supervised segmentation model on radiotherapy planning CT scans to delineate imaging-detected extranodal extension (iENE), extracts radiomics and deep features for ENE grading, and integrates these with primary tumor characteristics and clinical data via an attention-based multi-omics fusion model to predict 2-year outcomes (metastatic recurrence, overall survival, disease-free survival). The approach is validated on a single internal cohort of 397 patients (2009-2020), reporting AUCs of 88.2% (4.8), 79.2% (7.4), and 78.1% (8.6), respectively, that surpass baselines, along with concordance indices, and concluding that the pipeline is feasible for clinical decision making.
Significance. If the performance generalizes beyond the single internal cohort, the work could support clinical integration of iENE as a prognostic marker by automating segmentation on low-contrast CT images with inconsistent annotations. The attention-based multimodal fusion of nodal and tumor features with clinical data offers a dynamic framework for outcome prediction that addresses practical limitations in current staging. The semi-supervised segmentation component is a targeted response to annotation challenges, though its reliability remains unquantified in the provided description.
major comments (3)
- [Abstract] Abstract: The reported performance metrics (e.g., AUC 88.2% (4.8) for metastatic recurrence) and claim of clinical feasibility rest on an internal 397-patient cohort without any description of cross-validation strategy, train/test split details, handling of class imbalance, or statistical comparison methods against baselines. This is load-bearing for the central superiority and feasibility claims.
- [Abstract] Abstract and Methods (implied): The pipeline depends on the hierarchical 3D semi-supervised segmentation model to reliably delineate low-contrast iENE despite acknowledged annotation inconsistencies, yet no quantitative segmentation metrics (e.g., Dice scores, sensitivity/specificity) or ablation on segmentation accuracy are referenced, which directly affects the validity of downstream radiomics/deep feature extraction and ENE grading.
- [Abstract] Abstract: The validation is limited to a single internal cohort (2009-2020) with no external validation, multi-center testing, or prospective evaluation described, undermining the generalization of the reported AUCs and concordance indices to support claims of clinical decision-making feasibility.
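The segmentation metric the second comment asks for is the Dice coefficient, 2|A∩B| / (|A|+|B|), between predicted and reference masks. A minimal sketch on toy binary masks (the paper's masks are 3D; 2D is used here for brevity):

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|).

    eps guards against division by zero when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# toy example: 4-voxel prediction vs 6-voxel reference, 4 voxels overlap
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True
ref  = np.zeros((4, 4), dtype=bool); ref[1:3, 1:4] = True
d = dice_score(pred, ref)  # 2*4 / (4 + 6) = 0.8
```

Because downstream radiomics and deep features are computed inside these masks, even modest Dice degradation on low-contrast iENE boundaries could propagate into the grading and outcome models — which is why the referee treats this metric as load-bearing.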
minor comments (2)
- [Title] The title uses 'Extra Nodal Extension' which should be standardized to 'Extranodal Extension' for consistency with the abstract and field terminology.
- [Abstract] The abstract reports standard deviations in parentheses (e.g., 4.8) but does not clarify whether these represent standard deviation across folds, bootstraps, or another variability measure.
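If the parenthesized figures are bootstrap standard deviations — one common convention, though the abstract does not say — they would be computed roughly as follows (the AUC is evaluated via its rank/Mann-Whitney formulation on each resample; the data below are synthetic):

```python
import numpy as np

def auc(y, s):
    """AUC via the Mann-Whitney formulation: fraction of (positive,
    negative) pairs where the positive scores higher; ties count 0.5."""
    y, s = np.asarray(y), np.asarray(s)
    diffs = s[y == 1][:, None] - s[y == 0][None, :]
    return (diffs > 0).mean() + 0.5 * (diffs == 0).mean()

def bootstrap_auc(y, s, n_boot=2000, seed=0):
    """Bootstrap mean and standard deviation of the AUC."""
    rng = np.random.default_rng(seed)
    y, s = np.asarray(y), np.asarray(s)
    vals = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if y[idx].min() == y[idx].max():
            continue  # resample lost one class; AUC undefined, skip
        vals.append(auc(y[idx], s[idx]))
    vals = np.array(vals)
    return vals.mean(), vals.std(ddof=1)

# synthetic cohort: 30 events with shifted scores vs 70 non-events
rng = np.random.default_rng(1)
y = np.r_[np.ones(30), np.zeros(70)].astype(int)
s = np.r_[rng.normal(1.0, 1.0, 30), rng.normal(0.0, 1.0, 70)]
mean_auc, sd_auc = bootstrap_auc(y, s)
```

Cross-validation-fold SDs and bootstrap SDs can differ substantially at this cohort size, so the distinction the comment asks for is not cosmetic.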
Simulated Author's Rebuttal
We thank the referee for their constructive feedback on our manuscript. We provide point-by-point responses to the major comments and indicate the revisions made to address them.
Point-by-point responses
Referee: [Abstract] Abstract: The reported performance metrics (e.g., AUC 88.2% (4.8) for metastatic recurrence) and claim of clinical feasibility rest on an internal 397-patient cohort without any description of cross-validation strategy, train/test split details, handling of class imbalance, or statistical comparison methods against baselines. This is load-bearing for the central superiority and feasibility claims.
Authors: We agree that the abstract would benefit from more methodological detail to support the reported metrics. The full manuscript describes these aspects in the Methods section. We have revised the abstract to include a concise summary of the cross-validation strategy, train/test split details, class imbalance handling, and statistical comparison methods against baselines. revision: yes
Referee: [Abstract] Abstract and Methods (implied): The pipeline depends on the hierarchical 3D semi-supervised segmentation model to reliably delineate low-contrast iENE despite acknowledged annotation inconsistencies, yet no quantitative segmentation metrics (e.g., Dice scores, sensitivity/specificity) or ablation on segmentation accuracy are referenced, which directly affects the validity of downstream radiomics/deep feature extraction and ENE grading.
Authors: We concur that quantitative evaluation of the segmentation model is essential. In the revised manuscript, we have incorporated segmentation performance metrics such as Dice scores, sensitivity, and specificity, as well as an ablation analysis on the effect of segmentation accuracy on the subsequent feature extraction and outcome prediction tasks. revision: yes
Referee: [Abstract] Abstract: The validation is limited to a single internal cohort (2009-2020) with no external validation, multi-center testing, or prospective evaluation described, undermining the generalization of the reported AUCs and concordance indices to support claims of clinical decision-making feasibility.
Authors: We recognize the importance of external validation for broader applicability. We have updated the abstract, results, and discussion sections to explicitly note the single-institution nature of the cohort and to moderate the language regarding clinical feasibility, emphasizing the need for further validation. However, we do not have external datasets available for inclusion in this revision. revision: partial
- Not addressed in this revision: external validation on multi-center or prospective data, which is not available in the current study.
Circularity Check
No circularity in derivation chain; pipeline is standard ML training and internal validation
Full rationale
The paper presents a multi-stage pipeline (hierarchical 3D semi-supervised segmentation for iENE, radiomics/deep feature extraction, ENE grading classifier, then attention-based multimodal fusion for outcome prediction) trained and evaluated on one internal 397-patient cohort. No equations, derivations, or self-citations are shown that reduce the reported AUC/C-index values to quantities defined by construction from the same fitted parameters. The abstract and description frame the results as empirical validation of a trained model rather than tautological renaming or self-referential fitting. Absence of full methods text prevents ruling out subtle data leakage, but no load-bearing circular step is exhibited by the provided text.
Axiom & Free-Parameter Ledger
axioms (1)
- Domain assumption: Semi-supervised learning can produce accurate segmentations of low-contrast metastatic lymph nodes on CT despite annotation inconsistencies.