Gated Differential Linear Attention: A Linear-Time Decoder for High-Fidelity Medical Segmentation
Recognition: 2 theorem links
Pith reviewed 2026-05-15 17:26 UTC · model grok-4.3
The pith
Gated differential linear attention enables high-fidelity medical image segmentation at linear computational cost.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The central claim is that differential subtraction between two kernelized attention branches, computed over complementary query/key subspaces and refined by a data-dependent gate, produces sharper boundary-critical features for medical segmentation. Fused with a parallel local token-mixing branch built on depthwise convolution and instantiated in a pretrained PVT-based encoder-decoder, the mixer achieves this while preserving linear O(N) complexity.
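The composition described above can be sketched in a few lines of pure Python. This is a minimal illustration, not the paper's exact formulation: the channel split into halves, the scalar `lam`, and the sigmoid gate are all assumptions made for the sketch.

```python
import math

def phi(x):
    # phi(x) = ELU(x) + 1: keeps kernel features strictly positive
    return x + 1.0 if x > 0 else math.exp(x)

def kernel_attn(Q, K, V):
    # O(N) kernelized attention: out_i = phi(q_i) S / (phi(q_i) . z),
    # where S = sum_j phi(k_j) v_j^T and z = sum_j phi(k_j).
    d, dv = len(Q[0]), len(V[0])
    S = [[0.0] * dv for _ in range(d)]
    z = [0.0] * d
    for k_row, v_row in zip(K, V):
        pk = [phi(x) for x in k_row]
        for a in range(d):
            z[a] += pk[a]
            for b in range(dv):
                S[a][b] += pk[a] * v_row[b]
    out = []
    for q_row in Q:
        pq = [phi(x) for x in q_row]
        den = sum(p * s for p, s in zip(pq, z))
        out.append([sum(pq[a] * S[a][b] for a in range(d)) / den
                    for b in range(dv)])
    return out

def gdla_global_path(Q, K, V, gate, lam=0.5):
    # Differential subtraction between two branches over complementary
    # query/key subspaces (here: a simple split into channel halves),
    # followed by a data-dependent sigmoid gate. The half-split, the
    # scalar lam, and the gate form are illustrative assumptions.
    h = len(Q[0]) // 2
    a1 = kernel_attn([q[:h] for q in Q], [k[:h] for k in K], V)
    a2 = kernel_attn([q[h:] for q in Q], [k[h:] for k in K], V)
    sig = lambda x: 1.0 / (1.0 + math.exp(-x))
    return [[sig(g) * (x - lam * y) for x, y, g in zip(r1, r2, gr)]
            for r1, r2, gr in zip(a1, a2, gate)]
```

Both branches run in linear time, so the subtraction and gating leave the overall cost at O(N); a depthwise-convolution branch fused afterward would also be linear in token count.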
What carries the argument
Gated differential linear attention mixer that subtracts outputs of two kernelized attention branches to suppress redundancy before fusing with a local convolution branch.
If this is right
- State-of-the-art accuracy on 2D medical segmentation benchmarks spanning CT, MRI, ultrasound, and dermoscopy.
- Favorable accuracy-efficiency trade-off relative to closely related linear and full-attention baselines.
- Linear O(N) scaling that supports practical clinical deployment on standard hardware.
- Improved boundary preservation by countering the diffuse aggregation common in standard linear attention.
- Seamless integration into pretrained PVT encoder-decoder pipelines without raising asymptotic cost.
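The linear-scaling bullet is easiest to appreciate with token-count arithmetic; a quick sketch under assumed, illustrative PVT-like dimensions:

```python
# Back-of-envelope cost comparison for one attention layer (figures illustrative).
N = (512 // 4) * (512 // 4)   # tokens for a 512x512 image at 4x downsampling: 16384
d = 64                        # channel width (assumed)

quadratic_macs = N * N * d    # softmax attention: ~O(N^2 d) for QK^T and weights @ V
linear_macs = N * d * d       # kernelized attention: ~O(N d^2) for the phi(K)^T V summary

# The ratio is N / d: here, 256x fewer multiply-accumulates for global mixing.
assert quadratic_macs // linear_macs == N // d == 256
```

The gap widens as resolution grows, since N scales with pixel count while d stays fixed.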
Where Pith is reading between the lines
- The subtraction principle may transfer to other dense-prediction tasks such as 3D volume segmentation where token counts grow rapidly.
- Combining the gated differential path with quantization or pruning could push real-time inference speeds further in hospital settings.
- The reliance on complementary subspaces suggests that explicit subspace design choices could be tuned per modality to reduce cross-modality retraining.
- Similar local-global fusion patterns might improve efficiency in related areas like retinal vessel segmentation or histopathology tiling.
Load-bearing premise
The differential subtraction between the two kernelized attention branches reliably suppresses redundant responses while preserving boundary-critical signals across diverse medical imaging modalities without introducing new artifacts.
What would settle it
Segmentation metrics such as Dice score and Hausdorff distance degrade on a held-out medical dataset when the differential subtraction is removed or replaced by simple addition of the two branches.
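Such an ablation verdict is cheap to compute once predicted and reference masks exist. A minimal Dice implementation for binary masks (the function name is illustrative; Hausdorff distance needs spatial machinery and is omitted):

```python
def dice_score(pred, target, eps=1e-8):
    # Dice = 2|P ∩ T| / (|P| + |T|) for binary masks given as flat 0/1 lists.
    inter = sum(p * t for p, t in zip(pred, target))
    return (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

# Perfect overlap scores 1.0; half overlap scores 0.5.
assert abs(dice_score([1, 1, 0, 0], [1, 1, 0, 0]) - 1.0) < 1e-6
assert abs(dice_score([1, 1, 0, 0], [0, 1, 1, 0]) - 0.5) < 1e-6
```

Running this per class with subtraction on, off, and replaced by addition would directly test the load-bearing premise.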
Original abstract
Medical image segmentation requires models that preserve fine anatomical boundaries while remaining practical for clinical deployment. Transformers capture long-range dependencies but incur quadratic attention cost, whereas CNNs are efficient but less effective at global reasoning. Linear attention offers \(\mathcal{O}(N)\) scaling, but often produces diffuse feature aggregation that weakens boundary-sensitive prediction. We introduce a gated differential linear-attention mixer for medical image segmentation. Its global path, Gated Differential Linear Attention (GDLA), performs differential subtraction between two kernelized attention branches over complementary query/key subspaces to suppress redundant responses, and employs a data-dependent gate for token refinement. A parallel local token-mixing branch with depthwise convolution strengthens neighborhood interactions for better refinement, and the two branches are fused while preserving \(\mathcal{O}(N)\) complexity. When instantiated in a pretrained Pyramid Vision Transformer (PVT)-based encoder--decoder model, the proposed model achieves state-of-the-art results on the evaluated 2D medical segmentation benchmarks spanning CT, MRI, ultrasound, and dermoscopy, with a favorable accuracy--efficiency trade-off over closely related baselines. The code is publicly available at https://github.com/xmindflow/gdla.
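The \(\mathcal{O}(N)\) claim rests on the standard kernelized-attention trick (as in [18]): replace softmax similarity with a positive feature map ϕ and reassociate the matrix product. A minimal sketch, assuming the ϕ(x) = ELU(x) + 1 feature map, showing that the quadratic and linear computations produce the same output:

```python
import math

def phi(x):
    # phi(x) = ELU(x) + 1 keeps features positive, so normalization is well-defined
    return x + 1.0 if x > 0 else math.exp(x)

def attn_quadratic(Q, K, V):
    # Explicit N x N weights: out_i = sum_j w_ij v_j with w_ij ∝ phi(q_i) . phi(k_j)
    out = []
    for q in Q:
        pq = [phi(x) for x in q]
        scores = [sum(a * b for a, b in zip(pq, (phi(x) for x in k))) for k in K]
        tot = sum(scores)
        out.append([sum(s * v[b] for s, v in zip(scores, V)) / tot
                    for b in range(len(V[0]))])
    return out

def attn_linear(Q, K, V):
    # Same map, reassociated: build S = phi(K)^T V and z = phi(K)^T 1 once (O(N)),
    # then each query costs O(d * d_v) instead of O(N).
    d, dv = len(Q[0]), len(V[0])
    S = [[0.0] * dv for _ in range(d)]
    z = [0.0] * d
    for k, v in zip(K, V):
        pk = [phi(x) for x in k]
        for a in range(d):
            z[a] += pk[a]
            for b in range(dv):
                S[a][b] += pk[a] * v[b]
    out = []
    for q in Q:
        pq = [phi(x) for x in q]
        den = sum(a * b for a, b in zip(pq, z))
        out.append([sum(pq[a] * S[a][b] for a in range(d)) / den
                    for b in range(dv)])
    return out
```

The two functions agree to floating-point precision, which is the whole trick: the cost drops from quadratic to linear in N without changing the function computed, at the price of the diffuse aggregation the paper then targets with differential subtraction.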
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces Gated Differential Linear Attention (GDLA), a linear-time attention mixer for medical image segmentation. GDLA performs differential subtraction between two kernelized attention branches over complementary query/key subspaces, applies a data-dependent gate for token refinement, and fuses this global path with a parallel local depthwise convolution branch while preserving O(N) complexity. When plugged into a pretrained Pyramid Vision Transformer (PVT) encoder-decoder, the model claims state-of-the-art results on 2D benchmarks spanning CT, MRI, ultrasound, and dermoscopy with a favorable accuracy-efficiency trade-off over baselines. Public code is released.
Significance. If the empirical claims hold after proper validation, the work provides a practical advance in efficient transformer decoders for medical segmentation by targeting the diffuse aggregation problem in linear attention via differential operations. Public code release strengthens reproducibility. However, the absence of any quantitative tables, ablation studies, error analysis, or attention diagnostics in the provided text means the central claim of reliable boundary preservation via differential subtraction remains unverified and cannot yet be assessed for impact.
Major comments (2)
- [GDLA formulation (global path description)] The central empirical claim (SOTA results via differential subtraction preserving boundary signals) rests on an untested assumption. No formal analysis of the subtraction operator, no attention-map diagnostics, and no ablation isolating it from the data-dependent gate or local branch are provided, leaving the weakest assumption unaddressed.
- [Results and Experiments] The abstract states SOTA results and O(N) complexity with favorable trade-off, yet the manuscript provides no quantitative tables, ablation details, or error analysis. Without these, the soundness of the accuracy-efficiency claims cannot be evaluated.
Minor comments (2)
- [Method] Notation for the two kernelized branches and the differential operator should be defined with explicit equations rather than descriptive text only.
- [Abstract] The abstract claims 'favorable accuracy--efficiency trade-off over closely related baselines' but does not name the baselines or report specific metrics (e.g., Dice, FLOPs) in the summary.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback and for recognizing the potential of GDLA as a practical advance in linear-time decoders for medical segmentation. We address each major comment below, clarifying the formulation and committing to strengthen the empirical validation in revision.
Point-by-point responses
Referee: [GDLA formulation (global path description)] The central empirical claim (SOTA results via differential subtraction preserving boundary signals) rests on an untested assumption. No formal analysis of the subtraction operator, no attention-map diagnostics, and no ablation isolating it from the data-dependent gate or local branch are provided, leaving the weakest assumption unaddressed.
Authors: We agree that the current manuscript would be strengthened by explicit analysis of the differential operator. In the revised version we will add: (i) a short derivation showing how subtraction between the two kernelized branches over complementary subspaces suppresses diffuse responses while retaining high-frequency boundary signals; (ii) attention-map visualizations on representative CT/MRI slices; and (iii) targeted ablations that isolate the differential subtraction from the data-dependent gate and the parallel depthwise-convolution branch. These additions will directly address the untested-assumption concern. Revision: yes.
Referee: [Results and Experiments] The abstract states SOTA results and O(N) complexity with favorable trade-off, yet the manuscript provides no quantitative tables, ablation details, or error analysis. Without these, the soundness of the accuracy-efficiency claims cannot be evaluated.
Authors: We acknowledge that the version reviewed did not contain the full experimental section. The complete manuscript includes quantitative tables reporting Dice/IoU scores on the four 2D benchmarks (CT, MRI, ultrasound, dermoscopy), wall-clock latency and FLOPs measurements confirming linear scaling, component-wise ablations, and boundary-error analysis. In revision we will expand these tables, add the missing error analysis, and ensure all numbers are cross-referenced to the new attention diagnostics and ablations mentioned above. Revision: yes.
Circularity Check
No circularity in architectural proposal or empirical claims
Full rationale
The paper introduces GDLA as a novel linear-attention mixer using differential subtraction between kernelized branches, a data-dependent gate, and a parallel depthwise-convolution branch, all fused at O(N) cost. No equations or derivation steps are shown that reduce the claimed boundary preservation or SOTA results to a fitted quantity defined in terms of the target metric, a self-referential definition, or a load-bearing self-citation chain. The performance claims rest on external benchmark evaluations rather than any closed-form reduction to the paper's own inputs, satisfying the criteria for a self-contained contribution.
Axiom & Free-Parameter Ledger
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean · washburn_uniqueness_aczel · tag: unclear
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Paper passage: "performs differential subtraction between two kernelized attention branches over complementary query/key subspaces to suppress redundant responses"
- IndisputableMonolith/Foundation/AlphaCoordinateFixation.lean · costAlphaLog_high_calibrated_iff · tag: unclear
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Paper passage: "ϕ(·) = ELU(·) + 1"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
[1] Walid Al-Dhabyani, Mohammed Gomaa, Hussien Khaled, and Aly Fahmy. Dataset of breast ultrasound images. Data in Brief, 28:104863, 2020.
[2] Reza Azad, Maryam Asadi-Aghbolaghi, Mahmood Fathy, and Sergio Escalera. Attention DeepLabv3+: Multi-level context attention mechanism for skin lesion segmentation. In European Conference on Computer Vision, pages 251–266. Springer, 2020.
[3] Reza Azad, Leon Niggemeier, Michael Huttemann, Amirhossein Kazerouni, Ehsan Khodapanah Aghdam, Yury Velichko, Ulas Bagci, and Dorit Merhof. Beyond self-attention: Deformable large kernel attention for medical image segmentation, 2023.
[4] Olivier Bernard, Alain Lalande, Clement Zotti, Frederick Cervenansky, Xin Yang, Pheng-Ann Heng, Irem Cetin, Karim Lekadir, Oscar Camara, Miguel Angel Gonzalez Ballester, et al. Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: is the problem solved? IEEE Transactions on Medical Imaging, 37(11):2514–2525, 2018.
[5] Afshin Bozorgpour, Sina Ghorbani Kolahi, Reza Azad, Ilker Hacihaliloglu, and Dorit Merhof. CENet: Context Enhancement Network for Medical Image Segmentation. In Proceedings of Medical Image Computing and Computer Assisted Intervention (MICCAI 2025). Springer Nature Switzerland, 2025.
[6] Han Cai, Junyan Li, Muyan Hu, Chuang Gan, and Song Han. EfficientViT: Lightweight multi-scale attention for high-resolution dense prediction. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 17302–17313, 2023.
[7] Hu Cao, Yueyue Wang, Joy Chen, Dongsheng Jiang, Xiaopeng Zhang, Qi Tian, and Manning Wang. Swin-Unet: Unet-like pure transformer for medical image segmentation. In Proceedings of the European Conference on Computer Vision Workshops (ECCVW), 2022.
[8] Jieneng Chen, Yongyi Lu, Qihang Yu, Xiangde Luo, Ehsan Adeli, Yan Wang, Le Lu, Alan L Yuille, and Yuyin Zhou. TransUNet: Transformers make strong encoders for medical image segmentation. arXiv preprint arXiv:2102.04306, 2021.
[9] Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), pages 801–818, 2018.
[10] Bo Dong, Wenhai Wang, Deng-Ping Fan, Jinpeng Li, Huazhu Fu, and Ling Shao. Polyp-PVT: Polyp segmentation with pyramid vision transformers. arXiv preprint arXiv:2108.06932, 2021.
[11] Deng-Ping Fan, Ge-Peng Ji, Tao Zhou, Geng Chen, Huazhu Fu, Jianbing Shen, and Ling Shao. PraNet: Parallel reverse attention network for polyp segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 263–273. Springer, 2020.
[12] Mustansar Fiaz, Mubashir Noman, Hisham Cholakkal, Rao Muhammad Anwer, Jacob Hanna, and Fahad Shahbaz Khan. Guided-attention and gated-aggregation network for medical image segmentation. Pattern Recognition, 156:110812, 2024.
[13] Meng-Hao Guo, Cheng-Ze Lu, Zheng-Ning Liu, Ming-Ming Cheng, and Shi-Min Hu. Visual attention network. Computational Visual Media, 9(4):733–752, 2023.
[14] Dongchen Han, Ziyi Wang, Zhuofan Xia, Yizeng Han, Yifan Pu, Chunjiang Ge, Jun Song, Shiji Song, Bo Zheng, and Gao Huang. Demystify Mamba in vision: A linear attention perspective. Advances in Neural Information Processing Systems, 37:127181–127203, 2024.
[15] Moein Heidari, Amirhossein Kazerouni, Milad Soltany, Reza Azad, Ehsan Khodapanah Aghdam, Julien Cohen-Adad, and Dorit Merhof. HiFormer: Hierarchical multi-scale representations using transformers for medical image segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 6202–6212, 2023.
[16] Huimin Huang, Lanfen Lin, Ruofeng Tong, Hongjie Hu, Qiaowei Zhang, Yutaro Iwamoto, Xianhua Han, Yen-Wei Chen, and Jian Wu. UNet 3+: A full-scale connected UNet for medical image segmentation. In ICASSP 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1055–1059. IEEE, 2020.
[17] Xiaohong Huang, Zhifang Deng, Dandan Li, Xueguang Yuan, and Ying Fu. MISSFormer: An effective transformer for 2D medical image segmentation. IEEE Transactions on Medical Imaging, 2022.
[18] Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. Transformers are RNNs: Fast autoregressive transformers with linear attention. In Proceedings of the 37th International Conference on Machine Learning, pages 5156–5165. PMLR, 2020.
[19] Sina Ghorbani Kolahi, Seyed Kamal Chaharsooghi, Toktam Khatibi, Afshin Bozorgpour, Reza Azad, Moein Heidari, Ilker Hacihaliloglu, and Dorit Merhof. MSA2Net: Multi-scale adaptive attention-guided network for medical image segmentation. arXiv preprint arXiv:2407.21640, 2024.
[20] Siyuan Li, Zedong Wang, Zicheng Liu, Cheng Tan, Haitao Lin, Di Wu, Zhiyuan Chen, Jiangbin Zheng, and Stan Z Li. MogaNet: Multi-order gated aggregation network. arXiv preprint arXiv:2211.03295, 2022.
[21] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019.
[22] Ange Lou, Shuyue Guan, and Murray Loew. DC-UNet: rethinking the U-Net architecture with dual channel efficient CNN for medical image segmentation. In Medical Imaging 2021: Image Processing, pages 758–768. SPIE, 2021.
[23] Ange Lou, Shuyue Guan, Hanseok Ko, and Murray H Loew. CaraNet: context axial reverse attention network for segmentation of small medical objects. In Medical Imaging 2022: Image Processing, pages 81–92. SPIE, 2022.
[24] Teresa Mendonça, M Celebi, T Mendonca, and J Marques. PH2: A public database for the analysis of dermoscopic images. Dermoscopy Image Analysis, 2, 2015.
[25] Ozan Oktay, Jo Schlemper, Loic Le Folgoc, Matthew Lee, Mattias Heinrich, Kazunari Misawa, Kensaku Mori, Steven McDonagh, Nils Y Hammerla, Bernhard Kainz, et al. Attention U-Net: Learning where to look for the pancreas. arXiv preprint arXiv:1804.03999, 2018.
[26] Zhen Qin, Xiaodong Han, Weixuan Sun, Dongxu Li, Lingpeng Kong, Nick Barnes, and Yiran Zhong. The devil in linear transformer. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 7025–7041, Abu Dhabi, United Arab Emirates, 2022. Association for Computational Linguistics.
[27] Zihan Qiu, Zekun Wang, Bo Zheng, Zeyu Huang, Kaiyue Wen, Songlin Yang, Rui Men, Le Yu, Fei Huang, Suozhi Huang, et al. Gated attention for large language models: Non-linearity, sparsity, and attention-sink-free. arXiv preprint arXiv:2505.06708, 2025.
[28] Md Mostafijur Rahman and Radu Marculescu. Multi-scale hierarchical vision transformer with cascaded attention decoding for medical image segmentation. In Medical Imaging with Deep Learning (MIDL), 2023.
[29] Md Mostafijur Rahman, Mustafa Munir, and Radu Marculescu. EMCAD: Efficient multi-scale convolutional attention decoding for medical image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11769–11779, 2024.
[30] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234–241. Springer, 2015.
[31] Jiacheng Ruan and Suncheng Xiang. VM-UNet: Vision Mamba UNet for medical image segmentation. arXiv preprint arXiv:2402.02491, 2024.
[32] Noam Shazeer. GLU variants improve transformer. arXiv preprint arXiv:2002.05202, 2020.
[33] Zhuoran Shen, Mingyuan Zhang, Haiyu Zhao, Shuai Yi, and Hongsheng Li. Efficient attention: Attention with linear complexities. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 3531–3539, 2021.
[34] Yao-Hung Hubert Tsai, Shaojie Bai, Makoto Yamada, Louis-Philippe Morency, and Ruslan Salakhutdinov. Transformer dissection: a unified understanding of transformer's attention via the lens of kernel. arXiv preprint arXiv:1908.11775, 2019.
[35] Philipp Tschandl, Cliff Rosendahl, and Harald Kittler. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Scientific Data, 5(1):1–9, 2018.
[36] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems. Curran Associates, Inc., 2017.
[37] Haonan Wang, Peng Cao, Jiaqi Wang, and Osmar R Zaiane. UCTransNet: rethinking the skip connections in U-Net from a channel-wise perspective with transformer. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 2441–2449, 2022.
[38] Hongyi Wang, Shiao Xie, Lanfen Lin, Yutaro Iwamoto, Xian-Hua Han, Yen-Wei Chen, and Ruofeng Tong. Mixed transformer U-Net for medical image segmentation. In ICASSP 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2390–…, 2022.
[39] Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 568–578, 2021.
[40] Enze Xie, Junsong Chen, Junyu Chen, Han Cai, Haotian Tang, Yujun Lin, Zhekai Zhang, Muyang Li, Ligeng Zhu, Yao Lu, et al. SANA: Efficient high-resolution text-to-image synthesis with linear diffusion transformers. In The Thirteenth International Conference on Learning Representations, 2025.
[41] Tianzhu Ye, Li Dong, Yuqing Xia, Yutao Sun, Yi Zhu, Gao Huang, and Furu Wei. Differential transformer. In The Thirteenth International Conference on Learning Representations, 2025.
[42] Yundong Zhang, Huiye Liu, and Qiang Hu. TransFuse: Fusing transformers and CNNs for medical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 14–24. Springer, 2021.
Supplementary material
[43] Gated Differential Linear Attention: A Linear-Time Decoder for High-Fidelity Medical Segmentation (Supplementary Material).
[44] Preliminaries, 6.1 Multi-Head Softmax Attention: Let the input be X ∈ ℝ^{N×d_model}, where N is the number of tokens (e.g., N = HW for a 2D grid) and d_model is the channel width. Softmax attention [36] operates with the following steps. Input linear projections: the input X is linearly transformed into queries Q, keys K, and values V ∈ ℝ^{N×d_k} via learned weight matric...
[45] Experiment Details: This appendix complements Section 4 of the main paper with additional details on datasets, implementation settings, evaluation metrics, and supplementary experimental results. 7.1 Dataset and Implementation Details, Synapse Multi-Organ: We follow the TransUNet protocol [8] on Synapse (30 CT scans), using 18 scans for training and 12 fo...
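The truncated preliminaries excerpt describes standard softmax attention; written out in single-head notation, consistent with the definitions it begins (as in [36]), the steps are:

```latex
Q = X W_Q, \qquad K = X W_K, \qquad V = X W_V,
\qquad W_Q, W_K, W_V \in \mathbb{R}^{d_{\text{model}} \times d_k},
\qquad \operatorname{Attn}(Q, K, V) = \operatorname{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V .
```

The softmax is applied row-wise over the \(N \times N\) score matrix, which is the source of the quadratic cost that kernelized linear attention avoids.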