pith. machine review for the scientific record.

arxiv: 2604.26517 · v1 · submitted 2026-04-29 · 💻 cs.CV · q-bio.CB

Recognition: unknown

MTCurv: Deep learning for direct microtubule curvature mapping in noisy fluorescence microscopy images

Authors on Pith: no claims yet

Pith reviewed 2026-05-07 13:51 UTC · model grok-4.3

classification 💻 cs.CV q-bio.CB
keywords microtubule · curvature estimation · deep learning · fluorescence microscopy · U-Net · regression · segmentation-free · synthetic data

The pith

A neural network directly predicts microtubule curvature maps from noisy fluorescence images without segmentation.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper seeks to establish that microtubule curvature can be estimated accurately at each pixel directly from raw, noisy microscope images by training a deep network on synthetic examples. This matters because microtubule shape reflects filament stiffness and how cells respond to mechanical stress or disease, but traditional methods require first outlining the filaments, which introduces errors in low-contrast or noisy conditions. The authors adapt a residual U-Net with attention gates and add a loss term that penalizes inconsistent gradients in the predicted curvature field, keeping predictions smooth and realistic. On two test sets of increasing difficulty, the model recovers local curvatures reliably, and the work finds that Spearman correlation tracks prediction quality better than common perceptual image metrics. The result is both a usable software tool for biologists studying filament mechanics and guidance on designing regression networks for geometric properties.

Core claim

MTCurv is a segmentation-free deep learning method that regresses pixel-wise microtubule curvature values from noisy fluorescence microscopy images by training an attention-based residual U-Net on synthetic data annotated with ground-truth curvatures and using a gradient-aware loss to enforce spatial coherence in the output maps.

What carries the argument

Attention-based residual U-Net performing direct regression of curvature values, trained with a loss that combines mean squared error and a gradient consistency term to reduce hallucinations and enforce coherence.
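The combined loss can be sketched in a few lines. The weighting factor `lam` and the finite-difference gradient operator below are assumptions, since the paper's exact formulation is not reproduced here:

```python
import numpy as np

def gradient_aware_loss(pred, target, lam=0.1):
    """Sketch of an MSE + gradient-consistency loss for curvature maps.

    `lam` is a hypothetical weighting factor; the paper's actual value
    and choice of gradient operator may differ.
    """
    mse = np.mean((pred - target) ** 2)
    # Finite-difference gradients of predicted and target curvature fields
    gy_p, gx_p = np.gradient(pred)
    gy_t, gx_t = np.gradient(target)
    # Penalize mismatched spatial gradients to enforce coherent maps
    grad_term = np.mean((gx_p - gx_t) ** 2 + (gy_p - gy_t) ** 2)
    return mse + lam * grad_term
```

In a training loop this would be written in a differentiable framework (e.g. PyTorch) rather than NumPy; the NumPy form only makes the two terms explicit.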

If this is right

  • Microtubule curvature can be quantified reliably even when background fluorescence or partial visibility would cause segmentation-based methods to fail.
  • Correlation coefficients, especially Spearman rank correlation, serve as more appropriate evaluation metrics for curvature regression than standard perceptual or blind image quality scores.
  • Both residual skip connections in the encoder and attention mechanisms in the decoder contribute measurably to accurate curvature recovery.
  • The approach yields a practical, open-source tool for analyzing filament geometry in cellular imaging studies.
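The metric claim in the second bullet can be made concrete with a minimal rank-correlation sketch (ties are ignored here; a real evaluation would use `scipy.stats.spearmanr`):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman correlation as Pearson correlation of ranks (no tie handling)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))
```

Because ranks discard absolute scale, this metric rewards predictions whose curvature ordering is correct even when the values carry a systematic bias, which is plausibly why it tracks prediction quality better than perceptual image scores.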

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Direct regression frameworks like this could be extended to other biological curvilinear structures such as actin filaments or neuronal processes where segmentation is equally difficult.
  • By removing the segmentation bottleneck, the method opens the way to real-time or high-throughput analysis of cytoskeletal dynamics in live-cell imaging.
  • Lessons on metric selection for geometry-aware regression may apply to related tasks like vessel curvature estimation in medical angiography.
  • If the synthetic data generation is made more realistic, generalization performance on diverse imaging conditions could improve further.

Load-bearing premise

That training exclusively on synthetic images with perfect pixel-wise curvature ground truth produces a model that generalizes to real experimental fluorescence images containing uncharacterized noise and artifacts.

What would settle it

Manual tracing of microtubules in real high-SNR images followed by independent curvature calculation that shows large systematic differences from the network's predicted curvature maps in low-contrast regions.
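The independent curvature calculation such a check would rely on can be sketched with the standard parametric curvature formula applied to a traced polyline; the actual tracing and smoothing pipeline is not specified here, so this is only a schematic:

```python
import numpy as np

def polyline_curvature(x, y):
    """Unsigned curvature along a traced filament polyline.

    Uses kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2), which is
    invariant to how the trace is parameterized. Endpoints rely on
    one-sided differences and are less accurate.
    """
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return np.abs(dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
```

On a traced circular arc of radius R this recovers kappa ≈ 1/R at interior points, giving a reference value to compare against the network's predicted maps in the same regions.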

Figures

Figures reproduced from arXiv: 2604.26517 by Achraf Ait Laydi, Hélène Bouvrais, Sidi Mohamed Sid'El Moctar, Yousef El Mourabit.

Figure 3
Figure 3. Curvature predictions and error maps using MSE loss. (A) Ground truth; (B, C) predictions on the MicSim_FluoCurv simple and complex datasets; and error maps for the (D-F) simple and (G-I) complex predictions: differential error, root squared error, and curvature error. We compared the error maps obtained for predictions on the simple and complex datasets to identify the most discriminative representations… view at source ↗
read the original abstract

Accurate quantification of the geometry of curvilinear biological structures is essential for understanding cellular mechanics and disease-related morphological alterations. Microtubule curvature is a key descriptor of filament rigidity and mechanical perturbations. However, reliable curvature extraction from fluorescence microscopy images remains challenging due to noise, low contrast, and partial filament visibility. Existing approaches rely on segmentation pipelines with pre or post-processing, which are highly sensitive to segmentation errors and often fail under adverse imaging conditions. In this work, we propose MTCurv, a deep learning framework for direct, segmenta-tion-free regression of microtubule curvature maps from noisy microscopy images. Leveraging a synthetic dataset with pixel-wise curvature annotations, we reformulated curvature estimation as a regression problem and adapted an attention-based residual U-Net. To reduce hallucinations and enforce spatial coherence, we introduced a gradient-aware loss combining Mean Squared Error with a gradient consistency term. Beyond model and loss design, we evaluated commonly used regression and image quality metrics, revealing that many perceptual and blind metrics are poorly suited for curvature estimation. Correlation-based metrics, particularly Spearman correlation, emerged as more reliable indicators of curvature prediction quality. Experiments on two datasets of increasing difficulty demonstrated that MTCurv accurately recovers local microtubule curvatures, even in the presence of background fluorescence. Ablation studies highlighted the contribution of both residual encoding and attention-based decoding. Overall, this work provides a practical tool for filament curvature analysis and methodological insights for geometry-aware regression in biomedical imaging. Datasets and code are made available.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript proposes MTCurv, an attention-based residual U-Net trained on synthetic data with pixel-wise curvature annotations to perform direct, segmentation-free regression of local microtubule curvature maps from noisy fluorescence microscopy images. It introduces a gradient-aware loss (MSE plus gradient consistency) to promote spatial coherence, analyzes the suitability of various regression and image-quality metrics (highlighting Spearman correlation), and reports experiments plus ablations on two datasets of increasing difficulty, claiming accurate curvature recovery even in the presence of background fluorescence. Datasets and code are released.

Significance. If the generalization and accuracy claims hold, the work offers a practical alternative to error-prone segmentation pipelines for quantifying curvilinear structures under realistic imaging conditions, with potential utility in cellular mechanics and disease studies. The emphasis on metric selection for geometry-aware regression and the public release of data/code are clear strengths that support reproducibility and broader adoption.

major comments (2)
  1. [Experiments section] The reported results on the two datasets of increasing difficulty support the accuracy claim but omit error bars, standard deviations across runs, and statistical significance tests on the metric values (including Spearman correlations). This weakens the quantitative backing for robustness under background fluorescence and domain shift from synthetic training data.
  2. [Abstract and Experiments section] The central claim that MTCurv 'accurately recovers local microtubule curvatures, even in the presence of background fluorescence' is load-bearing, yet the manuscript does not clarify whether the real fluorescence images possessed quantitative ground-truth curvature annotations or whether evaluation relied on qualitative inspection or indirect proxies. Without this, the domain-transfer assumption remains insufficiently tested.
minor comments (2)
  1. [Abstract] 'segmenta-tion-free' contains a typographical hyphenation error and should read 'segmentation-free'.
  2. [Method section] The loss function description would benefit from explicit details on how the gradient-consistency weighting factor is selected and whether sensitivity analysis was performed.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their thoughtful review and constructive comments on our manuscript. We address each of the major comments below and outline the revisions we will make to strengthen the paper.

read point-by-point responses
  1. Referee: Experiments section: the reported results on the two datasets of increasing difficulty support the accuracy claim but omit error bars, standard deviations across runs, or statistical significance tests on the metric values (including Spearman correlations). This weakens the quantitative backing for robustness under background fluorescence and domain shift from synthetic training data.

    Authors: We acknowledge that the absence of error bars, standard deviations, and statistical tests limits the strength of our quantitative claims. To address this, we will conduct additional experiments by training the model with multiple random seeds and report the mean and standard deviation for all reported metrics, including the Spearman correlation. We will also perform statistical significance tests (e.g., paired t-tests or Wilcoxon tests) to compare methods where appropriate. These additions will be included in the revised Experiments section. revision: yes

  2. Referee: Abstract and Experiments section: the central claim that MTCurv 'accurately recovers local microtubule curvatures, even in the presence of background fluorescence' is load-bearing, yet the manuscript does not clarify whether the real fluorescence images possessed quantitative ground-truth curvature annotations or whether evaluation relied on qualitative inspection or indirect proxies. Without this, the domain-transfer assumption remains insufficiently tested.

    Authors: The referee raises a valid point regarding the evaluation on real data. Our synthetic dataset provides pixel-wise ground-truth curvature annotations for quantitative evaluation. However, the real fluorescence microscopy dataset does not have such quantitative annotations, as they are not readily available. Evaluation on this dataset was performed through qualitative visual inspection of the predicted curvature maps, assessing coherence with expected microtubule structures and robustness to background fluorescence. We will revise the abstract and the Experiments section to explicitly distinguish between quantitative results on synthetic data and qualitative assessment on real data. This will better clarify the domain transfer evaluation and avoid overstatement of the accuracy claims for real images. revision: yes

Circularity Check

0 steps flagged

No circularity in empirical deep learning pipeline

full rationale

The paper presents an empirical DL regression framework (attention residual U-Net with gradient-aware loss) trained on synthetic pixel-wise curvature labels and evaluated on held-out synthetic and real fluorescence datasets. No mathematical derivation chain exists; performance claims rest on experimental metrics (Spearman correlation, ablation studies) rather than any prediction or result that reduces to its own inputs by construction. The evaluation is grounded in external benchmark datasets and does not invoke self-citations or ansatzes as load-bearing premises.

Axiom & Free-Parameter Ledger

2 free parameters · 1 axiom · 0 invented entities

The central claim rests on the fidelity of the synthetic dataset matching real image statistics and the effectiveness of the gradient consistency term in the loss for spatial coherence.

free parameters (2)
  • U-Net model weights
    Learned from the synthetic training dataset to fit the regression task.
  • loss weighting factor
    Balance between MSE and gradient consistency term chosen to reduce hallucinations.
axioms (1)
  • domain assumption: Synthetic data distribution sufficiently matches real noisy fluorescence microscopy images for generalization
    Invoked to justify training on synthetic data and testing on real datasets.

pith-pipeline@v0.9.0 · 5585 in / 1272 out tokens · 51286 ms · 2026-05-07T13:51:19.997555+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.

Reference graph

Works this paper leans on

42 extracted references · 5 canonical work pages · 1 internal anchor
