Optimizing Deep Learning Photometric Redshifts for the Roman Space Telescope with HST/CANDELS
Pith reviewed 2026-05-16 02:21 UTC · model grok-4.3
The pith
A new semi-supervised model, PITA, outperforms existing photometric-redshift methods by training on labeled redshifts together with all available images and colors.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
PITA (Photo-z Inference with a Triple-task Algorithm) outperforms template-based, classical machine-learning, fully-supervised, and self-supervised photo-z methods on HST/CANDELS imaging. It does so by training with a three-part loss that incorporates images and colors for every object and redshifts when they are available, resulting in a latent space that varies smoothly with magnitude, color, and redshift and maintains high accuracy even when the labeled training set is substantially reduced.
What carries the argument
PITA's three-part loss function that jointly optimizes image reconstruction, color reconstruction, and redshift prediction on both labeled and unlabeled data to enforce smoothness in the learned latent space.
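The review gives the loss only at this high level; a minimal NumPy sketch of one plausible form, assuming mean-squared-error terms and hypothetical weights `w_img`, `w_col`, `w_z` (the names and weighting scheme are illustrative, not taken from the paper):

```python
import numpy as np

def triple_task_loss(img_true, img_pred, col_true, col_pred,
                     z_true, z_pred, labeled_mask,
                     w_img=1.0, w_col=1.0, w_z=1.0):
    """Semi-supervised three-part loss (sketch, not the paper's equation).

    Image and color reconstruction terms are averaged over ALL objects;
    the redshift term is averaged only over objects with a spectroscopic
    label (labeled_mask == True), so unlabeled data still shape the
    latent space through the reconstruction terms.
    """
    l_img = np.mean((img_true - img_pred) ** 2)
    l_col = np.mean((col_true - col_pred) ** 2)
    if labeled_mask.any():
        l_z = np.mean((z_true[labeled_mask] - z_pred[labeled_mask]) ** 2)
    else:
        l_z = 0.0  # batch with no labels: purely self-supervised step
    return w_img * l_img + w_col * l_col + w_z * l_z
```

In a real pipeline these terms would be computed on network outputs inside the training loop; the point of the sketch is only the masking that lets labeled and unlabeled objects share one objective.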
If this is right
- Semi-supervised deep learning extracts useful information from the hundreds of millions of Roman images and colors that lack redshift labels.
- PITA maintains superior accuracy even when the spectroscopic training set is cut by a large factor.
- Self-supervised training alone produces a latent space with large color and redshift fluctuations that degrade photo-z performance.
- Both fully-supervised and semi-supervised deep networks beat traditional template and photometry-based methods on space-based imaging.
Where Pith is reading between the lines
- The same triple-task structure could be adapted to other sparse-label inference problems in astronomy such as morphological classification or transient detection.
- Testing PITA on Roman-specific image simulations with realistic noise would be the clearest next step to confirm generalization.
- If the smoothness property proves robust, PITA-style models might reduce the number of spectroscopic follow-up observations needed for large surveys.
Load-bearing premise
That gains measured on HST/CANDELS data will carry over to the noise, depth, and resolution properties of actual Roman Space Telescope observations without domain shift or overfitting.
What would settle it
Direct comparison of PITA's photo-z scatter and outlier fraction against competing methods on a set of simulated Roman images whose true redshifts are known.
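Such a comparison would rest on the field's standard point-estimate metrics. A short sketch of the usual definitions (median bias, normalized-MAD scatter, and an outlier fraction with the commonly used |Δz|/(1+z) > 0.15 cut; the exact cut used by the paper is not stated here):

```python
import numpy as np

def photoz_metrics(z_phot, z_spec, outlier_cut=0.15):
    """Standard photo-z quality metrics on a test set with known redshifts.

    dz is the normalized residual (z_phot - z_spec) / (1 + z_spec);
    sigma_NMAD is the normalized median absolute deviation, robust to
    catastrophic outliers; eta is the catastrophic-outlier fraction.
    """
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    bias = np.median(dz)
    sigma_nmad = 1.4826 * np.median(np.abs(dz - bias))
    eta = np.mean(np.abs(dz) > outlier_cut)
    return bias, sigma_nmad, eta
```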
Original abstract
Photometric redshifts (photo-$z$'s) will be crucial for studies of galaxy evolution, large-scale structure, and transients with the Nancy Grace Roman Space Telescope. Deep learning methods leverage pixel-level information from ground-based images to achieve the best photo-$z$'s for low-redshift galaxies, but their efficacy at higher redshifts with deep, space-based imaging remains largely untested. We used Hubble Space Telescope CANDELS optical and near-infrared imaging to evaluate fully-supervised, self-supervised, and semi-supervised deep learning photo-$z$ algorithms out to $z\sim3$. Compared to template-based and classical machine learning photometry methods, the fully-supervised and semi-supervised models achieved better performance. Our new semi-supervised model, PITA (Photo-$z$ Inference with a Triple-task Algorithm), outperformed all others by learning from unlabeled and labeled data through a three-part loss function that incorporates images and colors for all objects as well as redshifts when available. PITA produces a latent space that varies smoothly in magnitude, color, and redshift, resulting in the best photo-$z$ performance even when the redshift training set was significantly reduced. In contrast, the self-supervised approach produced a latent space with significant color and redshift fluctuations that hindered photo-$z$ inference. Looking forward to Roman, we recommend using semi-supervised deep learning to take full advantage of the information contained in the hundreds of millions of high-resolution images and color measurements, together with the limited redshift measurements available, to achieve the most accurate photo-$z$ estimates for both faint and bright sources.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces PITA, a new semi-supervised deep learning model for photometric redshift estimation that employs a triple-task loss function incorporating images and colors for all objects plus redshifts when available. Using HST/CANDELS optical/NIR imaging, it compares fully-supervised, self-supervised, and semi-supervised approaches against template-based and classical ML photometry methods, claiming that PITA achieves the best performance up to z~3, produces a smoother latent space in magnitude/color/redshift, and maintains accuracy even with substantially reduced labeled training data. The authors recommend semi-supervised deep learning for the Roman Space Telescope to exploit its hundreds of millions of high-resolution images alongside limited spectroscopic redshifts.
Significance. If the empirical results on CANDELS hold, the work is significant for Roman photo-z applications because it demonstrates that semi-supervised methods can leverage abundant unlabeled imaging data to improve accuracy and robustness when spectroscopic labels are sparse. The explicit comparison of supervision regimes and the emphasis on latent-space smoothness provide concrete guidance for survey pipelines that must handle both faint and bright sources.
major comments (2)
- [Abstract and concluding section] Abstract and final paragraph: the forward recommendation for Roman rests on the untested assumption that performance gains and latent-space smoothness observed on CANDELS will survive the shift to Roman's imaging properties (different NIR filter set, wider field, distinct PSF and noise). No domain-adaptation experiment, filter-convolution test, or simulated Roman catalog is described, making this a load-bearing claim for the paper's primary application.
- [Results section] Results section (performance tables and reduced-label experiments): the abstract asserts that PITA 'outperformed all others' and remained best 'even when the redshift training set was significantly reduced,' yet the provided text supplies no quantitative metrics (bias, σ, outlier fraction, R²), error bars, or exact label fractions used in the ablation. Without these numbers the magnitude and statistical significance of the claimed gains cannot be assessed.
minor comments (2)
- [Methods] The three-part loss function is described only at a high level; an explicit equation (or pseudocode) showing the weighting between the image reconstruction, color, and redshift terms would improve reproducibility.
- [Results] Latent-space visualizations should include quantitative measures of smoothness (e.g., gradient norms or correlation lengths in redshift) rather than relying solely on qualitative inspection.
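One concrete way to quantify the smoothness this comment asks for is the mean redshift difference between nearest neighbors in latent space. A brute-force sketch (the k-NN "roughness" statistic and its name are illustrative, not from the paper; O(N²) distances are fine for a few thousand latent vectors):

```python
import numpy as np

def latent_redshift_roughness(latents, redshifts, k=5):
    """Mean |dz| between each object and its k nearest latent neighbors.

    A smooth latent space (nearby points have similar redshifts) gives
    a small value; large values flag the fluctuations the referee asks
    to quantify.
    """
    # Pairwise Euclidean distances; exclude self-matches on the diagonal.
    d = np.linalg.norm(latents[:, None, :] - latents[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    idx = np.argsort(d, axis=1)[:, :k]          # k nearest neighbors
    return np.mean(np.abs(redshifts[idx] - redshifts[:, None]))
```

For large catalogs a tree-based neighbor search (e.g. scikit-learn's `NearestNeighbors`) would replace the brute-force distance matrix.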
Simulated Author's Rebuttal
We thank the referee for their constructive feedback on our manuscript. We have addressed each of the major comments in detail below and made revisions to the manuscript accordingly.
Point-by-point responses
- Referee: [Abstract and concluding section] the forward recommendation for Roman rests on the untested assumption that performance gains and latent-space smoothness observed on CANDELS will survive the shift to Roman's imaging properties (different NIR filter set, wider field, distinct PSF and noise). No domain-adaptation experiment, filter-convolution test, or simulated Roman catalog is described, making this a load-bearing claim for the paper's primary application.
Authors: We agree that the performance on CANDELS does not automatically guarantee identical gains for Roman due to differences in imaging characteristics. CANDELS serves as the closest current analog to Roman's space-based NIR imaging, allowing us to test the semi-supervised approach in a relevant regime. In the revised manuscript, we have added a new subsection in the discussion that explicitly compares the properties of CANDELS and Roman, highlights potential differences, and includes stronger caveats on the extrapolation. We have revised the abstract and concluding section to frame our recommendation as a suggestion for future pipelines based on the demonstrated advantages, rather than a definitive claim. We note that full validation would require Roman-specific simulations, which we propose as future work. revision: partial
- Referee: [Results section, performance tables and reduced-label experiments] the abstract asserts that PITA 'outperformed all others' and remained best 'even when the redshift training set was significantly reduced,' yet the provided text supplies no quantitative metrics (bias, σ, outlier fraction, R²), error bars, or exact label fractions used in the ablation. Without these numbers the magnitude and statistical significance of the claimed gains cannot be assessed.
Authors: The referee is correct that the main text as presented lacks explicit numerical values for the performance metrics and exact label fractions in the narrative, although the tables contain them. To address this, we have revised the results section and abstract to include specific quantitative summaries, such as the values of bias, scatter, outlier rates, and R² for key comparisons, along with the label fractions tested (10%, 25%, 50%, and 100%). Error bars from bootstrapping or cross-validation are now referenced in the text. These changes make the magnitude of the improvements clear without altering the underlying results. revision: yes
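The bootstrap error bars the authors mention can be obtained generically; a minimal sketch (not the authors' pipeline) that resamples per-object values with replacement and reports the spread of a scalar metric:

```python
import numpy as np

def bootstrap_metric(values, metric, n_boot=1000, seed=0):
    """Point estimate and bootstrap standard error of a scalar metric.

    `values` holds one number per object (e.g. normalized redshift
    residuals); `metric` is any callable reducing them to a scalar
    (np.mean, np.median, an outlier-fraction lambda, ...).
    """
    rng = np.random.default_rng(seed)
    n = len(values)
    stats = np.array([metric(values[rng.integers(0, n, n)])
                      for _ in range(n_boot)])
    return metric(values), stats.std()
```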
Circularity Check
No circularity: empirical ML model comparison on external CANDELS benchmark
Full rationale
The paper presents an empirical comparison of fully-supervised, self-supervised, and semi-supervised deep learning models for photometric redshift estimation on HST/CANDELS imaging data. PITA is introduced as a new triple-task semi-supervised architecture whose performance is measured directly against baselines on held-out labeled data; no mathematical derivation, uniqueness theorem, or fitted parameter is claimed to predict the target metric. All reported gains are statistical outcomes of training and evaluation on the same external dataset, with no self-citation chain or ansatz that reduces the central claim to its own inputs by construction. Generalization to Roman is stated as a forward recommendation rather than a derived result.