pith. machine review for the scientific record.

arxiv: 2605.07522 · v1 · submitted 2026-05-08 · 💻 cs.CL

Recognition: 2 theorem links · Lean Theorem

WeatherSyn: An Instruction Tuning MLLM For Weather Forecasting Report Generation

Authors on Pith: no claims yet

Pith reviewed 2026-05-11 01:48 UTC · model grok-4.3

classification 💻 cs.CL
keywords weather forecasting · multimodal large language models · instruction tuning · report generation · MLLM · zero-shot generalization · weather reports

The pith

A specialized multimodal model trained on a new weather dataset produces better forecast reports than general-purpose systems.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper defines the Weather Forecasting Report task and builds the first instruction-tuning dataset for it, covering 31 US cities across eight weather aspects. It then trains WeatherSyn, an MLLM fine-tuned specifically for turning multi-source weather data into structured reports. On the authors' evaluation, WeatherSyn beats leading closed-source MLLMs on standard metrics, with the largest gains on structurally complex aspects, and it maintains performance when tested on different regions without further training. This approach addresses the inefficiency of manual weather report writing, which currently burdens forecasters with data overload. The result supplies a concrete starting point for domain-adapted MLLMs that could support daily planning and agricultural decisions.

Core claim

We propose the Weather Forecasting Report task, construct the first instruction-tuning dataset covering 31 American cities and eight weather aspects, and develop WeatherSyn, the first MLLM specialized for this task. On our dataset, WeatherSyn consistently outperforms leading closed-source MLLMs across multiple metrics, with particular strength on structurally complex weather aspects, and it exhibits strong transferability to different geographic regions, indicating zero-shot generalization capability.

What carries the argument

Instruction tuning of an MLLM on a custom dataset built for the Weather Forecasting Report task, which maps multi-source weather inputs to structured natural-language reports.
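As a rough illustration of what "mapping multi-source weather inputs to structured reports" could mean at the data level, here is a minimal sketch of one instruction-tuning record. The field names, file path, and observation values are assumptions for illustration, not the paper's actual schema:

```python
# Hypothetical instruction-tuning record for a weather-report task.
# All field names and values are illustrative, not the paper's schema.
record = {
    "instruction": (
        "You are a weather forecaster. Using the rendered field map and "
        "station observations below, write a structured report covering "
        "temperature, precipitation, and wind for Day 1."
    ),
    "inputs": {
        "city": "Blacksburg, VA",
        "date": "2022-04-13",
        "visual": "figures/era5_patch_20220413.png",  # hypothetical path
        "observations": {"t2m_c": 14.2, "wind_kt": 12, "precip_mm": 0.0},
    },
    "output": (
        "A strong cold front Wednesday night brings more showers and "
        "isolated thunderstorms."
    ),
}

def to_chat_example(rec):
    """Flatten a record into the (prompt, target) pair used in supervised tuning."""
    prompt = (
        f"{rec['instruction']}\n\n"
        f"City: {rec['inputs']['city']}\n"
        f"Date: {rec['inputs']['date']}\n"
        f"Obs: {rec['inputs']['observations']}"
    )
    return prompt, rec["output"]

prompt, target = to_chat_example(record)
```

The point of the flattening step is that an MLLM fine-tuned this way sees the same instruction template at train and inference time, so the structured report format is learned rather than prompted.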

If this is right

  • Automated generation can reduce the manual effort and information overload currently required to produce usable weather reports.
  • Specialized instruction tuning yields clear gains over general models on domain tasks that involve complex structure and multi-source data.
  • Zero-shot regional transfer means the same model can be deployed in new locations without collecting new labeled reports.
  • The dataset and training recipe provide a reusable template for building other MLLMs focused on scientific or operational reporting.
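The zero-shot regional-transfer bullet implies a city-holdout evaluation protocol. A minimal sketch of such a split follows; the city lists are assumptions for illustration, not the paper's actual splits:

```python
# Region-holdout evaluation sketch for zero-shot geographic transfer.
# City assignments are illustrative; the paper's actual splits may differ.
NORTH = {"Buffalo", "Burlington", "Great Falls"}
SOUTH = {"Charleston", "Greer", "San Diego", "Las Vegas"}

def region_holdout_split(examples, train_cities):
    """Train on the given cities; every other city is zero-shot test."""
    train = [ex for ex in examples if ex["city"] in train_cities]
    test = [ex for ex in examples if ex["city"] not in train_cities]
    return train, test

examples = [{"city": c, "report": "..."} for c in NORTH | SOUTH]
train, test = region_holdout_split(examples, NORTH)

# No test city appears in training, so any score on `test` measures transfer,
# not memorization of city-specific report phrasing.
assert not {ex["city"] for ex in train} & {ex["city"] for ex in test}
```

This matches the paper's Figure 5 setup (train north, test south; train east, test west): the transfer claim only holds if the city sets are strictly disjoint.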

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same dataset-construction plus instruction-tuning pattern could be applied to neighboring domains such as air-quality or climate-impact reporting.
  • If the model were connected to live data streams, it could produce on-demand, updated reports for individual users rather than static daily summaries.
  • Direct measurement of whether the generated reports actually improve agricultural or personal planning decisions would be a stronger test of practical value than text metrics alone.

Load-bearing premise

The newly built instruction-tuning dataset is representative of real-world weather forecasting needs, and standard MLLM metrics adequately measure report quality and usefulness.

What would settle it

WeatherSyn's superiority claim would be undermined if independent tests on fresh real-world weather data from new cities, or metrics that track actual user decision quality, found it no better than untuned closed-source MLLMs.

Figures

Figures reproduced from arXiv: 2605.07522 by Hong Cheng, Jia Li, Juepeng Zheng, Nuo Chen, Yang Liu, Zinan Zheng.

Figure 1. The pipeline of the weather forecast reporting process and the challenges in open-ended weather forecast report generation. WFR is designed to directly generate human-readable weather reports from the initial atmospheric conditions at a given time t and position p. By eliminating dependence on intermediate numerical forecasts, this approach streamlines the forecasting process.
Figure 2. Construction of the WSInstruct weather forecast report dataset and the three-stage training strategy.
Figure 3. Weighted F1 scores of generated weather reports across different forecast days (Day 1–Day 4) for three weather aspects.
Figure 4. Performances with increasing number of reports for each question.
Figure 5. (A) Trained on randomly selected cities, tested on other cities. (B) Trained on cities in the northern United States, tested on cities in the southern United States. (C) Trained on cities in the eastern United States, tested on cities in the western United States.
Figure 6. Regional performance evaluation based on weighted F1 score across representative U.S. cities.
Figure 7. The impact of different forecast horizons of the visual input on training.
Figure 8. Performances with increasing number of reports for each question, reported with the reference-based metric.
Figure 9. (A) Trained on randomly selected cities, tested on other cities. (B) Trained on cities in the northern United States, tested on cities in the southern United States. (C) Trained on cities in the eastern United States, tested on cities in the western United States.
Figure 10. A typical case for Blacksburg, Virginia.
Figure 11. A typical case for Charleston, West Virginia.
Figure 12. A typical case for Great Falls, Montana.
Figure 13. A typical case for Greer, South Carolina.
Figure 14. A typical case for Las Vegas, Nevada.
Figure 15. A typical case for San Diego, California.
read the original abstract

Accurate weather forecast reporting enables individuals and communities to better plan daily activities and agricultural operations. However, the current reporting process primarily relies on manual analysis of multi-source data, which leads to information overload and reduced efficiency. With the development of multimodal large language models (MLLMs), leveraging data-driven models to analyze and generate reports in the weather forecasting domain remains largely underexplored. In this work, we propose the Weather Forecasting Report (WFR) task and construct the first instruction-tuning dataset for this task, named~\DatasetNameL, which covers 31 cities in America and 8 weather aspects. Based on this corpus, we develop the first model, \ModelNameL, specialized in generating weather forecast reports. Evaluation across multiple metrics on our dataset shows that \ModelNameL~ consistently outperforms leading closed-source MLLMs, particularly on structurally complex weather aspects. We further analyze its performance across diverse geographic regions and weather aspects. \ModelNameL~ demonstrates strong transferability across different regions, highlighting its zero-shot generalization capability. \ModelNameL~ offers valuable insight for developing MLLMs specialized in weather report generation.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper introduces the Weather Forecasting Report (WFR) task and constructs the first instruction-tuning dataset (covering 31 US cities and 8 weather aspects) for it. It develops WeatherSyn, an MLLM instruction-tuned on this corpus to generate weather forecast reports from multimodal inputs. The central empirical claim is that WeatherSyn outperforms leading closed-source MLLMs across multiple automatic metrics (especially on structurally complex aspects) and exhibits strong zero-shot regional transferability.

Significance. If the evaluation holds after proper validation, the work supplies the first public benchmark dataset and specialized model for automated weather-report generation, addressing a practical bottleneck in meteorological services. It demonstrates the viability of domain-adapted MLLMs for structured scientific reporting and could serve as a template for other data-intensive fields requiring factual, multi-aspect text output.

major comments (3)
  1. [Abstract / Evaluation] Abstract and Evaluation section: the claim that WeatherSyn 'consistently outperforms leading closed-source MLLMs' is load-bearing yet unsupported by any reported metric values, baseline names, statistical significance tests, or variance across runs. Without these, the data-to-claim link cannot be assessed.
  2. [Dataset Construction] Dataset section: the reference reports used as ground truth are not described as expert-authored or cross-validated against official meteorological sources. If they are synthetic or rule-derived, outperformance on automatic metrics does not establish meteorological accuracy or practical utility.
  3. [Evaluation] Evaluation section: no human evaluation or correlation analysis is provided to show that the chosen automatic metrics (BLEU/ROUGE/METEOR or similar) track meteorological correctness and report usefulness; this assumption is required for the generalization and superiority claims.
minor comments (2)
  1. [Abstract] Abstract contains LaTeX placeholders (\DatasetNameL, \ModelNameL) that should be expanded to the actual dataset and model names for readability.
  2. [Tables] Ensure all tables reporting metric scores include exact baseline model versions, prompt templates, and any data-release statement.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive feedback, which highlights important areas for strengthening the empirical claims and transparency of our work. We respond to each major comment below, indicating revisions where the manuscript will be updated.

read point-by-point responses
  1. Referee: [Abstract / Evaluation] Abstract and Evaluation section: the claim that WeatherSyn 'consistently outperforms leading closed-source MLLMs' is load-bearing yet unsupported by any reported metric values, baseline names, statistical significance tests, or variance across runs. Without these, the data-to-claim link cannot be assessed.

    Authors: We agree that the abstract should include concrete supporting evidence rather than a high-level claim. The full evaluation section (Section 4) already reports results across BLEU, ROUGE, METEOR, and additional metrics against specific closed-source baselines (GPT-4o, Claude-3-Opus, Gemini-1.5-Pro). In the revision we will (1) insert key numerical results and baseline names into the abstract, (2) add paired statistical significance tests, and (3) report standard deviations across three random seeds to quantify variance. revision: yes
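The paired significance tests promised in this response are commonly implemented as a paired bootstrap over per-example metric scores. A minimal sketch, with entirely fabricated scores standing in for the paper's actual BLEU values:

```python
import random

def paired_bootstrap(scores_a, scores_b, n_resamples=10_000, seed=0):
    """Estimate P(system A beats system B) by resampling paired per-example scores."""
    rng = random.Random(seed)  # seeded for reproducibility
    n = len(scores_a)
    wins = 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]  # resample examples with replacement
        if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
            wins += 1
    return wins / n_resamples  # values near 1.0 mean A is reliably better

# Toy per-example scores, fabricated for illustration only.
weathersyn = [0.42, 0.55, 0.38, 0.61, 0.47, 0.52, 0.44, 0.58]
baseline   = [0.35, 0.50, 0.40, 0.48, 0.41, 0.45, 0.39, 0.51]
p_a_better = paired_bootstrap(weathersyn, baseline)
```

Pairing matters here: both systems are scored on the same resampled examples, so per-example difficulty cancels out, which is exactly what a comparison against fixed closed-source baselines needs.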

  2. Referee: [Dataset Construction] Dataset section: the reference reports used as ground truth are not described as expert-authored or cross-validated against official meteorological sources. If they are synthetic or rule-derived, outperformance on automatic metrics does not establish meteorological accuracy or practical utility.

    Authors: The reference reports are derived from official NOAA forecast products and historical observations for the 31 cities, mapped to the eight weather aspects via structured extraction. They are not independently authored by meteorologists for this dataset. We will expand the Dataset Construction subsection to explicitly document the source data, extraction rules, and any cross-checks performed. We will also qualify the claims about meteorological accuracy to reflect this provenance. revision: partial

  3. Referee: [Evaluation] Evaluation section: no human evaluation or correlation analysis is provided to show that the chosen automatic metrics (BLEU/ROUGE/METEOR or similar) track meteorological correctness and report usefulness; this assumption is required for the generalization and superiority claims.

    Authors: We acknowledge that automatic metrics alone are insufficient to fully validate meteorological correctness. Our evaluation follows standard NLG practice for report generation but lacks direct human correlation. In the revision we will add a dedicated limitations paragraph citing prior work on metric-human correlations in scientific text generation and will include a small-scale human study (usefulness and factual accuracy ratings on a 50-example subset) if resources permit; otherwise we will clearly flag the absence as a limitation. revision: partial

Circularity Check

0 steps flagged

No circularity; empirical claims rest on held-out evaluation against external closed-source MLLMs.

full rationale

The paper introduces a new task and self-constructed WFR dataset, fine-tunes WeatherSyn, and reports metric-based outperformance versus closed-source MLLMs plus regional transfer. No equations, no fitted parameters renamed as predictions, no uniqueness theorems, and no self-citation chains appear in the provided text. All load-bearing claims are direct empirical comparisons on held-out splits, which remain falsifiable against external models and do not reduce to the inputs by construction.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The work introduces a new task and dataset but relies entirely on existing MLLM architectures and standard instruction-tuning procedures without new mathematical constructs.

axioms (1)
  • domain assumption Standard multimodal LLM training and evaluation assumptions hold for the weather domain
    Invoked implicitly when claiming outperformance and zero-shot transfer from the constructed dataset.

pith-pipeline@v0.9.0 · 5509 in / 1048 out tokens · 44395 ms · 2026-05-11T01:48:19.401772+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
  • matches: The paper's claim is directly supported by a theorem in the formal canon.
  • supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
  • extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
  • uses: The paper appears to rely on the theorem as machinery.
  • contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
  • unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
