pith. machine review for the scientific record.

arxiv: 2603.07053 · v2 · submitted 2026-03-07 · 💻 cs.AI · cs.SY · eess.SY

Recognition: no theorem link

Animating Petascale Time-varying Data on Commodity Hardware with LLM-assisted Scripting

Authors on Pith: no claims yet

Pith reviewed 2026-05-15 15:05 UTC · model grok-4.3

classification 💻 cs.AI · cs.SY · eess.SY

keywords petascale visualization · time-varying data · 3D animation · LLM scripting · commodity hardware · cloud data access · NASA climate data

The pith

A framework lets scientists animate petascale time-varying data on ordinary workstations using natural-language prompts to an LLM.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Scientists working with enormous time-varying datasets such as petascale climate models normally need specialized hardware, expert teams, and slow data transfers to produce 3D animations. This paper presents a framework that runs on a standard commodity workstation and lets domain scientists describe the desired animation through a conversational interface powered by large language models. The system uses a Generalized Animation Descriptor to define keyframe-based animations, pulls only the needed data slices from cloud repositories, and renders locally with a tailored pipeline. Tests on NASA climate-oceanographic datasets larger than one petabyte show rough drafts ready in minutes and refined high-resolution versions in up to two hours. If the approach holds, visualization shifts from a post-analysis bottleneck to a routine step that any scientist can perform iteratively during normal workflow.
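The paper's GAD format is described here only as "a sequence of keyframes stored in an individual file" (Figure 3). A minimal descriptor in that spirit might look like the sketch below; every field name, the dataset identifier, and the interpolation scheme are illustrative assumptions, not the paper's actual schema.

```python
# Hypothetical GAD-style descriptor: an animation as a sequence of keyframes,
# each binding a timestamp to a camera pose, a data region, and a resolution
# level. All field names are illustrative, not the paper's actual schema.
gad = {
    "dataset": "nasa-ecco-llc2160",   # assumed cloud dataset identifier
    "variable": "temperature",
    "keyframes": [
        {"t": 0.0,  "camera": {"pos": [0, 0, 5], "look_at": [0, 0, 0]},
         "region": {"lat": [-45, -25], "lon": [10, 40], "depth": [0, 500]},
         "resolution_level": 2},      # coarse level for a quick draft
        {"t": 10.0, "camera": {"pos": [2, 1, 3], "look_at": [0, 0, 0]},
         "region": {"lat": [-45, -25], "lon": [10, 40], "depth": [0, 500]},
         "resolution_level": 0},      # full resolution for the final pass
    ],
}

def interpolate_camera(k0, k1, t):
    """Linearly interpolate the camera position between two keyframes."""
    a = (t - k0["t"]) / (k1["t"] - k0["t"])
    return [p0 + a * (p1 - p0)
            for p0, p1 in zip(k0["camera"]["pos"], k1["camera"]["pos"])]

print(interpolate_camera(*gad["keyframes"], t=5.0))  # midpoint camera pose
```

A renderer walking such a file would evaluate the interpolated camera and region per frame, which is what makes the draft-then-refine workflow possible: only `resolution_level` changes between passes.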

Core claim

The paper establishes a user-friendly framework for creating 3D animations of petascale time-varying data on commodity workstations. The framework rests on four contributions: a Generalized Animation Descriptor that supplies a keyframe-based adaptable abstraction for specifying animations, efficient cloud data access to minimize transfer overhead, a tailored rendering system that operates on local hardware, and an LLM-assisted conversational interface that converts natural-language prompts into accurate sampling criteria and animation parameters. Two case studies validate the method on NASA datasets exceeding 1 PB: one specifies sampling criteria from prior knowledge, and one derives sampling parameters from natural-language user prompts.

What carries the argument

The Generalized Animation Descriptor (GAD), a keyframe-based abstraction that encodes adaptable animation parameters, paired with the LLM-assisted scripting module that translates natural-language prompts into sampling criteria.

If this is right

  • Domain scientists can generate rough animation drafts within minutes and refine them by adding higher-resolution data without restarting the workflow.
  • Visualization no longer requires dedicated graphics experts or access to high-performance computing clusters.
  • Iterative sharing of scientific results with the community becomes feasible within typical post-analysis time budgets.
  • Data management overhead drops because only requested subsets are streamed from cloud repositories rather than full dataset transfers.
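The draft-then-refine loop implied by these points can be sketched as follows. `fetch_subset` is a placeholder for whatever cloud-access layer the framework actually uses (OpenVisus-style multiresolution access is one real possibility); the shapes and levels are illustrative.

```python
# Sketch of progressive refinement: request only the region of interest,
# first at a coarse multiresolution level, then at successively finer ones.
# fetch_subset is a placeholder for the framework's real cloud-access layer.
def fetch_subset(region, level):
    # A real implementation would issue a ranged cloud read covering only
    # `region` at multiresolution `level` (0 = finest). Here we just report
    # how many voxels such a request would return.
    size = (region["shape"][0] >> level) * (region["shape"][1] >> level)
    return {"level": level, "voxels": size}

def progressive_render(region, start_level=3):
    frames = []
    for level in range(start_level, -1, -1):  # coarse -> fine
        frames.append(fetch_subset(region, level))  # drafts arrive early
    return frames

region = {"shape": (4096, 4096)}
drafts = progressive_render(region)
print([f["voxels"] for f in drafts])
```

The point of the pattern is that the first (coarse) request is orders of magnitude smaller than the final one, so a reviewable draft exists long before the full-resolution data has been transferred.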

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same pattern of cloud access plus LLM scripting could support animation of time-varying data in other fields that produce petascale outputs, such as astrophysical simulations or high-resolution fluid dynamics.
  • The framework's separation of prompt interpretation from rendering suggests it could later incorporate real-time data streams once repositories expose live subsets.
  • Wider adoption would increase demand for standardized cloud-hosted scientific datasets with queryable metadata that the GAD can reference directly.

Load-bearing premise

The framework assumes that large language models can reliably translate natural-language prompts into correct sampling criteria and animation parameters without significant errors or hallucinations that would invalidate the visualizations.
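A standard mitigation for this premise is to never act on LLM output directly, but to validate the structured criteria against known dataset bounds before any data is fetched. The check below is a generic sketch of that guardrail; the bounds, field names, and example prompt output are our illustrative assumptions, not the paper's mechanism.

```python
# Sketch of a guardrail for LLM-emitted sampling criteria: validate the
# structured output against known dataset bounds before fetching any data.
# Bounds and field names are illustrative, not from the paper.
DATASET_BOUNDS = {"lat": (-90, 90), "lon": (-180, 180), "depth": (0, 6000)}

def validate_criteria(criteria):
    """Return a list of problems; an empty list means the criteria are usable."""
    problems = []
    for axis, (lo, hi) in DATASET_BOUNDS.items():
        rng = criteria.get(axis)
        if rng is None:
            problems.append(f"missing axis: {axis}")
        elif not (lo <= rng[0] < rng[1] <= hi):
            problems.append(f"{axis} range {rng} outside [{lo}, {hi}]")
    return problems

# A plausible LLM response to "salinity of the Mediterranean Sea":
llm_output = {"lat": [30, 46], "lon": [-6, 36], "depth": [0, 500]}
print(validate_criteria(llm_output))           # valid: no problems
print(validate_criteria({"lat": [120, 30]}))   # hallucinated / incomplete
```

Schema-level checks like this catch malformed or out-of-range output, but not the subtler failure mode of a well-formed region that is simply the wrong place; that is what the ground-truth comparison below the fold would have to measure.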

What would settle it

Apply the LLM interface to a known petascale dataset with a prompt whose correct sampling region is already established by expert analysis, then compare the resulting animation against the ground-truth expert version for mismatches in selected data or visual artifacts.
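One way to quantify that comparison is intersection-over-union (IoU) between the LLM-selected region and the expert's; the metric choice, regions, and coordinates below are ours, not the paper's.

```python
# Sketch of the proposed test: score an LLM-selected lat/lon box against an
# expert-established ground-truth region via intersection-over-union (IoU).
# Regions are illustrative.
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (lat_min, lat_max, lon_min, lon_max)."""
    lat_olap = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    lon_olap = max(0, min(a[3], b[3]) - max(a[2], b[2]))
    inter = lat_olap * lon_olap
    area = lambda r: (r[1] - r[0]) * (r[3] - r[2])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

expert = (-45, -25, 10, 40)   # e.g. an expert-defined Agulhas retroflection box
llm    = (-44, -24, 12, 42)   # region derived from a natural-language prompt
score = box_iou(expert, llm)
print(round(score, 3))
```

A high IoU on prompts with expert-established answers would directly support the reliability claim; systematic low scores would localize the failure to the prompt-to-criteria translation rather than the renderer.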

Figures

Figures reproduced from arXiv: 2603.07053 by Aashish Panta, Alper Sahistan, Amy A. Gooch, Ishrat Jahan Eliza, Valerio Pascucci, Xuan Huang, Zhimin Li.

Figure 1. Our versatile framework generates region-of-interest keyframe animations from petascale data using a generalized animation descriptor (GAD) file.
Figure 2. Our framework. Unlike traditional visualizations, where the user faces an initial big-data-management challenge with larger-than-disk, cloud-hosted …
Figure 3. Our novel Generalized Animation Descriptor (GAD) file format. GAD describes an animation as a sequence of keyframes stored in an individual file.
Figure 4. Our simple interactive viewer provides a quick interface for users to …
Figure 5. An ocean surface (Z = 0) salinity map, ranging from 33 to 38 g/kg.
Figure 6. A user produces an animation of the ocean temperature in the Agulhas retroflection (where the current turns back on itself) over time from January 20, 2020, to April 19, …
Figure 7. The user improves upon the animation in Figure 6 with our interactive viewer tool to include opacities. By reducing opacity in cooler regions while …
Figure 8. Portion of a user's session with AI-assisted scripting to display the salinity of the Mediterranean Sea. The first region-of-interest request results in …
original abstract

Scientists face significant visualization challenges as time-varying datasets grow in speed and volume, often requiring specialized infrastructure and expertise to handle massive datasets. Petascale climate models generated in NASA laboratories require a dedicated group of graphics and media experts and access to high-performance computing resources. Scientists may need to share scientific results with the community iteratively and quickly. However, the time-consuming trial-and-error process incurs significant data transfer overhead and far exceeds the time and resources allocated for typical post-analysis visualization tasks, disrupting the production workflow. Our paper introduces a user-friendly framework for creating 3D animations of petascale, time-varying data on a commodity workstation. Our contributions: (i) Generalized Animation Descriptor (GAD) with a keyframe-based adaptable abstraction for animation, (ii) efficient data access from cloud-hosted repositories to reduce data management overhead, (iii) tailored rendering system, and (iv) an LLM-assisted conversational interface as a scripting module to allow domain scientists with no visualization expertise to create animations of their region of interest. We demonstrate the framework's effectiveness with two case studies: first, by generating animations in which sampling criteria are specified based on prior knowledge, and second, by generating AI-assisted animations in which sampling parameters are derived from natural-language user prompts. In all cases, we use large-scale NASA climate-oceanographic datasets that exceed 1PB in size yet achieve a fast turnaround time of 1 minute to 2 hours. Users can generate a rough draft of the animation within minutes, then seamlessly incorporate as much high-resolution data as needed for the final version.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper introduces a framework for generating 3D animations of petascale time-varying datasets on commodity hardware. It defines four contributions: a Generalized Animation Descriptor (GAD) for keyframe-based animation abstraction, efficient cloud-based data access, a tailored rendering system, and an LLM-assisted conversational scripting interface. Effectiveness is shown via two case studies on NASA climate-oceanographic datasets exceeding 1 PB, achieving turnaround times of 1 minute to 2 hours using either prior-knowledge sampling or natural-language prompts.

Significance. If the claims hold, the work could substantially lower barriers for domain scientists to produce visualizations of massive time-varying data without HPC infrastructure or visualization expertise. The integration of LLM-assisted scripting with cloud data access and a generalized descriptor represents a practical engineering advance that could accelerate iterative analysis in fields such as climate modeling.

major comments (2)
  1. [Case Studies / Abstract] The AI-assisted case study (described in the abstract and case-studies section) reports successful turnaround times but provides no quantitative validation of the LLM component: no error rates, hallucination frequency, prompt-template details, or comparison of generated sampling criteria against ground-truth parameters. This directly undermines the central claim that the conversational interface reliably produces valid visualizations for petascale data.
  2. [Contributions / Methods] The manuscript offers only high-level sketches of the GAD, cloud access mechanism, and tailored renderer (abstract and contributions list). No performance benchmarks, memory profiles, or error analysis for the rendering pipeline on >1 PB datasets are supplied, making it impossible to assess whether the commodity-hardware claim is load-bearing or merely aspirational.
minor comments (1)
  1. [Abstract] The abstract states that users can 'seamlessly incorporate as much high-resolution data as needed,' but the manuscript does not clarify the data-subsetting or progressive-refinement mechanism that enables this transition from draft to final version.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for their constructive feedback, which highlights important areas for strengthening the manuscript's rigor. We address each major comment below and will revise the paper to incorporate additional details and validation where needed.

point-by-point responses
  1. Referee: [Case Studies / Abstract] The AI-assisted case study (described in the abstract and case-studies section) reports successful turnaround times but provides no quantitative validation of the LLM component: no error rates, hallucination frequency, prompt-template details, or comparison of generated sampling criteria against ground-truth parameters. This directly undermines the central claim that the conversational interface reliably produces valid visualizations for petascale data.

    Authors: We agree that the current manuscript lacks quantitative metrics for the LLM component, which is a valid concern for substantiating reliability claims. The case studies demonstrate successful turnaround times using natural-language prompts, but without error rates or comparisons. In the revised version, we will add a new evaluation subsection with observed hallucination frequencies from our experiments, full prompt templates, error rates on sampling criteria generation, and direct comparisons of LLM-derived parameters against the ground-truth values used in the prior-knowledge case study. revision: yes

  2. Referee: [Contributions / Methods] The manuscript offers only high-level sketches of the GAD, cloud access mechanism, and tailored renderer (abstract and contributions list). No performance benchmarks, memory profiles, or error analysis for the rendering pipeline on >1 PB datasets are supplied, making it impossible to assess whether the commodity-hardware claim is load-bearing or merely aspirational.

    Authors: We acknowledge that the descriptions of the GAD, cloud access mechanism, and tailored renderer are currently high-level. To address this, the revised manuscript will expand the methods section with concrete implementation details, performance benchmarks (including rendering times and resource consumption on commodity hardware for >1 PB datasets), memory profiles, and error analysis of the rendering pipeline. These additions will provide evidence that the commodity-hardware approach is practical rather than aspirational. revision: yes

Circularity Check

0 steps flagged

No circularity; engineering contributions are independent

full rationale

The paper presents four engineering contributions (GAD abstraction, cloud data access, rendering system, LLM scripting interface) demonstrated on external NASA datasets >1 PB. No equations, fitted parameters, predictions, or derivations appear that reduce to inputs by construction. Case studies use prior-knowledge and prompt-derived parameters without self-referential fitting or load-bearing self-citations.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 2 invented entities

The central claim rests on new software abstractions and interfaces with standard domain assumptions about cloud data availability and LLM capabilities; no free parameters are fitted to data, and the invented entities are the core contributions of the framework.

axioms (2)
  • domain assumption Petascale time-varying datasets are hosted in accessible cloud repositories allowing efficient partial access.
    This underpins the contribution of reduced data management overhead via cloud access.
  • ad hoc to paper Large language models can interpret natural language prompts to produce valid animation parameters and sampling criteria.
    This is required for the LLM-assisted conversational interface to function as described.
invented entities (2)
  • Generalized Animation Descriptor (GAD) no independent evidence
    purpose: Keyframed adaptable abstraction for defining animations of time-varying data.
    New abstraction introduced to enable flexible animation creation.
  • LLM-assisted conversational interface no independent evidence
    purpose: Scripting module allowing domain scientists without visualization expertise to generate animations via natural language.
    New interface component central to the user-friendly aspect of the framework.

pith-pipeline@v0.9.0 · 5615 in / 1569 out tokens · 75653 ms · 2026-05-15T15:05:40.888390+00:00 · methodology

discussion (0)

