pith. machine review for the scientific record.

arxiv: 2605.09984 · v1 · submitted 2026-05-11 · 💻 cs.CV · cs.AI · cs.LG

Recognition: no theorem link

Geometric 4D Stitching for Grounded 4D Generation

Authors on Pith: no claims yet

Pith reviewed 2026-05-12 04:02 UTC · model grok-4.3

classification 💻 cs.CV · cs.AI · cs.LG
keywords 4D generation · geometric consistency · 4D stitching · scene reconstruction · mesh editing · efficient generation · 4D scenes

The pith

Geometric 4D stitching fills missing scene regions with explicit consistent patches to build grounded 4D representations rapidly.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

Recent 4D generation approaches use generative models to complete missing scene details but then reconstruct the result into radiance fields that hide geometric inconsistencies and demand lengthy optimization. The paper proposes Geometric 4D Stitching to detect those missing geometric regions directly and fill them with explicit, geometrically grounded 4D stitches. This yields scene representations in under 10 minutes per expansion step on a single GPU while improving geometric consistency. This matters because it removes the hidden inconsistencies and slow optimization steps that limit practical use of 4D content. The explicit stitches also enable direct interactive mesh expansion and scene editing.

Core claim

Geometric 4D Stitching is an efficient framework that explicitly identifies missing geometric regions in generated 4D content and complements them with geometrically grounded 4D stitches. This approach constructs 4D scene representations in under 10 minutes on a single NVIDIA RTX 5090 GPU per one-step scene expansion while improving geometric consistency. The explicit stitches further support interactive expansion of 4D meshes as well as 4D scene editing.

What carries the argument

Geometric 4D Stitching: the explicit identification of missing geometric regions followed by addition of geometrically grounded 4D stitches that enforce consistency without radiance optimization.
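The abstract does not spell out the pipeline, but the figure captions below suggest a render-complete-refine-stitch loop. A minimal sketch of one expansion step under that reading; every function name here is a placeholder rather than the authors' API, and the hole and tolerance conventions are assumptions:

```python
import numpy as np

def expand_scene_one_step(mesh, target_pose, render_mesh, inpaint_view,
                          estimate_depth, align_depth_to_anchor, depth_gap=0.05):
    """One hypothetical expansion step in the spirit of the paper's pipeline.

    A sketch under assumptions, not the authors' implementation. The callables
    (render_mesh, inpaint_view, estimate_depth, align_depth_to_anchor) stand in for
    an explicit-mesh renderer, an NVS/inpainting model, a monocular depth estimator,
    and the source-anchored depth refinement, respectively.
    """
    # (a) Render the current explicit asset into the target view; pixels the mesh
    #     cannot cover are assumed to come back as NaN depth (projection holes).
    rgb_warp, depth_warp = render_mesh(mesh, target_pose)
    holes = ~np.isfinite(depth_warp)

    # (b) Generatively complete the target view, estimate its depth, and align that
    #     depth to the mesh rendering so the existing geometry acts as the anchor.
    rgb_full = inpaint_view(rgb_warp, holes, target_pose)
    depth_full = align_depth_to_anchor(estimate_depth(rgb_full), depth_warp)

    # (c) Information-addition regions: projection holes plus pixels whose aligned
    #     depth disagrees with the anchor by more than an illustrative tolerance.
    anchor = np.nan_to_num(depth_warp, nan=np.inf)
    adds_info = holes | (np.abs(depth_full - anchor) > depth_gap)

    # (d) Package the reliable new pixels as an explicit stitch candidate to insert
    #     into the 4D asset; no radiance-field optimization is involved.
    return {"rgb": rgb_full, "depth": depth_full, "mask": adds_info, "pose": target_pose}
```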

Load-bearing premise

Generative models supply enough accurate information about missing regions that the added stitches complete the geometry without creating new inconsistencies or needing further optimization.

What would settle it

Apply the stitching to a 4D scene whose ground-truth geometry is known and incomplete, then check whether the output shows visible geometric mismatches or still requires extra optimization to match the true shape.
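A sketch of how that check could be scored, assuming the stitched asset and the ground-truth geometry are both sampled as point clouds; the metric is illustrative and not one the paper reports:

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_symmetric_surface_distance(pred_pts, gt_pts):
    """Mean symmetric nearest-neighbour distance between two (N, 3) point samplings.

    Sample points from the stitched 4D asset at a given frame and from the known
    ground-truth geometry, and check the distance stays small without any further
    optimization of the asset.
    """
    d_pred = cKDTree(gt_pts).query(pred_pts)[0]   # each predicted point -> nearest GT point
    d_gt = cKDTree(pred_pts).query(gt_pts)[0]     # each GT point -> nearest predicted point
    return 0.5 * (d_pred.mean() + d_gt.mean())

# Hypothetical usage: the tolerance depends on scene scale (e.g. 1 cm in a metric scene).
# assert mean_symmetric_surface_distance(pred_pts, gt_pts) < 0.01
```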

Figures

Figures reproduced from arXiv: 2605.09984 by Jong Chul Ye, Sunwoo Park, Taesung Kwon.

Figure 1
Figure 1. We present Geometric 4D Stitching, a geometry-aware approach for constructing expandable 4D scene representations from sparse generative videos. Our method refines NVS-generated target-view geometry using the source-view geometry as an anchor, stitches only reliable newly revealed regions, and enables coherent novel view rendering from the augmented 4D scene.
Figure 2
Figure 2. Overview of Geometric 4D Stitching. Given a condition video, (a) we first construct an initial 4D mesh and (b) use its geometry as an anchor to refine the geometry of the completed target video. (c) The resulting information-addition regions are converted into stitch candidates and inserted into the mesh, (d) producing an augmented 4D representation that can be progressively expanded. Finally, (e) we apply…
Figure 3
Figure 3. Failure Case of Gaussian-based Scene Manipulation due to difficult direct Gaussian matching to object boundaries.
Figure 4
Figure 4. Stitching of warped and sampled images. The warped image preserves source-supported structure but contains invalid regions, while the sampled image fills these areas at the cost of altering some source content. The stitched image retains the reliable projected structure and incorporates generated content only where needed, yielding a more coherent final result.
Figure 5
Figure 5. Curtain and its depth. We identify information-addition regions from the depth discrepancy between raw mesh and point-cloud renderings.
Figure 6
Figure 6. Geometric Alignment with Pyramidal Refinement. We construct the continuous scale-shift field constrained to scene geometry for appropriate depth refinement. Details are in App C.
Figure 7
Figure 7. Conflicting evidence from naive stitching of NVS outputs. When novel-view synthesis generates structures that are not supported by the source observation, directly stitching them into the existing 4D scene introduces conflicting geometry rather than consistent scene expansion.
Figure 8
Figure 8. Geometry-preserving visual refinement. Single-step TrajectoryCrafter refinement improves raw mesh renderings by reducing boundary artifacts, while preserving the underlying geometry.
Figure 9
Figure 9. Qualitative Comparison of Novel-view Rendering of Reconstructed 4D Scene.
Figure 10
Figure 10. MVS-induced inconsistency as a bottleneck for 4D reconstruction. We re-render videos from our 4D mesh and reconstruct radiance-based representation using feed-forward Gaussian splatting [20], with and without per-scene optimization. Compared with conventional generation-based pipelines with (a) TrajectoryCrafter [10] and (b) Zero4D [9], (c) G4S with 25 views yields more coherent Gaussian Splatting renderi…
Figure 11
Figure 11. Downstream Application including Scene Editing. G4S enables stable object removal, addition, and local scene modification.
Figure 12
Figure 12. Appearance-level absorption of structural mismatch. When the underlying scene geometry is not consistently reconciled across views, the residual mismatch tends to manifest through view-dependent appearance changes rather than a stable shared structure by radiance-based optimization. The highlighted regions show this behavior: instead of converging to a coherent geometry, the reconstruction exhibits vie…
Figure 13
Figure 13. Occlusion-induced missing regions between sparse views. Due to foreground occluders, certain background regions remain unobserved across neighboring views. Consequently, interpolated viewpoints between sparse observations contain large information gaps, highlighted in red, which cannot be recovered by simple view interpolation.
Figure 14
Figure 14. Detailed G4S Pipeline. The text extracted with this figure defines point-wise projection onto the image plane of a camera π = (K, R, τ) as q = P(π, X), where X ∈ R³ is a 3D point, q = (q̃_x/q̃_z, q̃_y/q̃_z) ∈ R², and q̃ = [q̃_x, q̃_y, q̃_z]^⊤ = K(RX + τ) is the homogeneous image coordinate (eq. 9); the depth of X as seen from camera π is z(π, X) = e_3^⊤(RX + τ) with e_3 = [0, 0, 1]^⊤ (eq. 10).
Figure 15
Figure 15. Geometry initialization in generative scenes under varying view counts. We compare geometry reconstructed from different numbers of generated views using VGGT and DA3. In generative scenes, increasing the number of views does not consistently improve initialization quality, because inpainting collapse introduces cross-view inconsistencies that corrupt multi-view geometric agreement. As a result, even dens…
Figure 16
Figure 16. Pyramidal Refinement for Source-Anchored Feed-forward Geometry Stabilization. We construct the continuous scale-shift field constrained to scene geometry for appropriate depth refinement.
Figure 17
Figure 17. Comparison of depth correction behaviors. The boxed regions show that our method better follows the underlying scene geometry, yielding more consistent refined depth than global scale-shift.
Figure 18
Figure 18. Per-frame accuracy and temporal stability in depth completion. We compare our method with image-based depth completion baselines across multiple frames, even when RGB inputs and GT-based geometric cues are provided for each frame. Existing methods suffer from both low per-frame geometric accuracy and strong temporal instability, producing depth predictions that are inaccurate within each frame and inconsi…
Figure 19
Figure 19. Preprocessing refinements for stable geometry estimation. Examples of the three preprocessing steps used in our pipeline. Spike-fix: removes small depth spike outliers using local median-based filtering. Edge-mapping: reduces anti-aliasing artifacts around depth discontinuities by snapping edge pixels to nearby valid surface samples. Mask-refine: corrects foreground-background label mixing near occlusion …
Figure 20
Figure 20. Comparison of view–time video layouts. Blue denotes the source video captured from a fixed view over time. Green denotes an NVS video rendered at another fixed view for all time steps. Orange denotes a novel-view video where the camera view changes over time along a trajectory.
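Figure 4 above describes compositing a warped (source-projected) image with a generatively sampled one under a validity mask. A minimal sketch of that compositing step, assuming the mask is given as a boolean array; this is illustrative, not the authors' code:

```python
import numpy as np

def stitch_warped_and_sampled(warped_rgb, sampled_rgb, valid_mask):
    """Keep reliable projected structure, use generated content only where needed.

    warped_rgb  : (H, W, 3) source content reprojected into the target view
    sampled_rgb : (H, W, 3) generatively completed target view
    valid_mask  : (H, W) boolean, True where the warp carries reliable source content
    """
    m = valid_mask[..., None].astype(warped_rgb.dtype)
    return m * warped_rgb + (1.0 - m) * sampled_rgb
```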
Original abstract

Recent 4D generation methods complete scene-level missing information using generative models and reconstruct the scene into radiance-based representations. However, these pipelines often present geometric inconsistencies in the generated content, and the radiance-based reconstruction requires expensive optimization. Furthermore, radiance-based representations often absorb these geometric inconsistencies into their view-dependent nature, failing to enforce the grounded geometric consistency. To address these issues, we propose Geometric 4D Stitching, an efficient framework that explicitly identifies missing geometric regions and complements them with geometrically grounded 4D stitches. As a result, our method constructs 4D scene representations in under 10 minutes on a single NVIDIA RTX 5090 GPU per one-step scene expansion, while improving geometric consistency. Moreover, we demonstrate that our explicit 4D stitching supports interative expansion of 4D mesh as well as 4D scene editing.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 1 minor

Summary. The paper proposes Geometric 4D Stitching, an efficient framework for 4D scene generation that explicitly identifies missing geometric regions via generative models and complements them with geometrically grounded 4D stitches. This avoids radiance-based reconstructions and their associated expensive optimization while claiming to produce consistent 4D representations in under 10 minutes on a single NVIDIA RTX 5090 GPU per one-step scene expansion, with additional support for interactive 4D mesh expansion and scene editing.

Significance. If validated with quantitative evidence, the explicit geometric stitching approach would represent a meaningful advance over radiance-field pipelines by enforcing grounded 4D consistency without optimization, potentially enabling faster and more editable 4D content for graphics and vision applications.

major comments (1)
  1. The central claim that stitching 'improves geometric consistency' and runs 'in under 10 minutes' is load-bearing but presented without any reported metrics, baselines, or timing breakdowns in the provided abstract; the full manuscript must include these in the experiments section to substantiate the efficiency and consistency assertions.
minor comments (1)
  1. Abstract: 'interative' is a typographical error and should be 'interactive'.

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for the constructive feedback and the clear identification of where quantitative support is needed to strengthen our claims. We address the comment below and commit to revisions that directly incorporate the requested evidence.

Point-by-point responses
  1. Referee: The central claim that stitching 'improves geometric consistency' and runs 'in under 10 minutes' is load-bearing but presented without any reported metrics, baselines, or timing breakdowns in the provided abstract; the full manuscript must include these in the experiments section to substantiate the efficiency and consistency assertions.

    Authors: We agree that the abstract summarizes the benefits at a high level and that explicit quantitative support belongs in the experiments section. The current manuscript already reports wall-clock timings on the RTX 5090 GPU that confirm the under-10-minute per-expansion runtime, together with qualitative side-by-side visualizations showing reduced geometric artifacts relative to radiance-field baselines. To meet the referee’s request for rigorous substantiation, we will expand the Experiments section in the revised manuscript with (1) quantitative geometric-consistency metrics (e.g., mean surface-to-surface distance and normal-consistency scores on reconstructed meshes), (2) direct numerical comparisons against representative radiance-based 4D pipelines, and (3) a component-wise timing breakdown (missing-region detection, stitch generation, and mesh integration). These additions will be placed in a new subsection and will be supported by additional figures and tables. revision: yes
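A minimal sketch of one metric named in the response, the normal-consistency score, under a common nearest-neighbour definition; the authors' exact protocol may differ:

```python
import numpy as np
from scipy.spatial import cKDTree

def normal_consistency(pred_pts, pred_normals, gt_pts, gt_normals):
    """Mean absolute cosine similarity between normals of matched point pairs.

    pred_pts, gt_pts         : (N, 3) and (M, 3) surface samples
    pred_normals, gt_normals : matching arrays of unit normals
    Each predicted point is matched to its nearest ground-truth point; 1.0 means
    perfectly aligned surface orientation, values near 0 indicate disagreement.
    """
    _, idx = cKDTree(gt_pts).query(pred_pts)
    cos = np.abs(np.sum(pred_normals * gt_normals[idx], axis=1))
    return float(cos.mean())
```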

Circularity Check

0 steps flagged

No significant circularity; derivation is self-contained

full rationale

The abstract and method description present a high-level framework for identifying missing geometry via generative models and applying explicit 4D stitches, with performance claims (under 10 minutes on RTX 5090) stated directly rather than derived from equations. No mathematical derivations, fitted parameters, self-citations as load-bearing premises, or renamings of known results appear in the provided text. The central claims rest on the proposed stitching operation enforcing consistency by construction, but this is asserted without reducing to a self-referential definition or prior self-citation chain. The paper is therefore self-contained against external benchmarks for the purpose of this analysis.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Abstract-only input provides no information on free parameters, axioms, or invented entities.

pith-pipeline@v0.9.0 · 5448 in / 943 out tokens · 24635 ms · 2026-05-12T04:02:40.864805+00:00 · methodology


Reference graph

Works this paper leans on

25 extracted references · 25 canonical work pages · 4 internal anchors

  1. [1]

    Advances in 4d generation: A survey, 2025

    Qiaowei Miao, Kehan Li, Jinsheng Quan, Zhiyuan Min, Shaojie Ma, Yichao Xu, Yi Yang, and Yawei Luo. Advances in 4d generation: A survey, 2025. URL https://arxiv.org/abs/2503.14501

  2. [2]

    Cat4d: Create anything in 4d with multi-view video diffusion models

    Rundi Wu, Ruiqi Gao, Ben Poole, Alex Trevithick, Changxi Zheng, Jonathan T. Barron, and Aleksander Holynski. Cat4d: Create anything in 4d with multi-view video diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 26057–26068, June 2025

  3. [3]

    Free4d: Tuning-free 4d scene generation with spatial-temporal consistency

    Tianqi Liu, Zihao Huang, Zhaoxi Chen, Guangcong Wang, Shoukang Hu, Liao Shen, Huiqiang Sun, Zhiguo Cao, Wei Li, and Ziwei Liu. Free4d: Tuning-free 4d scene generation with spatial-temporal consistency. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 25571–25582, October 2025

  4. [4]

    Sv4d: Dynamic 3d content generation with multi-frame and multi-view consistency

    Yiming Xie, Chun-Han Yao, Vikram Voleti, Huaizu Jiang, and Varun Jampani. Sv4d: Dynamic 3d content generation with multi-frame and multi-view consistency. In International Conference on Learning Representations (ICLR), 2025. URL https://openreview.net/forum?id=tJoS2d0Onf

  5. [5]

    3d gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics, 42(4), July 2023

    Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics, 42(4), July 2023. URL https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/

  6. [6]

    4d gaussian splatting for real-time dynamic scene rendering

    Guanjun Wu, Taoran Yi, Jiemin Fang, Lingxi Xie, Xiaopeng Zhang, Wei Wei, Wenyu Liu, Qi Tian, and Xinggang Wang. 4d gaussian splatting for real-time dynamic scene rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 20310–20320, June 2024

  7. [7]

    Nerf: Representing scenes as neural radiance fields for view synthesis

    Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In European Conference on Computer Vision (ECCV), 2020

  8. [8]

    D-nerf: Neural radiance fields for dynamic scenes

    Albert Pumarola, Enric Corona, Gerard Pons-Moll, and Francesc Moreno-Noguer. D-nerf: Neural radiance fields for dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10318–10327, June 2021

  9. [9]

    Zero4d: Training-free 4d video generation from single video using off-the-shelf video diffusion. arXiv preprint arXiv:2503.22622, 2025

    Jangho Park, Taesung Kwon, and Jong Chul Ye. Zero4d: Training-free 4d video generation from single video using off-the-shelf video diffusion. arXiv preprint arXiv:2503.22622, 2025. URL https://arxiv.org/abs/2503.22622

  10. [10]

    Trajectorycrafter: Redirecting camera trajectory for monocular videos via diffusion models

    Mark Yu, Wenbo Hu, Jinbo Xing, and Ying Shan. Trajectorycrafter: Redirecting camera trajectory for monocular videos via diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025

  11. [11]

    Recammaster: Camera-controlled generative rendering from a single video. arXiv preprint arXiv:2503.11647, 2025

    Jianhong Bai, Menghan Xia, Xiao Fu, Xintao Wang, Lianrui Mu, Jinwen Cao, Zuozhu Liu, Haoji Hu, Xiang Bai, Pengfei Wan, and Di Zhang. Recammaster: Camera-controlled generative rendering from a single video. arXiv preprint arXiv:2503.11647, 2025. URL https://arxiv.org/abs/2503.11647

  12. [12]

    CogVideoX: Text-to-video diffusion models with an expert transformer

    Zhuoyi Yang, Jiayan Teng, Wendi Zheng, Ming Ding, Shiyu Huang, Jiazheng Xu, Yuanming Yang, Wenyi Hong, Xiaohan Zhang, Guanyu Feng, Da Yin, Yuxuan Zhang, Weihan Wang, Yean Cheng, Bin Xu, Xiaotao Gu, Yuxiao Dong, and Jie Tang. CogVideoX: Text-to-video diffusion models with an expert transformer. In International Conference on Learning Representations, 2025

  13. [13]

    Wan: Open and Advanced Large-Scale Video Generative Models

    Team Wan, Ang Wang, Baole Ai, Bin Wen, Chaojie Mao, Chen-Wei Xie, Di Chen, Feiwu Yu, Haiming Zhao, Jianxiao Yang, Jianyuan Zeng, Jiayu Wang, Jingfeng Zhang, Jingren Zhou, Jinkai Wang, Jixuan Chen, Kai Zhu, Kang Zhao, Keyu Yan, Lianghua Huang, Mengyang Feng, Ningyi Zhang, Pandeng Li, Pingyu Wu, Ruihang Chu, Ruili Feng, Shiwei Zhang, Siyang Sun, Tao Fang, T...

  14. [14]

    Vggt-world: Transforming vggt into an autoregressive geometry world model. arXiv preprint arXiv:2603.12655, 2026

    Xiangyu Sun, Shijie Wang, Fengyi Zhang, Lin Liu, Caiyan Jia, Ziying Song, Zi Huang, and Yadan Luo. Vggt-world: Transforming vggt into an autoregressive geometry world model. arXiv preprint arXiv:2603.12655, 2026

  15. [15]

    VideoGPA: Distilling Geometry Priors for 3D-Consistent Video Generation

    Hongyang Du, Junjie Ye, Xiaoyan Cong, Runhao Li, Jingcheng Ni, Aman Agarwal, Zeqi Zhou, Zekun Li, Randall Balestriero, and Yue Wang. Videogpa: Distilling geometry priors for 3d-consistent video generation. arXiv preprint arXiv:2601.23286, 2026

  16. [16]

    Nerf-editing: Geometry editing of neural radiance fields

    Yu-Jie Yuan, Yang-Tian Sun, Yu-Kun Lai, Yuewen Ma, Rongfei Jia, and Lin Gao. Nerf-editing: Geometry editing of neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18353–18364, 2022

  17. [17]

    Control-nerf: Editable feature volumes for scene rendering and manipulation

    Verica Lazova, Vladimir Guzov, Kyle Olszewski, Sergey Tulyakov, and Gerard Pons-Moll. Control-nerf: Editable feature volumes for scene rendering and manipulation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 4340–4350, 2023

  18. [18]

    Vggt: Visual geometry grounded transformer for multi-view 3d reconstruction. arXiv preprint arXiv:2409.04530, 2024

    Yufei Wang, Shaowei Liu, Minghan Li, Yi Yang, and Bo Dai. Vggt: Visual geometry grounded transformer for multi-view 3d reconstruction. arXiv preprint arXiv:2409.04530, 2024. URL https://arxiv.org/abs/2409.04530

  19. [19]

    Depth Anything 3: Recovering the Visual Space from Any Views

    Lihe Yang et al. Depth anything 3: Recovering the visual space from any number of views. arXiv preprint arXiv:2511.10647, 2025

  20. [20]

    Anysplat: Feed-forward 3d gaussian splatting from unconstrained views. arXiv preprint arXiv:2505.23716, 2025

    Lihan Jiang, Yucheng Mao, Linning Xu, Tao Lu, Kerui Ren, Yichen Jin, Xudong Xu, Mulin Yu, Jiangmiao Pang, Feng Zhao, Dahua Lin, and Bo Dai. Anysplat: Feed-forward 3d gaussian splatting from unconstrained views. arXiv preprint arXiv:2505.23716, 2025. URL https://arxiv.org/abs/2505.23716

  21. [21]

    Instant4d: 4d gaussian splatting in minutes. arXiv preprint arXiv:2510.01119, 2025

    Zhanpeng Luo, Haoxi Ran, and Li Lu. Instant4d: 4d gaussian splatting in minutes. arXiv preprint arXiv:2510.01119, 2025. URL https://arxiv.org/abs/2510.01119. Accepted by NeurIPS 2025

  22. [22]

    Sora 2 model documentation, 2026

    OpenAI. Sora 2 model documentation, 2026. URL https://developers.openai.com/api/docs/models/sora-2. Accessed: 2026-03-04

  23. [23]

    Vbench: Comprehensive benchmark suite for video generative models

    Zhipeng Huang, Shengyu Zhao, Zhaoyang Wu, Zhe Li, Yuxiao Liu, Jiaying Lin, Bo Dai, and Limin Wang. Vbench: Comprehensive benchmark suite for video generative models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024

  24. [24]

    Least-squares estimation of transformation parameters between two point patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(4):376–380, 1991

    Shinji Umeyama. Least-squares estimation of transformation parameters between two point patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 13(4):376–380, 1991

  25. [25]

    SAM 3: Segment Anything with Concepts

    Nicolas Carion et al. Sam 3: Segment anything with concepts. arXiv preprint arXiv:2511.16719, 2025