Robust Fundamental Matrix Estimation from Single Image Motion Blur
Pith reviewed 2026-05-09 14:15 UTC · model grok-4.3
The pith
A fundamental matrix summarizing 3D camera motion during exposure can be recovered from point correspondences along smear paths in one motion-blurred image.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
We demonstrate the feasibility of establishing correspondences between two time instances within the camera exposure window, and that these can be used to robustly infer a fundamental matrix, which summarizes the motion of the camera during the exposure time. The inferred fundamental matrix is unique up to a transpose, corresponding to an ambiguity of the direction of time. Due to this per-smear ambiguity, classic methods such as the 8-point algorithm are no longer usable. The proposed method modifies the estimation to work on time-direction ambiguous correspondences and incorporates an uncertainty measurement in smear pattern prediction.
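The transpose ambiguity follows directly from the epipolar constraint being a scalar; a minimal numerical check (illustrative, not from the paper):

```python
import numpy as np

# The epipolar residual is a scalar, so x2^T F x1 = (x2^T F x1)^T
# = x1^T F^T x2: reading a smear's endpoints in the opposite temporal
# order maps F to F^T, which is exactly the time-direction ambiguity.
rng = np.random.default_rng(0)
F = rng.standard_normal((3, 3))
x1 = rng.standard_normal(3)   # homogeneous point at one exposure instant
x2 = rng.standard_normal(3)   # the same feature at the other instant

r_forward = x2 @ F @ x1       # correspondence read t0 -> t1
r_backward = x1 @ F.T @ x2    # same pair read t1 -> t0
assert np.isclose(r_forward, r_backward)
```

Because both readings give the same residual, a per-smear estimator cannot tell F from its transpose.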
What carries the argument
A robust sampler that draws candidate correspondences from predicted smear paths while explicitly handling time-direction ambiguity and weighting each sample by its prediction uncertainty.
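A minimal sketch of such a sampler, assuming inverse-uncertainty weighting and a random per-smear time-direction guess; the function name, array layout, and weighting formula are hypothetical, not the paper's actual implementation:

```python
import numpy as np

def sample_minimal_set(endpoints, uncertainty, n=8, rng=None):
    """Draw a minimal set for one estimator iteration.

    endpoints   : (N, 2, 2) array, the two smear-path endpoints per smear,
                  in unknown temporal order (the per-smear ambiguity).
    uncertainty : (N,) predicted uncertainty per smear; lower is better.
    """
    if rng is None:
        rng = np.random.default_rng()
    w = 1.0 / (uncertainty + 1e-9)   # inverse-uncertainty weighting (assumed)
    w = w / w.sum()
    idx = rng.choice(len(endpoints), size=n, replace=False, p=w)
    pts = endpoints[idx].copy()
    flip = rng.random(n) < 0.5       # guess a time direction per smear
    pts[flip] = pts[flip, ::-1]      # swap the two endpoints where flipped
    return pts                       # feed to an ambiguity-aware F solve
```

The design point is that confident smears are drawn more often, while the unresolvable time direction is randomized rather than assumed.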
If this is right
- Camera motion during exposure can be summarized by a single fundamental matrix without requiring multiple sharp frames.
- The matrix supports direct motion segmentation on blurred images as a downstream task.
- Standard robust estimators must be altered to tolerate transpose ambiguity in the correspondences.
- Incorporating per-correspondence uncertainty from blur prediction improves sampling reliability over unweighted methods.
Where Pith is reading between the lines
- Existing deblurring pipelines could use the recovered matrix as an additional motion constraint to reduce artifacts.
- Single-frame motion analysis might extend to higher-order trajectory models if the smear representation is generalized beyond linear paths.
- Structure-from-motion pipelines could treat blurred frames as usable observations rather than discarding them.
Load-bearing premise
Smear paths must supply enough distinct, repeatable features to form reliable point matches across different moments inside the exposure despite noise and the unknown time direction.
What would settle it
Generate synthetic blurred images from known 3D camera trajectories, run the estimator, and check whether the recovered matrices satisfy the epipolar constraint on held-out points with error comparable to standard methods on sharp pairs; systematic failure above that threshold would disprove the claim.
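The proposed test can be sketched numerically: build the ground-truth fundamental matrix F = K^-T [t]_x R K^-1 from a known pose change inside the exposure, project points at both instants, and verify the epipolar residual. The intrinsics and motion below are hypothetical stand-ins:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

rng = np.random.default_rng(0)
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])          # hypothetical intrinsics
theta = 0.01                              # small rotation over the exposure
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.1, 0.0, 0.02])            # translation over the exposure

# Ground-truth fundamental matrix F = K^-T [t]_x R K^-1
Kinv = np.linalg.inv(K)
F = Kinv.T @ skew(t) @ R @ Kinv

X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(100, 3))  # 3D points
x1 = (K @ X.T).T                          # projections at exposure start
x2 = (K @ (R @ X.T + t[:, None])).T       # projections at exposure end
residuals = np.abs(np.einsum('ni,ij,nj->n', x2, F, x1))
assert residuals.max() < 1e-6             # ground-truth F passes exactly
```

An estimator run on correspondences sampled from such renders would be judged by how far its residuals exceed this noise-free baseline.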
Original abstract
In this paper, we introduce a challenging task: extracting a fundamental matrix from a single motion blurred image. For a camera moving in 3D during exposure, the smear paths in the blurry image contain cues and constraints on this motion. We demonstrate the feasibility of establishing correspondences between two time instances within the camera exposure window, and that these can be used to robustly infer a fundamental matrix, which summarizes the motion of the camera during the exposure time. The inferred fundamental matrix is unique up to a transpose, corresponding to an ambiguity of the direction of time. Due to this per-smear ambiguity, classic methods, such as the 8-point algorithm, are no longer usable. The proposed method modifies the estimation to work on time-direction ambiguous correspondences. To improve the robustness of the fundamental matrix estimation, we also propose to incorporate an uncertainty measurement in smear pattern prediction and use it in the sampling process of the estimator. Experiments on synthetic and real-world motion-blur datasets demonstrate that our approach is able to estimate the fundamental matrix encoding the 3D camera motion, from single frames. Practical applicability is demonstrated on the downstream task of motion segmentation.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper introduces the task of estimating a fundamental matrix from a single motion-blurred image. It establishes point correspondences along smear paths that represent camera motion at two distinct times within the exposure window, modifies the 8-point algorithm to handle the resulting time-direction ambiguity (yielding F unique up to transpose), and incorporates an uncertainty measure from smear prediction into the sampling process. Experiments on synthetic and real-world motion-blur datasets are reported to demonstrate feasibility, with an additional demonstration on a downstream motion-segmentation task.
Significance. If the central claim holds, the work would be significant for enabling 3D motion recovery from individual blurred frames in computer vision pipelines where sharp images are unavailable. The paper is credited for formulating a novel problem, explicitly addressing the per-smear ambiguity, and showing a practical downstream application to motion segmentation. The approach avoids circularity in its derivation and introduces an uncertainty-weighted sampler, which are positive technical contributions.
major comments (2)
- [Experiments section] The abstract and experiments claim that the method successfully estimates the fundamental matrix on synthetic and real-world motion-blur datasets and supports motion segmentation, yet no quantitative error metrics (e.g., mean epipolar distance, inlier ratios), baseline comparisons (standard 8-point algorithm or deblurring-based alternatives), or ablation results on the uncertainty threshold are reported. This absence directly undermines assessment of whether localization noise and time ambiguity are handled robustly enough for the central feasibility claim.
- [Method description] Smear correspondence and sampling: The claim that smear paths yield usable correspondences despite per-path time ambiguity requires that localization error remains small enough for the epipolar constraint to produce a stable F. The paper provides no quantitative breakdown of correspondence precision, fraction of usable smears, or sensitivity analysis for the free parameter 'uncertainty threshold for smear prediction', which is load-bearing for the robustness assertion.
minor comments (2)
- [Abstract] The abstract states that the inferred fundamental matrix is 'unique up to a transpose', but the main text does not clarify whether this ambiguity is resolved or propagated in the downstream motion-segmentation task.
- Notation for the uncertainty measurement and its integration into the modified sampler could be presented with an explicit equation or pseudocode for clarity.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback and for recognizing the novelty of the problem formulation, the handling of time-direction ambiguity, and the downstream application. We address each major comment below and will revise the manuscript to incorporate the requested quantitative evaluations.
Point-by-point responses
- Referee: [Experiments section] The abstract and experiments claim that the method successfully estimates the fundamental matrix on synthetic and real-world motion-blur datasets and supports motion segmentation, yet no quantitative error metrics (e.g., mean epipolar distance, inlier ratios), baseline comparisons (standard 8-point algorithm or deblurring-based alternatives), or ablation results on the uncertainty threshold are reported. This absence directly undermines assessment of whether localization noise and time ambiguity are handled robustly enough for the central feasibility claim.
Authors: We agree that the current experiments focus on qualitative demonstration and a downstream task, which limits rigorous assessment. In the revised manuscript we will add mean epipolar distances and inlier ratios on both the synthetic and real-world datasets. We will also include direct comparisons to the standard 8-point algorithm (applied to the ambiguous correspondences) and to deblurring-based alternatives where feasible, together with an ablation on the uncertainty threshold to quantify its contribution to robustness. revision: yes
- Referee: [Method description] Smear correspondence and sampling: The claim that smear paths yield usable correspondences despite per-path time ambiguity requires that localization error remains small enough for the epipolar constraint to produce a stable F. The paper provides no quantitative breakdown of correspondence precision, fraction of usable smears, or sensitivity analysis for the free parameter 'uncertainty threshold for smear prediction', which is load-bearing for the robustness assertion.
Authors: We concur that a quantitative characterization of correspondence quality is necessary to support the robustness claim. The revised paper will report measured localization precision along the extracted smear paths, the fraction of smears retained after uncertainty filtering, and a sensitivity analysis that varies the uncertainty threshold and shows its effect on the stability and accuracy of the recovered fundamental matrix. revision: yes
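The "mean epipolar distance" the revision promises is commonly computed as the Sampson distance, a first-order approximation of geometric epipolar error; assuming that standard choice (the authors' actual metric is not specified), a minimal sketch:

```python
import numpy as np

def sampson_distance(F, x1, x2):
    """First-order epipolar error (Hartley & Zisserman, Sec. 11.4).
    x1, x2 : (N, 3) homogeneous correspondences; returns (N,) distances."""
    Fx1 = x1 @ F.T                     # epipolar lines F @ x1_i in image 2
    Ftx2 = x2 @ F                      # epipolar lines F^T @ x2_i in image 1
    num = np.einsum('ni,ni->n', x2, Fx1) ** 2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / den
```

A perfect correspondence under the true F yields distance zero; the inlier ratio then follows from thresholding these distances.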
Circularity Check
No significant circularity: the method modifies the standard 8-point algorithm without reducing its outputs to fitted inputs or relying on self-citations.
full rationale
The paper's core contribution is a modified RANSAC-style estimator that handles per-smear time-direction ambiguity by allowing flipped correspondences and weighting by an uncertainty measure derived from smear prediction. This is an algorithmic adaptation of the classic 8-point algorithm rather than a derivation that reduces by construction to the same data or parameters. No equations are shown to equate the output fundamental matrix to a fitted quantity defined from the identical inputs, and no load-bearing uniqueness theorems or ansatzes are imported via self-citation. The approach remains self-contained against external benchmarks such as the standard fundamental matrix estimation literature.
Axiom & Free-Parameter Ledger
free parameters (1)
- uncertainty threshold for smear prediction
axioms (2)
- standard math: The fundamental matrix can be estimated from point correspondences via the 8-point algorithm
- domain assumption: Smear paths provide geometrically valid point matches at different instants during exposure
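The first axiom can be made concrete. A minimal, unnormalized 8-point solve in the textbook form (Longuet-Higgins [24]; Hartley & Zisserman [16]) — the classic baseline the paper modifies, not the paper's ambiguity-aware variant:

```python
import numpy as np

def eight_point(x1, x2):
    """Textbook unnormalized 8-point solve for F in x2^T F x1 = 0.
    x1, x2 : (N >= 8, 3) homogeneous correspondences, assumed
    well-conditioned (Hartley normalization omitted for brevity)."""
    # Each correspondence contributes one row: kron(x2_i, x1_i) . vec(F) = 0
    A = np.stack([np.kron(p2, p1) for p1, p2 in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)           # null vector -> candidate F
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0                         # enforce the rank-2 constraint
    return U @ np.diag(s) @ Vt
```

With time-ambiguous correspondences each row of A may belong to either F or F^T, which is why this solve must be modified rather than applied directly.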
Reference graph
Works this paper leans on
- [1] Argaw, D.M., Kim, J., Rameau, F., Cho, J.W., Kweon, I.S.: Optical flow estimation from a single motion-blurred image. In: AAAI (2021)
- [2] Bigün, J., Granlund, G.H.: Optimal orientation detection of linear symmetry. In: ICCV (1987)
- [3] Brachmann, E., Rother, C.: Learning less is more - 6D camera localization via 3D surface regression. In: CVPR (2018)
- [4] Brooks, T., Barron, J.T.: Learning to synthesize motion blur. In: CVPR (2019)
- [5] Chen, W.G., Nandhakumar, N., Martin, W.N.: Image motion estimation from motion smear - a new computational model. IEEE Transactions on Pattern Analysis and Machine Intelligence (1996)
- [6] Chum, O., Werner, T., Matas, J.: Two-view geometry estimation unaffected by a dominant plane. In: CVPR (2005)
- [7] Crandall, D., Owens, A., Snavely, N., Huttenlocher, D.: Discrete-continuous optimization for large-scale structure from motion. In: CVPR (2011)
- [8] Dai, S., Wu, Y.: Motion from blur. In: CVPR (2008)
- [9] Ding, Y., Vávra, V., Bhayani, S., Wu, Q., Yang, J., Kukelova, Z.: Fundamental matrix estimation using relative depths. In: ECCV (2024)
- [10] Dosovitskiy, A., Fischer, P., Ilg, E., Hausser, P., Hazirbas, C., Golkov, V., Van Der Smagt, P., Cremers, D., Brox, T.: FlowNet: Learning optical flow with convolutional networks. In: ICCV (2015)
- [11] Dutt Jain, S., Xiong, B., Grauman, K.: FusionSeg: Learning to combine motion and appearance for fully automatic segmentation of generic objects in videos. In: CVPR (2017)
- [12] Fang, Z., Wu, F., Dong, W., Li, X., Wu, J., Shi, G.: Self-supervised non-uniform kernel estimation with flow-based motion prior for blind image deblurring. In: CVPR (2023)
- [13] Faugeras, O.D.: What can be seen in three dimensions with an uncalibrated stereo rig? In: ECCV. pp. 563-578. Springer (1992)
- [14] Fergus, R., Singh, B., Hertzmann, A., Roweis, S.T., Freeman, W.T.: Removing camera shake from a single photograph. In: ACM SIGGRAPH 2006 Papers (2006)
- [15] Gong, D., Yang, J., Liu, L., Zhang, Y., Reid, I., Shen, C., Van Den Hengel, A., Shi, Q.: From motion blur to motion flow: A deep learning solution for removing heterogeneous motion blur. In: CVPR (2017)
- [16] Hartley, R.I., Zisserman, A.: Multiple View Geometry in Computer Vision. Cambridge University Press, 2nd edn. (2004)
- [17] Huang, T.S., Netravali, A.N.: Motion and structure from feature correspondences: A review. Proceedings of the IEEE (1994)
- [18] Ilg, E., Cicek, O., Galesso, S., Klein, A., Makansi, O., Hutter, F., Brox, T.: Uncertainty estimates and multi-hypotheses networks for optical flow. In: ECCV (2018)
- [19] Jafarian, Y., Yao, Y., Park, H.S.: MONET: Multiview semi-supervised keypoint via epipolar divergence. In: ICCV (2019)
- [20] Jiang, H., Sun, D., Jampani, V., Yang, M.H., Learned-Miller, E., Kautz, J.: Super SloMo: High quality estimation of multiple intermediate frames for video interpolation. In: CVPR (2018)
- [21] Köhler, R., Hirsch, M., Mohler, B., Schölkopf, B., Harmeling, S.: Recording and playback of camera shake: Benchmarking blind deconvolution with a real-world database. In: ECCV. Springer (2012)
- [22] Lakshminarayanan, B., Pritzel, A., Blundell, C.: Simple and scalable predictive uncertainty estimation using deep ensembles. NeurIPS 30 (2017)
- [23] Lind, S.K., Xiong, Z., Forssén, P.E., Krüger, V.: Uncertainty quantification metrics for deep regression. Pattern Recognition Letters 186 (2024)
- [24] Longuet-Higgins, H.C.: A computer algorithm for reconstructing a scene from two projections. Nature (1981)
- [25] Luong, Q.T., Faugeras, O.D.: The fundamental matrix: Theory, algorithms, and stability analysis. International Journal of Computer Vision (1996)
- [26] Luong, Q.T.: Matrice fondamentale et calibration visuelle sur l'environnement. Vers une plus grande autonomie des systèmes robotiques. Ph.D. thesis, Université Paris Sud-Paris XI (1992)
- [27] Mayer, N., Ilg, E., Hausser, P., Fischer, P., Cremers, D., Dosovitskiy, A., Brox, T.: A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In: CVPR (2016)
- [28] Menze, M., Geiger, A.: Object scene flow for autonomous vehicles. In: CVPR (2015)
- [29] Nah, S., Hyun Kim, T., Mu Lee, K.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: CVPR (2017)
- [30] Nistér, D.: Preemptive RANSAC for live structure and motion estimation. Machine Vision and Applications (2005)
- [31] Nistér, D., Naroditsky, O., Bergen, J.: Visual odometry. In: CVPR (2004)
- [32] Pan, L., Barath, D., Pollefeys, M., Schönberger, J.L.: Global structure-from-motion revisited. In: ECCV (2024)
- [33] Poursaeed, O., Yang, G., Prakash, A., Fang, Q., Jiang, H., Hariharan, B., Belongie, S.: Deep fundamental matrix estimation without correspondences. In: ECCV Workshops (2018)
- [34] Purohit, K., Shah, A., Rajagopalan, A.: Bringing alive blurred moments. In: CVPR (2019)
- [35] Ranftl, R., Koltun, V.: Deep fundamental matrix estimation. In: ECCV (2018)
- [36] Rekleitis, I.M.: Optical flow recognition from the power spectrum of a single blurred image. In: ICIP. IEEE (1996)
- [37] Ronneberger, O., Fischer, P., Brox, T.: U-Net: Convolutional networks for biomedical image segmentation. In: MICCAI. Springer (2015)
- [38] Sampson, P.D.: Fitting conic sections to "very scattered" data: An iterative refinement of the Bookstein algorithm. Computer Graphics and Image Processing (1982)
- [39] Schonberger, J.L., Frahm, J.M.: Structure-from-motion revisited. In: CVPR (2016)
- [40] Sun, D., Yang, X., Liu, M.Y., Kautz, J.: PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In: CVPR (2018)
- [41] Wang, Q., Zhou, X., Hariharan, B., Snavely, N.: Learning feature descriptors using camera pose supervision. In: ECCV. Springer (2020)
- [42] Wexler, Y., Fitzgibbon, A.W., Zisserman, A.: Learning epipolar geometry from image sequences. In: CVPR (2003)
- [43] Whyte, O., Sivic, J., Zisserman, A., Ponce, J.: Non-uniform deblurring for shaken images. International Journal of Computer Vision (2012)
- [44] Zhang, Z.: Determining the epipolar geometry and its uncertainty: A review. International Journal of Computer Vision 27 (1998)
- [45] Zhong, Z., Cao, M., Ji, X., Zheng, Y., Sato, I.: Blur interpolation transformer for real-world motion from blur. In: CVPR (2023)