Dual-Exposure Imaging with Events
Pith reviewed 2026-05-10 16:04 UTC · model grok-4.3
The pith
Event-camera data aligns short- and long-exposure images, removing motion artifacts in low-light reconstruction.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The E-DEI network reconstructs high-quality low-light images from dual-exposure pairs plus events by decomposing the task into event-guided motion deblurring and enhancement, using a dual-path architecture whose Dual-path Feature Alignment and Fusion module aligns and merges features across exposures with the help of high-temporal-resolution event information.
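As a reading aid, here is a minimal PyTorch skeleton of that decomposition. The encoder layout, the five-bin event voxelization, and the stand-in fusion layer are our assumptions, not the paper's verified architecture.

```python
# Hedged skeleton of the dual-path decomposition: one path enhances the
# short exposure, the other deblurs the long exposure, with events feeding
# the fusion stage. All shapes and module choices are illustrative.
import torch
import torch.nn as nn

class DualPathEDEI(nn.Module):
    def __init__(self, channels: int = 64, event_bins: int = 5):
        super().__init__()
        # Enhancement path: short exposure (sharp but dark and noisy).
        self.enc_short = nn.Conv2d(3, channels, 3, padding=1)
        # Deblurring path: long exposure (bright but motion-blurred).
        self.enc_long = nn.Conv2d(3, channels, 3, padding=1)
        # Events voxelized into a fixed number of temporal bins (assumed).
        self.enc_event = nn.Conv2d(event_bins, channels, 3, padding=1)
        # Naive concat-and-project fusion; the paper replaces this role
        # with its DFAF module (sketched in the next section).
        self.fuse = nn.Conv2d(3 * channels, channels, 1)
        self.dec = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, short_img, long_img, event_voxels):
        f = torch.cat([self.enc_short(short_img),
                       self.enc_long(long_img),
                       self.enc_event(event_voxels)], dim=1)
        return self.dec(self.fuse(f))
```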
What carries the argument
The Dual-path Feature Alignment and Fusion (DFAF) module, which takes event streams as auxiliary input to align and fuse features extracted from the short-exposure and long-exposure images.
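A hedged sketch of what such an alignment-and-fusion block could look like, assuming deformable-convolution alignment driven by event features; the paper's actual DFAF internals may differ, and the channel sizes and gating scheme here are illustrative. This block could replace the naive fusion layer in the skeleton above.

```python
# Assumed design for an event-assisted alignment-and-fusion block, not the
# paper's verified DFAF implementation. Event features help predict
# deformable-conv offsets that warp short-exposure features toward the
# long-exposure path before a gated merge.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class EventGuidedFusion(nn.Module):
    def __init__(self, channels: int = 64, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # Deformable conv needs 2 offset coordinates per kernel tap.
        self.offset_head = nn.Conv2d(3 * channels, 2 * kernel_size**2,
                                     kernel_size, padding=pad)
        self.align = DeformConv2d(channels, channels, kernel_size, padding=pad)
        # Gated fusion of the aligned short-exposure and long-exposure paths.
        self.gate = nn.Sequential(nn.Conv2d(2 * channels, channels, 1),
                                  nn.Sigmoid())
        self.merge = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, f_short, f_long, f_event):
        # Condition offsets on both image paths plus the event features,
        # which carry the inter-/intra-frame motion cues.
        offsets = self.offset_head(torch.cat([f_short, f_long, f_event], dim=1))
        f_short_aligned = self.align(f_short, offsets)
        g = self.gate(torch.cat([f_short_aligned, f_long], dim=1))
        return self.merge(torch.cat([g * f_short_aligned,
                                     (1 - g) * f_long], dim=1))
```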
If this is right
- Dual-exposure pairs become usable in dynamic low-light environments where motion previously made them unreliable.
- Motion deblurring and low-light enhancement can be solved jointly rather than sequentially when event data is available.
- A new real-world dataset of paired low-light and normal-light images with events supports training and benchmarking of similar methods.
- The dual-path design with event-assisted fusion generalizes across multiple test sets, indicating the alignment step is the main source of improvement.
Where Pith is reading between the lines
- The same event-assisted alignment principle could be applied to other multi-frame fusion tasks such as HDR merging or burst denoising.
- In robotics or surveillance, pairing an event sensor with an existing dual-exposure camera might yield reliable vision with no additional hardware beyond the event sensor itself.
- If event density is low in very dark regions, performance may degrade unless the network learns to fall back to image-only cues; a minimal gating sketch follows this list.
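One way such a fallback could work, purely as our assumption (the paper describes no such mechanism): accumulate a per-pixel event-count map and turn it into a confidence that blends event-guided features with image-only ones.

```python
# Hypothetical event-density fallback, not a mechanism from the paper.
# Near-zero density in very dark regions pushes the blend toward
# image-only features; the saturation constant is illustrative.
import torch

def density_gated_blend(f_event_guided: torch.Tensor,
                        f_image_only: torch.Tensor,
                        event_count: torch.Tensor,
                        saturation: float = 5.0) -> torch.Tensor:
    """event_count: (N, 1, H, W) events accumulated per pixel over the exposure."""
    # Confidence rises with event density and saturates at `saturation` events/pixel.
    conf = torch.clamp(event_count / saturation, 0.0, 1.0)
    return conf * f_event_guided + (1.0 - conf) * f_image_only
```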
Load-bearing premise
Event streams supply sufficiently complete and accurate motion information between and within the two exposure frames to correct spatial displacements and exposure mismatches without creating new artifacts or needing scene-specific tuning.
What would settle it
Capture dual-exposure pairs plus events in a scene with rapid object motion, then check whether the output still contains visible ghosting or residual blur when compared pixel-by-pixel against a static reference image taken at the same average light level.
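A minimal sketch of that check, assuming aligned float images in [0, 1]; the PSNR/SSIM metrics and the 0.1 error threshold are our choices, not the paper's protocol. Large connected regions of high error point to ghosting or residual blur rather than noise.

```python
# Pixel-by-pixel comparison of a reconstruction against a static reference
# frame taken at the same average light level. Metric and threshold choices
# are illustrative.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def ghosting_report(output: np.ndarray, reference: np.ndarray,
                    err_thresh: float = 0.1) -> dict:
    """output, reference: float arrays in [0, 1] with shape (H, W, 3)."""
    psnr = peak_signal_noise_ratio(reference, output, data_range=1.0)
    ssim = structural_similarity(reference, output, channel_axis=-1,
                                 data_range=1.0)
    # Per-pixel error map; the fraction of pixels above threshold is a
    # crude proxy for visible ghosting or residual blur.
    err = np.abs(output - reference).mean(axis=-1)
    frac_bad = float((err > err_thresh).mean())
    return {"psnr": psnr, "ssim": ssim, "frac_pixels_above_thresh": frac_bad}
```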
Original abstract
By combining complementary benefits of short- and long-exposure images, Dual-Exposure Imaging (DEI) enhances image quality in low-light scenarios. However, existing DEI approaches inevitably suffer from producing artifacts due to spatial displacement from scene motion and image feature discrepancies from different exposure times. To tackle this problem, we propose a novel Event-based DEI (E-DEI) algorithm, which reconstructs high-quality images from dual-exposure image pairs and events, leveraging high temporal resolution of event cameras to provide accurate inter-/intra-frame dynamic information. Specifically, we decompose this complex task into an integration of two sub-tasks, i.e., event-based motion deblurring and low-light image enhancement tasks, which guides us to design E-DEI network as a dual-path parallel feature propagation architecture. We propose a Dual-path Feature Alignment and Fusion (DFAF) module to effectively align and fuse features extracted from dual-exposure images with assistance of events. Furthermore, we build a real-world Dataset containing Paired low-/normal-light Images and Events (PIED). Experiments on multiple datasets show the superiority of our method. The code and dataset are available at github.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes an Event-based Dual-Exposure Imaging (E-DEI) algorithm that reconstructs high-quality low-light images from paired short- and long-exposure images plus event streams. It decomposes the problem into event-based motion deblurring and low-light enhancement, implemented via a dual-path parallel feature propagation network with a Dual-path Feature Alignment and Fusion (DFAF) module that uses events for inter- and intra-frame alignment. The authors also introduce the PIED real-world dataset of paired low-/normal-light images and events, and claim superior performance on multiple datasets.
Significance. If the experimental claims hold, the work would be significant for low-light imaging applications by showing how event cameras' high temporal resolution can mitigate motion artifacts and exposure discrepancies without scene-specific tuning. The release of the PIED dataset would also be a concrete contribution for future research in event-based vision.
Major comments (2)
- [Experiments] Experiments section: the abstract and results claim superiority on multiple datasets and introduce the PIED dataset, yet the provided description contains no quantitative metrics (PSNR, SSIM, etc.), ablation studies on the DFAF module or event contribution, or error analysis; this leaves the central claim of effective alignment without new artifacts unverified.
- [Methods (DFAF)] Methods, DFAF module description: the design assumes events supply sufficiently complete and accurate inter-/intra-frame dynamic information for feature alignment, but no experiments or analysis test robustness when event density is low (as occurs in low-light, low-contrast, or slowly varying regions) or when events are corrupted by typical sensor noise; this directly bears on whether the dual-path decomposition avoids introducing new artifacts.
Minor comments (2)
- [Introduction] The notation for 'inter-/intra-frame' dynamic information is used repeatedly but never formally defined or illustrated with an example of what intra-frame cues events are expected to provide.
- [Abstract] The paper states that code and dataset are available at github but provides no specific link or repository identifier in the text.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback. We will revise the manuscript to provide more detailed experimental validation and robustness analysis as requested.
Point-by-point responses
Referee: [Experiments] Experiments section: the abstract and results claim superiority on multiple datasets and introduce the PIED dataset, yet the provided description contains no quantitative metrics (PSNR, SSIM, etc.), ablation studies on the DFAF module or event contribution, or error analysis; this leaves the central claim of effective alignment without new artifacts unverified.
Authors: We acknowledge that the experiments section in the submitted version may not have presented the quantitative results with sufficient detail. The paper does report comparisons on multiple datasets using standard metrics such as PSNR and SSIM, and includes the PIED dataset. However, to fully address this concern, we will expand the section in the revised manuscript to include explicit quantitative tables, comprehensive ablation studies on the DFAF module and the contribution of event data, and an error analysis demonstrating that the alignment does not introduce new artifacts. revision: yes
Referee: [Methods (DFAF)] Methods, DFAF module description: the design assumes events supply sufficiently complete and accurate inter-/intra-frame dynamic information for feature alignment, but no experiments or analysis test robustness when event density is low (as occurs in low-light, low-contrast, or slowly varying regions) or when events are corrupted by typical sensor noise; this directly bears on whether the dual-path decomposition avoids introducing new artifacts.
Authors: We agree with the referee that validating the assumption regarding event data completeness is important. Our current experiments on the real-world PIED dataset include challenging low-light conditions where event density can vary. Nevertheless, we will add specific robustness tests in the revised version, including controlled experiments with reduced event density and added noise, to analyze the performance of the DFAF module and confirm that the dual-path approach mitigates artifacts effectively even under these conditions. revision: yes
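For concreteness, a sketch of the kind of controlled perturbation such a robustness test could use; the (x, y, t, polarity) event layout, drop probability, and noise rate are assumptions, not the authors' protocol.

```python
# Hypothetical stress test: thin the event stream (simulating low density in
# dark, low-contrast regions) and inject spurious noise events, then re-run
# evaluation on the perturbed input.
import numpy as np

def perturb_events(events: np.ndarray, keep_prob: float = 0.5,
                   noise_rate: float = 0.05, sensor_hw=(480, 640),
                   rng=np.random.default_rng(0)) -> np.ndarray:
    """events: (N, 4) float array of (x, y, t, polarity) rows, N > 0."""
    # Reduced density: randomly drop events.
    kept = events[rng.random(len(events)) < keep_prob]
    # Sensor noise: scatter spurious events uniformly over the time span.
    n_noise = int(noise_rate * len(events))
    t_lo, t_hi = events[:, 2].min(), events[:, 2].max()
    noise = np.column_stack([
        rng.integers(0, sensor_hw[1], n_noise),   # x
        rng.integers(0, sensor_hw[0], n_noise),   # y
        rng.uniform(t_lo, t_hi, n_noise),         # t
        rng.choice([-1.0, 1.0], n_noise),         # polarity
    ])
    out = np.concatenate([kept, noise], axis=0)
    return out[np.argsort(out[:, 2])]             # restore time order
```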
Circularity Check
No circularity: empirical network design with external validation
Full rationale
The manuscript proposes an empirical dual-path neural network (E-DEI with DFAF module) for event-assisted dual-exposure imaging and introduces a new paired dataset (PIED). No equations, parameter-fitting steps, or first-principles derivations appear; the architecture is presented as a design choice justified by task decomposition and empirical superiority on multiple datasets. No self-citations are invoked as load-bearing uniqueness theorems or ansatzes, and no predictions reduce to fitted inputs by construction. The central claims rest on external experimental benchmarks rather than any closed self-referential loop.