FLARE-BO: Fused Luminance and Adaptive Retinex Enhancement via Bayesian Optimisation for Low-Light Robotic Vision
Pith reviewed 2026-05-09 21:33 UTC · model grok-4.3
The pith
FLARE-BO extends Bayesian optimization to eight parameters for improved low-light image enhancement in robotic vision without training.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
By expanding the parameter space to eight dimensions and applying principled Bayesian optimization with Gaussian Processes, FLARE-BO achieves better per-image adaptive enhancement than the prior three-parameter method or other training-free approaches, as demonstrated by superior performance on the LOL dataset.
What carries the argument
Bayesian optimization using Gaussian Processes over an eight-parameter space for fused luminance and adaptive Retinex enhancement, incorporating unit hypercube normalization and Log Expected Improvement acquisition.
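A minimal sketch of that machinery, assuming the BoTorch/GPyTorch stack the paper cites and a hypothetical black-box function `enhance_and_score` that applies the eight enhancement operations at a given parameter vector and returns the image-quality objective. The 8 + 17 = 25 evaluation budget matches the figure quoted in the rebuttal below; restart and raw-sample counts are illustrative.

```python
import torch
from torch.quasirandom import SobolEngine
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition.analytic import LogExpectedImprovement
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

DIM = 8  # gamma, illumination, chroma denoise, bilateral, NLM, AWB, smoothing, ...
bounds = torch.stack([torch.zeros(DIM), torch.ones(DIM)]).double()  # unit hypercube

def optimise_image(enhance_and_score, n_init=8, n_iter=17):
    # Sobol quasi-random initialisation inside the unit hypercube
    X = SobolEngine(dimension=DIM, scramble=True).draw(n_init).double()
    Y = torch.tensor([[enhance_and_score(x.numpy())] for x in X]).double()

    for _ in range(n_iter):
        # Objective standardisation: zero mean, unit variance
        Y_std = (Y - Y.mean()) / Y.std().clamp_min(1e-9)
        gp = SingleTaskGP(X, Y_std)
        fit_gpytorch_mll(ExactMarginalLogLikelihood(gp.likelihood, gp))

        # Log Expected Improvement acquisition over the standardised objective
        acq = LogExpectedImprovement(model=gp, best_f=Y_std.max())
        cand, _ = optimize_acqf(acq, bounds=bounds, q=1,
                                num_restarts=5, raw_samples=64)
        y_new = torch.tensor([[enhance_and_score(cand.squeeze(0).numpy())]])
        X, Y = torch.cat([X, cand]), torch.cat([Y, y_new.double()])

    return X[Y.argmax()]  # best eight-parameter vector found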
If this is right
- Robotic vision systems can adaptively enhance images on a per-image basis using a wider range of enhancement operations.
- Competitive performance is achieved without requiring any training data or learned models.
- The framework allows for the inclusion of illumination decomposition and white balance correction that were missing before.
- Edge preservation improves by combining chrominance denoising with bilateral and NLM filtering, as illustrated in the sketch after this list.
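A hedged OpenCV illustration of that combination; the function name and parameter defaults are placeholders, not the paper's settings. In `cv2.fastNlMeansDenoisingColored`, the `hColor` argument sets the chrominance denoising strength, and a subsequent bilateral pass smooths residual noise while its range kernel protects edges.

```python
import cv2

def denoise_preserving_edges(bgr, h_luma=5.0, h_chroma=12.0,
                             d=7, sigma_color=40.0, sigma_space=10.0):
    # NLM with separate luma/chroma strengths: a larger hColor targets
    # chrominance noise, which dominates in low-light captures
    nlm = cv2.fastNlMeansDenoisingColored(bgr, None, h_luma, h_chroma, 7, 21)
    # Bilateral filtering smooths remaining flat-region noise, but its
    # range kernel (sigma_color) leaves strong edges largely untouched
    return cv2.bilateralFilter(nlm, d, sigma_color, sigma_space)
```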
Where Pith is reading between the lines
- Integrating this enhancement as a preprocessing step could boost accuracy in downstream tasks like object detection or SLAM in low light.
- Further efficiency gains might come from approximating the optimization for faster robotic deployment.
- Similar optimization could extend to other computer vision challenges, such as low-contrast or noisy images in other domains.
Load-bearing premise
The objective function used in the Bayesian optimization accurately reflects improvements that matter for robotic vision tasks, and expanding to eight parameters consistently improves results without adding artifacts or instability in varied low-light scenes.
What would settle it
Running the method on a new set of low-light images from a robotic platform and observing either no improvement or outright degradation in a downstream task metric, such as path-planning success rate or detection precision, relative to the baseline three-parameter method.
Original abstract
Reliable visual perception under low illumination remains a core challenge for autonomous robotic systems, where degraded image quality directly compromises navigation, inspection, and other operations. A recent training-free approach showed that Bayesian optimisation with Gaussian Processes can adaptively select brightness, contrast, and denoising parameters on a per-image basis, achieving competitive enhancement without any learned model. However, that framework is limited to three parameters, applies no illumination decomposition or white balance correction, and relies on Non-Local Means denoising, which tends to over-smooth edges under noisy conditions. This paper proposes FLARE-BO (Fused Luminance and Adaptive Retinex Enhancement via Bayesian Optimisation), an extended framework that jointly optimises eight parameters spanning gamma correction, LIME-style illumination normalisation, chrominance denoising, bilateral filtering, NLM denoising, Grey-World automatic white balance, and adaptive post-smoothing. The search engine employs unit-hypercube parameter normalisation, objective standardisation, Sobol quasi-random initialisation, and Log Expected Improvement acquisition for principled exploration of the expanded space. Performance of the proposed method is benchmarked on the Low-Light paired dataset (LOL), and the results show marked improvements over existing methods that were not specifically trained on this dataset.
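The abstract names the eight operations but not their formulations. The sketch below chains textbook versions of four of them (a gamma curve, a max-RGB illumination map as a LIME-style stand-in, Grey-World gains, and Gaussian post-smoothing) as one plausible reading; every parameter name and the Gaussian refinement of the illumination map are assumptions, and the two denoising stages appear in the separate sketch above.

```python
import cv2
import numpy as np

def enhance(bgr, gamma, illum_strength, awb_strength, smooth_sigma):
    """Illustrative version of four of the eight stages; the denoising
    stages are omitted here for brevity."""
    img = bgr.astype(np.float32) / 255.0

    # 1) Gamma correction: brightens shadows for gamma > 1
    img = np.power(img, 1.0 / gamma)

    # 2) LIME-style illumination normalisation: estimate the illumination
    #    map as the per-pixel channel maximum, refine it (Gaussian blur is
    #    a stand-in for LIME's structure-aware refinement), then divide
    illum = cv2.GaussianBlur(img.max(axis=2), (0, 0), 15)
    illum = np.clip(illum, 0.05, 1.0)[..., None]
    img = img / (illum ** illum_strength)

    # 3) Grey-World automatic white balance: scale each channel toward
    #    the common mean, blended by awb_strength
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / np.clip(means, 1e-6, None)
    img = img * (1 + awb_strength * (gains - 1))

    # 4) Adaptive post-smoothing: light Gaussian smoothing
    if smooth_sigma > 0:
        img = cv2.GaussianBlur(img, (0, 0), smooth_sigma)

    return (np.clip(img, 0, 1) * 255).astype(np.uint8)
```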
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes FLARE-BO, extending a prior three-parameter Bayesian optimization framework for low-light enhancement to an eight-parameter space covering gamma correction, LIME-style illumination normalization, chrominance denoising, bilateral filtering, NLM denoising, Grey-World white balance, and adaptive post-smoothing. The optimizer employs unit-hypercube normalization, objective standardization, Sobol quasi-random initialization, and Log Expected Improvement acquisition. Performance is benchmarked on the LOL paired dataset, with the abstract claiming marked improvements over existing methods not specifically trained on this dataset, aimed at robotic vision under low illumination.
Significance. If the performance gains are substantiated with quantitative metrics and the per-image optimization is shown to meet real-time constraints, the method could provide a practical training-free adaptive enhancement pipeline for robotic perception, addressing gaps in prior limited-parameter approaches by incorporating illumination decomposition and automatic white balance.
major comments (2)
- [Abstract] The claim of 'marked improvements' on the LOL dataset is unsupported by quantitative metrics (PSNR, SSIM, or similar), details of the objective function, baseline implementations, statistical tests, or error analysis. This absence is load-bearing for the central empirical claim and prevents verification that the data support the stated gains.
- [Abstract] The per-image application of eight-parameter Bayesian optimization (Sobol + LogEI) is presented as suitable for robotic vision, yet no iteration counts, wall-clock timings, early-stopping criteria, or surrogate approximations are reported. This is load-bearing for the applicability assertion, as the expanded parameter space may incur latency incompatible with frame-rate navigation or inspection tasks.
minor comments (1)
- [Abstract] The description of 'objective standardisation' and 'unit hypercube parameter normalisation' would benefit from explicit cross-reference to the corresponding equations or pseudocode in the methods section for reproducibility.
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed comments on our manuscript. We address each major comment point by point below and indicate the revisions planned for the next version of the paper.
Point-by-point responses
- Referee: [Abstract] The claim of 'marked improvements' on the LOL dataset is unsupported by quantitative metrics (PSNR, SSIM, or similar), details of the objective function, baseline implementations, statistical tests, or error analysis. This absence is load-bearing for the central empirical claim and prevents verification that the data support the stated gains.
Authors: We agree that the abstract claim would be more verifiable if supported by key quantitative results. The full manuscript contains PSNR and SSIM evaluations on the LOL dataset against untrained baselines (including standard Retinex, LIME, and related methods), along with a description of the composite no-reference objective function (luminance, contrast, and naturalness terms) and baseline implementations. To address the concern directly, we will revise the abstract to incorporate the main performance numbers, a concise note on the objective function and baselines, and a statement that gains are consistent across the test images. Formal statistical tests and per-image error bars can be added to the abstract if space allows, or retained in the results section with a cross-reference. revision: yes
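The response names luminance, contrast, and naturalness terms without giving their forms; one assumed instantiation, with a placeholder luminance target, placeholder weights, and a clipping-based naturalness proxy, might look like this:

```python
import cv2
import numpy as np

def composite_score(bgr, target_luma=0.55, weights=(1.0, 1.0, 1.0)):
    # All three terms, the target, and the weights are illustrative
    # assumptions, not the paper's objective definition.
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    luminance = -abs(gray.mean() - target_luma)       # pull exposure to a mid tone
    contrast = gray.std()                             # reward global contrast
    clipped = np.mean((gray < 0.02) | (gray > 0.98))  # crushed/blown pixel fraction
    naturalness = -clipped                            # penalise clipping artefacts
    return (weights[0] * luminance + weights[1] * contrast
            + weights[2] * naturalness)
```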
- Referee: [Abstract] The per-image application of eight-parameter Bayesian optimization (Sobol + LogEI) is presented as suitable for robotic vision, yet no iteration counts, wall-clock timings, early-stopping criteria, or surrogate approximations are reported. This is load-bearing for the applicability assertion, as the expanded parameter space may incur latency incompatible with frame-rate navigation or inspection tasks.
Authors: We acknowledge that explicit efficiency metrics are necessary to support the robotic-vision use case. The manuscript already specifies the optimization configuration (unit-hypercube normalization, Sobol initialization, Log Expected Improvement acquisition, and Gaussian-process surrogate), but does not report concrete iteration counts or timings. In the revision we will add these details: typical iteration budget (approximately 25 total evaluations), measured average wall-clock time per image on standard CPU hardware, and the early-stopping criterion based on acquisition-function improvement threshold. We will also note that the surrogate model keeps per-iteration cost low and discuss that further reductions (e.g., fewer iterations or GPU acceleration) may be required for strict real-time constraints. revision: yes
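As a sketch of how such an early-stopping criterion could slot into the optimisation loop: since LogEI is the log of the expected improvement in standardised objective units, thresholding it halts the search once the surrogate predicts negligible further gain. The threshold value below is a hypothetical choice, not the paper's.

```python
import math

# Hypothetical early stop, checked after optimize_acqf returns
# (candidate, acq_value): halt once the model expects less than
# ~1e-3 standard deviations of further improvement.
STOP_THRESHOLD = math.log(1e-3)

def should_stop(log_ei_value: float) -> bool:
    return log_ei_value < STOP_THRESHOLD
```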
Circularity Check
Minor self-citation of prior BO framework but central claims remain independent
full rationale
The paper extends an existing training-free Bayesian optimization approach for low-light enhancement by increasing the parameter space to eight dimensions and incorporating additional operations such as LIME-style illumination normalization and Grey-World white balance. It evaluates the resulting enhancements on the external LOL paired dataset rather than on data used to define or fit the method. No equations or claims reduce the reported performance gains to quantities defined in terms of the optimized parameters themselves, and the cited prior framework is not invoked as a uniqueness theorem or load-bearing justification for the new results. The argument therefore rests on external benchmarks rather than on self-referential definitions.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption: Bayesian optimization with the chosen acquisition function can reliably identify effective parameter sets in the eight-dimensional space for image enhancement.
Reference graph
Works this paper leans on
- [1] H. Wu, Y. Hou, W. Xu, and M. Zhao, "Ultra-low-light-level digital still camera for autonomous underwater vehicle," Optical Engineering, vol. 58, no. 1, pp. 013106–013106, 2019.
- [2] S. Schwaiger, L. Muster, G. Novotny, M. Schebek, W. Wöber, S. Thalhammer, and C. Böhm, "UGV-CBRN: An unmanned ground vehicle for chemical, biological, radiological, and nuclear disaster response," arXiv preprint arXiv:2406.14385, 2024.
- [3] M. Abdeh, F. Abut, and F. Akay, "Autonomous navigation in search and rescue simulated environment using deep reinforcement learning," Balkan Journal of Electrical and Computer Engineering, vol. 9, no. 2, pp. 92–98, 2021.
- [4] G. F. Rodrigues, A. B. Viana, L. A. Martinho, J. M. Calvalcanti, J. L. Pio, and F. G. Oliveira, "Low-light image quality enhancement through Bayesian optimization using Gaussian processes," in 2025 Brazilian Conference on Robotics (CROS), vol. 1. IEEE, 2025, pp. 1–6.
- [5] C. Wei, W. Wang, W. Yang, and J. Liu, "Deep Retinex decomposition for low-light enhancement," arXiv preprint arXiv:1808.04560, 2018.
- [6] L. H. Pham, D. N.-N. Tran, and J. W. Jeon, "Low-light image enhancement for autonomous driving systems using DriveRetinex-Net," in 2020 IEEE International Conference on Consumer Electronics–Asia (ICCE-Asia). IEEE, 2020, pp. 1–5.
- [7] X. Guo, Y. Li, and H. Ling, "LIME: Low-light image enhancement via illumination map estimation," IEEE Transactions on Image Processing, vol. 26, no. 2, pp. 982–993, 2016.
- [8] W. Kubinger, M. Vincze, and M. Ayromlou, "The role of gamma correction in colour image processing," in 9th European Signal Processing Conference (EUSIPCO 1998). IEEE, 1998, pp. 1–4.
- [9] C.-H. Lee, J.-L. Shih, C.-C. Lien, and C.-C. Han, "Adaptive multiscale Retinex for image contrast enhancement," in 2013 International Conference on Signal-Image Technology & Internet-Based Systems (SITIS). IEEE Computer Society, 2013, pp. 43–50.
- [10] C. Wei, W. Wang, and Y. Liu, "RetinexNet: Deep Retinex decomposition for low-light enhancement," in British Machine Vision Conference (BMVC), 2018, pp. 182–190.
- [11] C. Guo, C. Li, J. Guo, C. C. Loy, J. Hou, S. Kwong, and R. Cong, "Zero-reference deep curve estimation for low-light image enhancement," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 1780–1789.
- [12] Y. Jiang, X. Gong, D. Liu, Y. Cheng, C. Fang, X. Shen, J. Yang, P. Zhou, and Z. Wang, "EnlightenGAN: Deep light enhancement without paired supervision," IEEE Transactions on Image Processing, vol. 30, pp. 2340–2349, 2021.
- [13] S. W. Zamir, A. Arora, S. Khan, M. Hayat, F. S. Khan, M.-H. Yang, and L. Shao, "Learning enriched features for real image restoration and enhancement," in European Conference on Computer Vision. Springer, 2020, pp. 492–511.
- [14] W. Wu, J. Weng, P. Zhang, X. Wang, W. Yang, and J. Jiang, "URetinex-Net: Retinex-based deep unfolding network for low-light image enhancement," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 5901–5910.
- [15] X. Xu, R. Wang, C.-W. Fu, and J. Jia, "SNR-aware low-light image enhancement," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 17714–17724.
- [16] Y. Cai, H. Bian, J. Lin, H. Wang, R. Timofte, and Y. Zhang, "Retinexformer: One-stage Retinex-based transformer for low-light image enhancement," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023, pp. 12504–12513.
- [17] D. Zhou, Z. Yang, and Y. Yang, "Pyramid diffusion models for low-light image enhancement," arXiv preprint arXiv:2305.10028, 2023.
- [18] R. Liu et al., "RUAS: Retinex-inspired unrolling with learnable activation functions for low-light image enhancement," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 11, pp. 3965–3979, 2021.
- [19] Y. Wu, C. Pan, G. Wang, Y. Yang, J. Wei, C. Li, and H. T. Shen, "Learning semantic-aware knowledge guidance for low-light image enhancement," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 1662–1671.
- [20] K. He, J. Sun, and X. Tang, "Guided image filtering," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1397–1409, 2012.
- [21] G. Buchsbaum, "A spatial processor model for object colour perception," Journal of the Franklin Institute, vol. 310, no. 1, pp. 1–26, 1980.
- [22] C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color images," in Sixth International Conference on Computer Vision (IEEE Cat. No. 98CH36271). IEEE, 1998, pp. 839–846.
- [23] A. Buades, B. Coll, and J.-M. Morel, "A non-local algorithm for image denoising," in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 2. IEEE, 2005, pp. 60–65.
- [24] J. Snoek, H. Larochelle, and R. P. Adams, "Practical Bayesian optimization of machine learning algorithms," Advances in Neural Information Processing Systems, vol. 25, 2012.
- [25] I. M. Sobol, "Distribution of points in a cube and approximate evaluation of integrals," USSR Computational Mathematics and Mathematical Physics, vol. 7, pp. 86–112, 1967.
- [26] S. Ament, S. Daulton, D. Eriksson, M. Balandat, and E. Bakshy, "Unexpected improvements to expected improvement for Bayesian optimization," Advances in Neural Information Processing Systems, vol. 36, pp. 20577–20612, 2023.
- [27] M. Balandat, B. Karrer, D. Jiang, S. Daulton, B. Letham, A. G. Wilson, and E. Bakshy, "BoTorch: A framework for efficient Monte-Carlo Bayesian optimization," Advances in Neural Information Processing Systems, vol. 33, pp. 21524–21538, 2020.
- [28] J. Gardner, G. Pleiss, K. Q. Weinberger, D. Bindel, and A. G. Wilson, "GPyTorch: Blackbox matrix-matrix Gaussian process inference with GPU acceleration," Advances in Neural Information Processing Systems, vol. 31, 2018.
- [29] Y. Shi, D. Liu, L. Zhang, Y. Tian, X. Xia, and X. Fu, "Zero-IG: Zero-shot illumination-guided joint denoising and adaptive enhancement for low-light images," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 3015–3024.
- [30] L. Ma, T. Ma, R. Liu, X. Fan, and Z. Luo, "Toward fast, flexible, and robust low-light image enhancement," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 5637–5646.
- [31] M. Fan, W. Wang, W. Yang, and J. Liu, "Integrating semantic segmentation and Retinex model for low-light image enhancement," in Proceedings of the 28th ACM International Conference on Multimedia, 2020, pp. 2317–2325.
- [32] S. Zheng and G. Gupta, "Semantic-guided zero-shot learning for low-light image/video enhancement," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2022, pp. 581–590.
- [33] A. Hore and D. Ziou, "Image quality metrics: PSNR vs. SSIM," in 2010 20th International Conference on Pattern Recognition. IEEE, 2010, pp. 2366–2369.
- [34] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.
- [35] A. Mittal, R. Soundararajan, and A. C. Bovik, "Making a "completely blind" image quality analyzer," IEEE Signal Processing Letters, vol. 20, no. 3, pp. 209–212, 2012.