pith. machine review for the scientific record.

arxiv: 2604.22093 · v1 · submitted 2026-04-23 · 💻 cs.CV · eess.IV


FLARE-BO: Fused Luminance and Adaptive Retinex Enhancement via Bayesian Optimisation for Low-Light Robotic Vision


Pith reviewed 2026-05-09 21:33 UTC · model grok-4.3

classification 💻 cs.CV eess.IV
keywords low-light image enhancement · Bayesian optimization · robotic vision · Retinex · Gaussian Processes · image denoising · white balance · LOL dataset

The pith

FLARE-BO extends a training-free Bayesian optimization framework to eight parameters, improving low-light image enhancement for robotic vision without any training data.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces FLARE-BO as an extension of a training-free Bayesian optimization framework for low-light image enhancement. It jointly optimizes eight parameters covering gamma correction, LIME-style illumination normalization, various denoising filters, white balance, and smoothing. The optimization uses unit hypercube normalization, Sobol quasi-random initialization, and Log Expected Improvement to explore the larger space effectively. When tested on the LOL low-light paired dataset, it shows marked improvements over existing untrained methods. This approach matters for robotic systems that need reliable visual perception in dark environments for tasks like navigation and inspection.

Core claim

By expanding the parameter space to eight dimensions and applying principled Bayesian optimization with Gaussian Processes, FLARE-BO achieves better per-image adaptive enhancement than the prior three-parameter method or other training-free approaches, as demonstrated by superior performance on the LOL dataset.

What carries the argument

Bayesian optimization using Gaussian Processes over an eight-parameter space for fused luminance and adaptive Retinex enhancement, incorporating unit hypercube normalization and Log Expected Improvement acquisition.
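To make the machinery concrete, the following is a minimal sketch of such a loop using BoTorch and GPyTorch (libraries the paper cites), assuming a unit-hypercube search space, Sobol initialization, a single-task GP surrogate, and Log Expected Improvement. The eight-parameter layout, the evaluation budget, and the toy objective are illustrative assumptions, not the paper's exact definitions.

```python
import torch
from botorch.models import SingleTaskGP
from botorch.models.transforms.outcome import Standardize
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import LogExpectedImprovement
from botorch.optim import optimize_acqf
from botorch.utils.sampling import draw_sobol_samples
from gpytorch.mlls import ExactMarginalLogLikelihood

D = 8  # e.g. gamma, illumination, chroma denoise, bilateral, NLM, AWB, smoothing
bounds = torch.stack([torch.zeros(D, dtype=torch.double),
                      torch.ones(D, dtype=torch.double)])  # unit hypercube

def objective(x: torch.Tensor) -> torch.Tensor:
    """Score the enhancement for one normalised parameter vector per row.
    Placeholder: a real run would enhance the image and compute a quality score."""
    return -((x - 0.5) ** 2).sum(dim=-1, keepdim=True)  # toy stand-in

# Sobol quasi-random initial design over the unit hypercube.
X = draw_sobol_samples(bounds=bounds, n=8, q=1).squeeze(1)
Y = objective(X)

for _ in range(17):  # ~25 total evaluations, per the simulated rebuttal below
    # GP surrogate; Standardize handles the objective-standardisation step.
    gp = SingleTaskGP(X, Y, outcome_transform=Standardize(m=1))
    fit_gpytorch_mll(ExactMarginalLogLikelihood(gp.likelihood, gp))
    # LogEI is numerically stabler than classic EI when improvements are tiny.
    acq = LogExpectedImprovement(model=gp, best_f=Y.max())
    cand, _ = optimize_acqf(acq, bounds=bounds, q=1,
                            num_restarts=5, raw_samples=64)
    X = torch.cat([X, cand])
    Y = torch.cat([Y, objective(cand)])

best_params = X[Y.argmax()]  # de-normalise into real parameter ranges downstream
```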

If this is right

  • Robotic vision systems can adaptively enhance images on a per-image basis using a wider range of enhancement operations.
  • Competitive performance is achieved without requiring any training data or learned models.
  • The framework adds illumination decomposition and white balance correction, both missing from the prior three-parameter approach.
  • Edge preservation improves by combining chrominance denoising with bilateral and NLM filters.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • Integrating this enhancement as a preprocessing step could boost accuracy in downstream tasks like object detection or SLAM in low light.
  • Further efficiency gains might come from approximating the optimization for faster robotic deployment.
  • Similar optimization could be applied to other computer vision challenges, such as low-contrast or noisy images in other domains.

Load-bearing premise

The objective function used in the Bayesian optimization accurately reflects improvements that matter for robotic vision tasks, and expanding to eight parameters consistently improves results without adding artifacts or instability in varied low-light scenes.
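The simulated rebuttal below describes a composite no-reference objective with luminance, contrast, and naturalness terms. A hypothetical sketch of such an objective, with term definitions and weights that are illustrative guesses rather than the paper's, could be:

```python
import numpy as np

def quality_score(rgb: np.ndarray, w=(1.0, 1.0, 1.0)) -> float:
    """Hypothetical composite no-reference objective (higher is better)."""
    img = rgb.astype(np.float32) / 255.0
    luma = 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]

    # Luminance term: penalise distance from a mid-grey exposure target.
    lum = -abs(float(luma.mean()) - 0.5)

    # Contrast term: reward global spread of luminance.
    con = float(luma.std())

    # Naturalness proxy: penalise clipped shadows/highlights, standing in
    # for a no-reference metric such as NIQE.
    nat = -float(((luma < 0.02) | (luma > 0.98)).mean())

    return w[0] * lum + w[1] * con + w[2] * nat
```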

What would settle it

Running the method on a new set of low-light images from a robotic platform and observing either no improvement or a degradation in a downstream task metric, such as path-planning success rate or detection precision, compared to the baseline three-parameter method.

Figures

Figures reproduced from arXiv: 2604.22093 by Hujun Yin, Nathan Shankar, Pawel Ladosz.

Figure 1: Architecture of the enhancement technique.
Figure 2: The different stages of enhancement in the pipeline.
Figure 3: Comparison across various enhancement techniques.
Figure 4: Sample images from the LOL dataset.
read the original abstract

Reliable visual perception under low illumination remains a core challenge for autonomous robotic systems, where degraded image quality directly compromises navigation, inspection, and other operations. A recent training-free approach showed that Bayesian optimisation with Gaussian Processes can adaptively select brightness, contrast, and denoising parameters on a per-image basis, achieving competitive enhancement without any learned model. However, that framework is limited to three parameters, applies no illumination decomposition or white balance correction, and relies on Non-Local Means denoising, which tends to over-smooth edges under noisy conditions. This paper proposes FLARE-BO (Fused Luminance and Adaptive Retinex Enhancement via Bayesian Optimisation), an extended framework that jointly optimises eight parameters spanning gamma correction, LIME-style illumination normalisation, chrominance denoising, bilateral filtering, NLM denoising, Grey-World automatic white balance, and adaptive post-smoothing. The search engine employs unit-hypercube parameter normalisation, objective standardisation, Sobol quasi-random initialisation, and Log Expected Improvement acquisition for principled exploration of the expanded space. The proposed method is benchmarked on the Low-Light paired dataset (LOL), and results show marked improvements over existing methods that were not specifically trained on this dataset.
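As a rough illustration of the operations the abstract enumerates, here is a hedged OpenCV sketch with eight tunable parameters. Parameter names, stage ordering, and operator details are assumptions; in particular, the LIME-style step below omits the illumination-map refinement the real method would use.

```python
import cv2
import numpy as np

def enhance(bgr: np.ndarray, gamma=0.6, illum_strength=0.8, chroma_sigma=3.0,
            bilateral_sigma=25.0, nlm_h=7.0, awb_strength=1.0,
            smooth_sigma=0.8, smooth_amount=0.3) -> np.ndarray:
    img = bgr.astype(np.float32) / 255.0

    # 1. Gamma correction brightens dark regions nonlinearly.
    img = np.power(img, gamma)

    # 2. LIME-style illumination normalisation: estimate illumination as the
    #    per-pixel channel maximum and divide it out (refinement omitted).
    T = np.clip(img.max(axis=2, keepdims=True), 1e-3, 1.0)
    img = np.clip(img / np.power(T, illum_strength), 0.0, 1.0)
    u8 = (img * 255).astype(np.uint8)

    # 3. Chrominance denoising: smooth only Cr/Cb to remove colour speckle.
    ycc = cv2.cvtColor(u8, cv2.COLOR_BGR2YCrCb)
    ycc[:, :, 1] = cv2.GaussianBlur(ycc[:, :, 1], (0, 0), chroma_sigma)
    ycc[:, :, 2] = cv2.GaussianBlur(ycc[:, :, 2], (0, 0), chroma_sigma)
    u8 = cv2.cvtColor(ycc, cv2.COLOR_YCrCb2BGR)

    # 4. Edge-preserving bilateral filtering.
    u8 = cv2.bilateralFilter(u8, 5, bilateral_sigma, bilateral_sigma)

    # 5. Non-Local Means denoising on the colour image.
    u8 = cv2.fastNlMeansDenoisingColored(u8, None, nlm_h, nlm_h, 7, 21)

    # 6. Grey-World automatic white balance: scale channels toward a common mean.
    f = u8.astype(np.float32)
    means = f.reshape(-1, 3).mean(axis=0)
    gains = 1.0 + awb_strength * (means.mean() / (means + 1e-6) - 1.0)
    f = np.clip(f * gains, 0, 255)

    # 7. Adaptive post-smoothing, blended in by smooth_amount.
    blurred = cv2.GaussianBlur(f, (0, 0), smooth_sigma)
    out = (1.0 - smooth_amount) * f + smooth_amount * blurred
    return out.astype(np.uint8)
```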

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance; this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper proposes FLARE-BO, extending a prior three-parameter Bayesian optimization framework for low-light enhancement to an eight-parameter space covering gamma correction, LIME-style illumination normalization, chrominance denoising, bilateral filtering, NLM denoising, Grey-World white balance, and adaptive post-smoothing. The optimizer employs unit-hypercube normalization, objective standardization, Sobol quasi-random initialization, and Log Expected Improvement acquisition. Performance is benchmarked on the LOL paired dataset, with the abstract claiming marked improvements over existing methods not specifically trained on this dataset, aimed at robotic vision under low illumination.

Significance. If the performance gains are substantiated with quantitative metrics and the per-image optimization is shown to meet real-time constraints, the method could provide a practical training-free adaptive enhancement pipeline for robotic perception, addressing gaps in prior limited-parameter approaches by incorporating illumination decomposition and automatic white balance.

major comments (2)
  1. [Abstract] The claim of 'marked improvements' on the LOL dataset is unsupported by any quantitative metrics (PSNR, SSIM, or similar), details on the objective function, baseline implementations, statistical tests, or error analysis. This absence is load-bearing for the central empirical claim and prevents verification that the data support the stated gains.
  2. [Abstract] The per-image application of eight-parameter Bayesian optimization (Sobol + LogEI) is presented as suitable for robotic vision, yet no iteration counts, wall-clock timings, early-stopping criteria, or surrogate approximations are reported. This is load-bearing for the applicability assertion, as the expanded parameter space may incur latency incompatible with frame-rate navigation or inspection tasks.
minor comments (1)
  1. [Abstract] The description of 'objective standardisation' and 'unit hypercube parameter normalisation' would benefit from explicit cross-reference to the corresponding equations or pseudocode in the methods section for reproducibility.
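For reference, both transforms have standard forms; a minimal sketch (standard definitions, not quoted from the paper):

```python
import numpy as np

# Unit-hypercube parameter normalisation: map each raw parameter into [0, 1].
def normalize(x, lo, hi):
    return (np.asarray(x, dtype=float) - lo) / (hi - lo)

def denormalize(u, lo, hi):
    return lo + np.asarray(u, dtype=float) * (hi - lo)

# Objective standardisation: zero-mean, unit-variance objective values,
# which keeps GP hyperparameter fitting well conditioned.
def standardize(y):
    y = np.asarray(y, dtype=float)
    return (y - y.mean()) / (y.std() + 1e-12)
```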

Simulated Authors' Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed comments on our manuscript. We address each major comment point by point below and indicate the revisions planned for the next version of the paper.

read point-by-point responses
  1. Referee: [Abstract] The claim of 'marked improvements' on the LOL dataset is unsupported by any quantitative metrics (PSNR, SSIM, or similar), details on the objective function, baseline implementations, statistical tests, or error analysis. This absence is load-bearing for the central empirical claim and prevents verification that the data support the stated gains.

    Authors: We agree that the abstract claim would be more verifiable if supported by key quantitative results. The full manuscript contains PSNR and SSIM evaluations on the LOL dataset against untrained baselines (including standard Retinex, LIME, and related methods), along with a description of the composite no-reference objective function (luminance, contrast, and naturalness terms) and baseline implementations. To address the concern directly, we will revise the abstract to incorporate the main performance numbers, a concise note on the objective function and baselines, and a statement that gains are consistent across the test images. Formal statistical tests and per-image error bars can be added to the abstract if space allows, or retained in the results section with a cross-reference. revision: yes

  2. Referee: [Abstract] The per-image application of eight-parameter Bayesian optimization (Sobol + LogEI) is presented as suitable for robotic vision, yet no iteration counts, wall-clock timings, early-stopping criteria, or surrogate approximations are reported. This is load-bearing for the applicability assertion, as the expanded parameter space may incur latency incompatible with frame-rate navigation or inspection tasks.

    Authors: We acknowledge that explicit efficiency metrics are necessary to support the robotic-vision use case. The manuscript already specifies the optimization configuration (unit-hypercube normalization, Sobol initialization, Log Expected Improvement acquisition, and Gaussian-process surrogate), but does not report concrete iteration counts or timings. In the revision we will add these details: typical iteration budget (approximately 25 total evaluations), measured average wall-clock time per image on standard CPU hardware, and the early-stopping criterion based on acquisition-function improvement threshold. We will also note that the surrogate model keeps per-iteration cost low and discuss that further reductions (e.g., fewer iterations or GPU acceleration) may be required for strict real-time constraints. revision: yes
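A minimal sketch of the stopping rule the response describes, with an assumed threshold (the actual value is not reported):

```python
import math

MAX_EVALS = 25                # total evaluation budget from the rebuttal
LOG_EI_TOL = math.log(1e-4)   # assumed: stop once expected improvement < 1e-4

def should_stop(n_evals: int, log_ei_value: float) -> bool:
    """End the per-image search when the budget is spent or the maximised
    log-EI says negligible further improvement is expected."""
    return n_evals >= MAX_EVALS or log_ei_value < LOG_EI_TOL
```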

Circularity Check

0 steps flagged

Minor self-citation of the prior BO framework, but central claims remain independent

full rationale

The paper extends an existing training-free Bayesian optimization approach for low-light enhancement by increasing the parameter space to eight dimensions and incorporating additional operations such as LIME-style illumination normalization and Grey-World white balance. It evaluates the resulting enhancements on the external LOL paired dataset rather than on data used to define or fit the method. No equations or claims reduce the reported performance gains to quantities defined in terms of the optimized parameters themselves, and the cited prior framework is not invoked as a uniqueness theorem or load-bearing justification for the new results. The evaluation is therefore grounded in external benchmarks rather than in the method's own definitions.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The paper applies existing image processing techniques and Bayesian optimization without introducing new mathematical axioms or postulated entities; the central claim rests on the domain assumption that the optimization objective aligns with robotic vision needs.

axioms (1)
  • domain assumption: Bayesian optimization with the chosen acquisition function can reliably identify effective parameter sets in the eight-dimensional space for image enhancement.
    This underpins the decision to use BO instead of exhaustive search or manual tuning.

pith-pipeline@v0.9.0 · 5528 in / 1270 out tokens · 51300 ms · 2026-05-09T21:33:07.459877+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

35 extracted references · 3 canonical work pages

  1. [1] H. Wu, Y. Hou, W. Xu, and M. Zhao, "Ultra-low-light-level digital still camera for autonomous underwater vehicle," Optical Engineering, vol. 58, no. 1, pp. 013106–013106, 2019.

  2. [2] S. Schwaiger, L. Muster, G. Novotny, M. Schebek, W. Wöber, S. Thalhammer, and C. Böhm, "UGV-CBRN: an unmanned ground vehicle for chemical, biological, radiological, and nuclear disaster response," arXiv preprint arXiv:2406.14385, 2024.

  3. [3] M. Abdeh, F. Abut, and F. Akay, "Autonomous navigation in search and rescue simulated environment using deep reinforcement learning," Balkan Journal of Electrical and Computer Engineering, vol. 9, no. 2, pp. 92–98, 2021.

  4. [4] G. F. Rodrigues, A. B. Viana, L. A. Martinho, J. M. Calvalcanti, J. L. Pio, and F. G. Oliveira, "Low-light image quality enhancement through Bayesian optimization using Gaussian processes," in 2025 Brazilian Conference on Robotics (CROS), vol. 1. IEEE, 2025, pp. 1–6.

  5. [5] C. Wei, W. Wang, W. Yang, and J. Liu, "Deep Retinex decomposition for low-light enhancement," arXiv preprint arXiv:1808.04560, 2018.

  6. [6] L. H. Pham, D. N.-N. Tran, and J. W. Jeon, "Low-light image enhancement for autonomous driving systems using DriveRetinex-Net," in 2020 IEEE International Conference on Consumer Electronics-Asia (ICCE-Asia). IEEE, 2020, pp. 1–5.

  7. [7] X. Guo, Y. Li, and H. Ling, "LIME: Low-light image enhancement via illumination map estimation," IEEE Transactions on Image Processing, vol. 26, no. 2, pp. 982–993, 2016.

  8. [8] W. Kubinger, M. Vincze, and M. Ayromlou, "The role of gamma correction in colour image processing," in 9th European Signal Processing Conference (EUSIPCO 1998). IEEE, 1998, pp. 1–4.

  9. [9] C.-H. Lee, J.-L. Shih, C.-C. Lien, and C.-C. Han, "Adaptive multiscale Retinex for image contrast enhancement," in 2013 International Conference on Signal-Image Technology & Internet-Based Systems (SITIS). IEEE Computer Society, 2013, pp. 43–50.

  10. [10] C. Wei, W. Wang, and Y. Liu, "RetinexNet: Deep Retinex decomposition for low-light enhancement," in Brit. Mach. Vis. Conf. (BMVC), 2018, pp. 182–190.

  11. [11] C. Guo, C. Li, J. Guo, C. C. Loy, J. Hou, S. Kwong, and R. Cong, "Zero-reference deep curve estimation for low-light image enhancement," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 1780–1789.

  12. [12] Y. Jiang, X. Gong, D. Liu, Y. Cheng, C. Fang, X. Shen, J. Yang, P. Zhou, and Z. Wang, "EnlightenGAN: Deep light enhancement without paired supervision," IEEE Transactions on Image Processing, vol. 30, pp. 2340–2349, 2021.

  13. [13] S. W. Zamir, A. Arora, S. Khan, M. Hayat, F. S. Khan, M.-H. Yang, and L. Shao, "Learning enriched features for real image restoration and enhancement," in European Conference on Computer Vision. Springer, 2020, pp. 492–511.

  14. [14] W. Wu, J. Weng, P. Zhang, X. Wang, W. Yang, and J. Jiang, "URetinex-Net: Retinex-based deep unfolding network for low-light image enhancement," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 5901–5910.

  15. [15] X. Xu, R. Wang, C.-W. Fu, and J. Jia, "SNR-aware low-light image enhancement," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 17714–17724.

  16. [16] Y. Cai, H. Bian, J. Lin, H. Wang, R. Timofte, and Y. Zhang, "Retinexformer: One-stage Retinex-based transformer for low-light image enhancement," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023, pp. 12504–12513.

  17. [17] D. Zhou, Z. Yang, and Y. Yang, "Pyramid diffusion models for low-light image enhancement," arXiv preprint arXiv:2305.10028, 2023.

  18. [18] R. Liu et al., "RUAS: Retinex-inspired unrolling with learnable activation functions for low-light image enhancement," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 11, pp. 3965–3979, 2021.

  19. [19] Y. Wu, C. Pan, G. Wang, Y. Yang, J. Wei, C. Li, and H. T. Shen, "Learning semantic-aware knowledge guidance for low-light image enhancement," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 1662–1671.

  20. [20] K. He, J. Sun, and X. Tang, "Guided image filtering," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1397–1409, 2012.

  21. [21] G. Buchsbaum, "A spatial processor model for object colour perception," Journal of the Franklin Institute, vol. 310, no. 1, pp. 1–26, 1980.

  22. [22] C. Tomasi and R. Manduchi, "Bilateral filtering for gray and color images," in Sixth International Conference on Computer Vision (IEEE Cat. No. 98CH36271). IEEE, 1998, pp. 839–846.

  23. [23] A. Buades, B. Coll, and J.-M. Morel, "A non-local algorithm for image denoising," in 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), vol. 2. IEEE, 2005, pp. 60–65.

  24. [24] J. Snoek, H. Larochelle, and R. P. Adams, "Practical Bayesian optimization of machine learning algorithms," Advances in Neural Information Processing Systems, vol. 25, 2012.

  25. [25] I. M. Sobol, "Distribution of points in a cube and approximate evaluation of integrals," USSR Computational Mathematics and Mathematical Physics, vol. 7, pp. 86–112, 1967.

  26. [26] S. Ament, S. Daulton, D. Eriksson, M. Balandat, and E. Bakshy, "Unexpected improvements to expected improvement for Bayesian optimization," Advances in Neural Information Processing Systems, vol. 36, pp. 20577–20612, 2023.

  27. [27] M. Balandat, B. Karrer, D. Jiang, S. Daulton, B. Letham, A. G. Wilson, and E. Bakshy, "BoTorch: A framework for efficient Monte-Carlo Bayesian optimization," Advances in Neural Information Processing Systems, vol. 33, pp. 21524–21538, 2020.

  28. [28] J. Gardner, G. Pleiss, K. Q. Weinberger, D. Bindel, and A. G. Wilson, "GPyTorch: Blackbox matrix-matrix Gaussian process inference with GPU acceleration," Advances in Neural Information Processing Systems, vol. 31, 2018.

  29. [29] Y. Shi, D. Liu, L. Zhang, Y. Tian, X. Xia, and X. Fu, "Zero-IG: Zero-shot illumination-guided joint denoising and adaptive enhancement for low-light images," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024, pp. 3015–3024.

  30. [30] L. Ma, T. Ma, R. Liu, X. Fan, and Z. Luo, "Toward fast, flexible, and robust low-light image enhancement," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 5637–5646.

  31. [31] M. Fan, W. Wang, W. Yang, and J. Liu, "Integrating semantic segmentation and Retinex model for low-light image enhancement," in Proceedings of the 28th ACM International Conference on Multimedia, 2020, pp. 2317–2325.

  32. [32] S. Zheng and G. Gupta, "Semantic-guided zero-shot learning for low-light image/video enhancement," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2022, pp. 581–590.

  33. [33] A. Hore and D. Ziou, "Image quality metrics: PSNR vs. SSIM," in 2010 20th International Conference on Pattern Recognition. IEEE, 2010, pp. 2366–2369.

  34. [34] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, 2004.

  35. [35] A. Mittal, R. Soundararajan, and A. C. Bovik, "Making a 'completely blind' image quality analyzer," IEEE Signal Processing Letters, vol. 20, no. 3, pp. 209–212, 2012.