Recognition: 2 theorem links
CI-ICM: Channel Importance-driven Learned Image Coding for Machines
Pith reviewed 2026-05-10 19:35 UTC · model grok-4.3
The pith
A learned image codec for machines scores feature channel importance to allocate bits preferentially and raise task accuracy at fixed bitrates.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The authors propose Channel Importance-driven learned Image Coding for Machines (CI-ICM). A Channel Importance Generation module produces and ranks channel importance scores via a channel order loss. These scores feed a Feature Channel Grouping and Scaling module that non-uniformly groups channels and adjusts their dynamic ranges, plus a Channel Importance-based Context module that allocates bits to preserve fidelity in critical channels. A Task-Specific Channel Adaptation module further enhances features for multiple machine tasks. On COCO2017 the method delivers BD-mAP@50:95 gains of 16.25% in object detection and 13.72% in instance segmentation over the baseline codec.
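The summary names a channel order loss but does not give its form. A minimal sketch, assuming a pairwise hinge that penalizes any channel whose score exceeds its predecessor's (the function name and margin parameter are illustrative, not from the paper):

```python
import numpy as np

def channel_order_loss(scores, margin=0.0):
    """Hinge penalty that is zero when importance scores are in
    non-increasing channel order; each adjacent inversion contributes
    the size of its violation."""
    diffs = scores[1:] - scores[:-1] + margin  # > 0 where order is violated
    return float(np.maximum(diffs, 0.0).sum())
```

Already-ordered scores such as [3, 2, 1] incur zero loss, so training against this term pushes the CIG outputs toward the descending ranking the paper describes.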
What carries the argument
The Channel Importance Generation (CIG) module that quantifies and ranks feature-channel importance for machine tasks, enabling the Feature Channel Grouping and Scaling (FCGS) and Channel Importance-based Context (CI-CTX) modules to perform non-uniform bitrate allocation.
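The exact non-uniform split used by FCGS is not given in this summary. As a hedged illustration, the uneven 16/16/32/64/192 channel grouping popularized by ELiC (reference [36] in the paper) can be applied to an importance-sorted channel order, with the smallest groups holding the most important channels:

```python
def uneven_group_slices(order, sizes=(16, 16, 32, 64, 192)):
    """Partition an importance-sorted list of channel indices into uneven
    groups. Small early groups hold the most important channels, letting
    the context model allocate bits to them at finer granularity."""
    groups, start = [], 0
    for size in sizes:
        groups.append(order[start:start + size])
        start += size
    return groups
```

With 320 latent channels this yields five groups of 16, 16, 32, 64, and 192 channels; the specific sizes here are an assumption borrowed from ELiC, not the paper's.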
If this is right
- Machine vision tasks obtain higher mean average precision at the same bitrate constraint.
- Bitrate is allocated non-uniformly to preserve higher fidelity in channels ranked as task-critical.
- A single codec supports multiple downstream tasks through the task-specific adaptation module.
- Ablation studies confirm that each of the four proposed modules contributes to the measured gains.
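The headline 16.25% and 13.72% figures are Bjøntegaard-delta style averages of the mAP gap between two rate-accuracy curves. A sketch of the standard computation, assuming the usual cubic fit in log-bitrate (the paper's exact anchor points and fitting protocol are not stated in this summary):

```python
import numpy as np

def bd_map(rates_ref, maps_ref, rates_test, maps_test):
    """Average vertical gap between two rate-accuracy curves over their
    overlapping log-bitrate range, via cubic fits in log-rate."""
    lr_ref, lr_test = np.log(rates_ref), np.log(rates_test)
    p_ref = np.polyfit(lr_ref, maps_ref, 3)
    p_test = np.polyfit(lr_test, maps_test, 3)
    lo = max(lr_ref.min(), lr_test.min())
    hi = min(lr_ref.max(), lr_test.max())
    # integrate each fitted curve, then average the difference
    area_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    area_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    return (area_test - area_ref) / (hi - lo)
```

This version returns the gain in absolute mAP points; whether the paper reports absolute points or relative percentages cannot be determined from the summary alone.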
Where Pith is reading between the lines
- The same importance-driven grouping could be applied to compress video streams for surveillance or autonomous driving pipelines.
- If the importance scores generalize beyond the tested models, pre-computed channel rankings might enable faster real-time encoding.
- The work suggests compression loops that incorporate feedback from the downstream machine task could outperform purely reconstruction-focused codecs.
Load-bearing premise
The channel importance scores produced by the CIG module accurately reflect task-critical information across varied machine vision models and datasets.
What would settle it
Apply CI-ICM-compressed images to an object-detection or segmentation model whose architecture was not used when training the channel importance scores and check whether the BD-mAP gains disappear or reverse.
Original abstract
Traditional human vision-centric image compression methods are suboptimal for machine vision centric compression due to different visual properties and feature characteristics. To address this problem, we propose a Channel Importance-driven learned Image Coding for Machines (CI-ICM), aiming to maximize the performance of machine vision tasks at a given bitrate constraint. First, we propose a Channel Importance Generation (CIG) module to quantify channel importance in machine vision and develop a channel order loss to rank channels in descending order. Second, to properly allocate bitrate among feature channels, we propose a Feature Channel Grouping and Scaling (FCGS) module that non-uniformly groups the feature channels based on their importance and adjusts the dynamic range of each group. Based on FCGS, we further propose a Channel Importance-based Context (CI-CTX) module to allocate bits among feature groups and to preserve higher fidelity in critical channels. Third, to adapt to multiple machine tasks, we propose a Task-Specific Channel Adaptation (TSCA) module to adaptively enhance features for multiple downstream machine tasks. Experimental results on the COCO2017 dataset show that the proposed CI-ICM achieves BD-mAP@50:95 gains of 16.25% in object detection and 13.72% in instance segmentation over the established baseline codec. Ablation studies validate the effectiveness of each contribution, and computation complexity analysis reveals the practicability of the CI-ICM. This work establishes feature channel optimization for machine vision-centric compression, bridging the gap between image coding and machine perception.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper proposes CI-ICM, a learned image codec for machine vision tasks that introduces a Channel Importance Generation (CIG) module with channel order loss, a Feature Channel Grouping and Scaling (FCGS) module, a Channel Importance-based Context (CI-CTX) module, and a Task-Specific Channel Adaptation (TSCA) module. On COCO2017, it reports BD-mAP@50:95 gains of 16.25% for object detection and 13.72% for instance segmentation over a baseline codec, supported by ablations and complexity analysis.
Significance. If reproducible and generalizable, the work could advance machine-centric compression by demonstrating that non-uniform bit allocation based on learned channel importance improves downstream task performance at fixed rates. The explicit ablation studies and complexity analysis are strengths that support practical claims; however, the absence of baseline specifications and cross-task validation limits the assessed impact.
major comments (3)
- [Abstract] The central performance claim (BD-mAP@50:95 gains of 16.25% in detection and 13.72% in segmentation) is presented without naming the baseline codec, its rate points, or any statistical significance tests, which prevents verification of the reported improvements.
- [Experimental Results] The TSCA module is described as enabling adaptation to multiple tasks, yet only results for the two COCO2017 tasks are shown. Without cross-model or cross-task transfer experiments, it remains unclear whether the CIG-derived importance scores capture generally machine-critical features or merely overfit to the specific detection and segmentation heads used in training.
- [Method] The channel order loss and the subsequent non-uniform grouping and scaling in the CIG and FCGS modules assume that importance scores derived from gradients or activations generalize across varied machine vision models, but no evidence is provided that the added modules avoid introducing distribution shifts that harm unseen downstream models.
minor comments (2)
- [Abstract] The abstract states that 'computation complexity analysis reveals the practicability' but does not quantify the overhead of the CIG/FCGS/CI-CTX/TSCA modules relative to the baseline.
- [Method] Notation for channel importance scores and grouping is introduced without an explicit equation or diagram reference in the provided summary, which could be clarified for reproducibility.
Simulated Author's Rebuttal
We thank the referee for the constructive feedback and positive recognition of the ablation studies and complexity analysis. We address each major comment below with clarifications and proposed revisions to strengthen the manuscript.
Point-by-point responses
- Referee [Abstract]: The central performance claim (BD-mAP@50:95 gains of 16.25% in detection and 13.72% in segmentation) is presented without naming the baseline codec, its rate points, or any statistical significance tests, preventing verification of the reported improvements.
Authors: We agree that the abstract should enable immediate verification. The baseline is the standard learned image codec (without CIG, FCGS, CI-CTX, or TSCA modules) as defined in Section III and used for all rate-distortion curves in Section IV. The BD-mAP@50:95 values are computed over the same set of rate points shown in Figures 3 and 4 (approximately 0.1–0.8 bpp). While statistical significance tests are not standard in learned compression literature, we will add a sentence to the abstract naming the baseline explicitly and referencing the rate points and evaluation protocol used in the experimental section. revision: yes
- Referee [Experimental Results]: The TSCA module is described as enabling adaptation to multiple tasks, yet only results for the two COCO tasks are shown; without cross-model or cross-task transfer experiments, it remains unclear whether the CIG-derived importance scores capture general machine-critical features or merely overfit to the specific detection/segmentation heads used in training.
Authors: The TSCA module is trained jointly with the two COCO tasks (detection and instance segmentation) that employ distinct heads, and the reported gains demonstrate that the same channel importance scores can be adapted to both. We acknowledge that this does not constitute full cross-model transfer (e.g., to classification or different backbones). We will revise the experimental section to explicitly state the scope of the current validation, add a limitations paragraph discussing potential task-specific overfitting, and note that TSCA fine-tuning would be required for new heads. revision: partial
- Referee [Method]: The channel order loss and subsequent non-uniform grouping/scaling assume that importance scores derived from gradients or activations generalize across varied machine vision models, but no evidence is provided that the added modules avoid introducing distribution shifts harmful to unseen downstream models.
Authors: The channel importance is computed from task-specific gradients and activations, and the channel order loss enforces a stable ranking that prioritizes task-critical channels. Ablation results (Table II) show consistent gains when CIG/FCGS are included, indicating that the non-uniform allocation improves rather than harms the tested tasks. We do not claim zero distribution shift for arbitrary unseen models; TSCA is designed precisely to mitigate task-specific shifts via adaptation. We will add a short discussion in Section III clarifying this scope and the role of TSCA for new models. revision: partial
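The rebuttal says importance is computed from task-specific gradients and activations without giving a formula. A Grad-CAM-style sketch, assuming per-channel averaging of the gradient-activation product (the function and its inputs are illustrative, not the paper's actual CIG computation):

```python
import numpy as np

def channel_importance(activations, gradients):
    """Score each feature channel by the mean magnitude of its
    activation-times-gradient map, with gradients taken w.r.t. the
    downstream task loss. Shapes: (C, H, W) inputs -> (C,) scores."""
    return np.abs(activations * gradients).mean(axis=(1, 2))
```

Channels whose activations most strongly influence the task loss receive the highest scores, which is the property the channel order loss would then lock into a ranking.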
- Still outstanding: comprehensive experiments on completely unseen downstream models (different architectures or tasks, without any fine-tuning) to quantify potential distribution shifts introduced by CIG/FCGS.
Circularity Check
No significant circularity in derivation or claims
full rationale
The paper proposes a set of architectural modules (CIG for channel importance, FCGS for grouping/scaling, CI-CTX for context allocation, and TSCA for task adaptation) within a learned image codec and reports empirical BD-mAP gains on COCO2017 for detection and segmentation. No mathematical derivation, first-principles prediction, or fitted parameter is presented as a 'result' that reduces to its own inputs by construction. The central claims are performance measurements from training and evaluation, not self-referential definitions or renamed known patterns. Self-citations, if present, are not load-bearing for any uniqueness theorem or ansatz that would force the outcome.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption Feature channels in learned codecs carry unequal importance for downstream machine vision tasks.
invented entities (4)
- Channel Importance Generation (CIG) module: no independent evidence
- Feature Channel Grouping and Scaling (FCGS) module: no independent evidence
- Channel Importance-based Context (CI-CTX) module: no independent evidence
- Task-Specific Channel Adaptation (TSCA) module: no independent evidence
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean, theorem washburn_uniqueness_aczel (tag: unclear)
Relation between the paper passage and the cited Recognition theorem is unclear. Paper passage: "We propose a CIG module to explicitly analyze the importance of feature channels. Complemented by a novel channel order loss, CI-ICM extracts the ordered feature representation..."
- IndisputableMonolith/Foundation/ArithmeticFromLogic.lean, theorem embed_strictMono_of_one_lt (tag: unclear)
Relation between the paper passage and the cited Recognition theorem is unclear. Paper passage: "the features are divided into n uneven groups... high importance features are set to a small group"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.