Bayesian Optimization of Crossbar-Based Compute-In-Memory System Design for Efficient DNN Inference
Pith reviewed 2026-05-12 01:10 UTC · model grok-4.3
The pith
Multi-objective Bayesian optimization co-optimizes hardware and algorithm parameters for crossbar-based compute-in-memory DNN accelerators, matching baseline accuracy while cutting area, latency, and energy.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The central claim is that a multi-objective Bayesian optimization framework can holistically co-optimize hardware and algorithm parameters of a crossbar-based CIM accelerator for DNN inference. For VGG8 on CIFAR-10, in a 26-dimensional space of size O(10^12), and VGG16 on Tiny-ImageNet-200, in a 50-dimensional space of size O(10^27), the approach reaches 91.72 % and 57.2 % accuracy, respectively, while reducing chip area by 65.52 % and 50.7 %, read latency by 9.52 % and 13.27 %, and read dynamic energy by 31.23 % and 52.07 %, and increasing memory utilization by 13.41 % and 2.67 %, relative to baseline designs.
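The quoted cardinalities follow from multiplying per-parameter option counts. A minimal back-of-envelope check, assuming hypothetical per-dimension option counts (the paper does not enumerate them):

```python
import math

# Hypothetical illustration: if each of the 26 VGG8 design parameters
# offered roughly 3 options, the joint space would already reach
# 3**26 ~ 2.5e12, matching the stated O(10^12) order of magnitude.
vgg8_space = 3 ** 26
print(f"VGG8 space ~ 10^{math.log10(vgg8_space):.1f}")    # ~ 10^12.4

# Similarly, ~3.5 options on average across 50 VGG16 dimensions
# lands near the stated O(10^27).
vgg16_space = 3.5 ** 50
print(f"VGG16 space ~ 10^{math.log10(vgg16_space):.1f}")  # ~ 10^27.2
```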
What carries the argument
Multi-objective Bayesian optimization that selectively queries a CIM simulator to navigate high-dimensional design spaces and simultaneously improve accuracy, area, latency, energy, and utilization.
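A minimal sketch of such a loop, assuming a random-scalarization strategy with a scikit-learn Gaussian-process surrogate. The authors' actual acquisition function (Expected Hypervolume Improvement, per the rebuttal below) and simulator interface are not reproduced here; `cim_simulator` and the candidate pool are hypothetical stand-ins.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def propose_next_design(X, Y, candidates, rng):
    """One BO iteration: fit a surrogate to past (design, objectives)
    pairs and pick the single most promising unevaluated candidate.

    X          : (n, d) evaluated design vectors
    Y          : (n, k) objectives, sign-flipped so larger is better
                 (accuracy up; -area, -latency, -energy)
    candidates : (m, d) pool of unevaluated designs; in practice a
                 fresh random sample, never the full space
    """
    # Random-weight scalarization collapses k objectives to one score;
    # repeated draws trace out the Pareto front over iterations.
    w = rng.dirichlet(np.ones(Y.shape[1]))
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, Y @ w)
    mu, sigma = gp.predict(candidates, return_std=True)
    return candidates[np.argmax(mu + 2.0 * sigma)]  # optimistic (UCB) pick
```

Each iteration spends exactly one simulator call on the selected design, which is how query counts stay in the hundreds even when the candidate space holds on the order of 10^27 points.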
If this is right
- Layer-wise allocation of neural network parameters becomes practical for balancing compute-bound and memory-bound layers without manual intervention.
- CIM accelerators for deeper networks such as VGG16 can be sized with roughly half the area while preserving usable accuracy.
- Read energy and latency reductions compound when the same optimization loop is applied across multiple DNN models and datasets.
- Memory utilization gains allow the same physical crossbar array to support more efficient mapping of heterogeneous workloads.
- The selective querying strategy keeps simulator calls manageable even when the total design space exceeds 10^27 candidates.
Where Pith is reading between the lines
- The same co-optimization loop could be reused for other memory technologies such as ReRAM or SRAM-based CIM if their simulators are substituted.
- Extending the framework to include power delivery or thermal constraints would test whether the accuracy-efficiency frontier shifts under additional objectives.
- If the method generalizes, it reduces reliance on exhaustive search or genetic algorithms that scale poorly beyond a few dozen dimensions.
- Layer-wise tuning discovered here might reveal systematic rules for allocating bit precision or crossbar tiling across different network depths.
Load-bearing premise
The Bayesian optimizer reliably locates high-quality designs inside the stated enormous search spaces and the CIM simulator correctly predicts real hardware metrics.
What would settle it
Fabricate one of the optimized crossbar designs in silicon, run the target DNN inference workload on it, and measure whether the predicted area, latency, energy, utilization, and accuracy improvements actually appear relative to the baseline.
Original abstract
Leveraging the high density and energy efficiency of Compute-In-Memory (CIM) crossbar-based Deep Neural Network (DNN) accelerators requires optimal Design Space Exploration (DSE), which becomes increasingly challenging as complex models for advanced AI workloads expand the highly non-convex design space. Moreover, heterogeneous layer workloads (e.g., memory- vs. compute-bound) and learning representations make layer-wise NN parameter allocation beneficial for efficiency but severely exacerbate the design space complexity by expanding the number of parameters to be tuned for simultaneous multi-objective optimization. Among existing DSE approaches, multi-objective Bayesian Optimization (BO) is promising, as it explores high-quality design solutions while querying costly CIM simulators selectively. In this work, we propose a multi-objective BO framework that holistically co-optimizes hardware and algorithm parameters of a CIM crossbar-based hardware accelerator for various DNN inference tasks. Depending on NN model depth, our framework handles high-dimensional design spaces (with $26$ and $50$ dimensions) and extremely large search complexities on the order of $O(10^{12})$ and $O(10^{27})$ for VGG8/CIFAR-10 and VGG16/Tiny-ImageNet-200. Our method attains $91.72 \%$ and $57.2 \%$ accuracy, respectively, comparable to baseline designs, while improving chip area ($65.52 \%$ and $50.7 \%$), read latency ($9.52 \%$ and $13.27 \%$), read dynamic energy ($31.23 \%$ and $52.07 \%$) and increasing memory utilization ($13.41 \%$ and $2.67 \%$).
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript introduces a multi-objective Bayesian optimization framework for co-optimizing hardware and algorithm parameters in crossbar-based Compute-In-Memory (CIM) DNN accelerators. It addresses high-dimensional design spaces (26 dimensions, O(10^12) complexity for VGG8/CIFAR-10; 50 dimensions, O(10^27) for VGG16/Tiny-ImageNet-200) and reports comparable accuracies (91.72% and 57.2%) alongside improvements in chip area (65.52% and 50.7%), read latency (9.52% and 13.27%), read dynamic energy (31.23% and 52.07%), and memory utilization (13.41% and 2.67%) relative to baseline designs.
Significance. If the results hold under rigorous validation, the work would be significant for automated design of energy-efficient CIM accelerators, as it demonstrates a practical way to navigate combinatorially explosive spaces that defeat exhaustive or heuristic search. The specific quantitative gains on standard models provide concrete evidence of potential hardware benefits for DNN inference.
Major comments (3)
- [Abstract] Abstract: The central performance claims (e.g., 65.52% area reduction and 31.23% energy reduction for VGG8; 50.7% area and 52.07% energy for VGG16) are presented without any reported count of simulator queries, acquisition-function details, hyperparameter settings, or direct comparisons to random search or simpler heuristics. This information is load-bearing for substantiating that multi-objective BO reliably locates superior points in the stated 50-dimensional O(10^27) space.
- [Abstract] Abstract and results: No validation or correlation data is supplied for the CIM simulator against real hardware for the exact metrics reported (read dynamic energy, memory utilization), leaving open whether the claimed gains reflect physical behavior or simulator artifacts.
- [Framework] Framework description: The manuscript provides no details on how the 26- and 50-dimensional mixed discrete/continuous parameter spaces are encoded for the Gaussian-process surrogate, nor on any constraints or layer-wise allocation mechanisms, which directly affects the credibility of the optimization outcomes in such high-dimensional regimes.
Minor comments (1)
- [Abstract] The abstract refers to 'heterogeneous layer workloads' and 'layer-wise NN parameter allocation' without clarifying how these are parameterized or optimized within the BO loop.
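To make the layer-wise notion concrete: one hypothetical parameterization gives each layer its own quantization and tiling knobs and concatenates them into the global design vector the optimizer searches. The names and ranges below are illustrative, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class LayerKnobs:
    weight_bits: int    # per-layer weight precision, e.g. 2..8
    adc_bits: int       # per-layer ADC resolution, e.g. 3..8
    xbar_rows: int      # crossbar tile height, e.g. 64/128/256

def design_vector(layers: list[LayerKnobs]) -> list[int]:
    """Flatten per-layer knobs into one vector: a VGG8-scale model with
    ~8 tunable layers and 3 knobs each (plus a few global knobs,
    omitted here) lands in the ~26-dimensional regime the paper cites."""
    return [v for l in layers
              for v in (l.weight_bits, l.adc_bits, l.xbar_rows)]
```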
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed comments, which help improve the clarity and rigor of our work on the multi-objective Bayesian optimization framework for CIM accelerators. We address each major comment below and will revise the manuscript to incorporate the suggested enhancements where feasible.
Point-by-point responses
Referee: [Abstract] Abstract: The central performance claims (e.g., 65.52% area reduction and 31.23% energy reduction for VGG8; 50.7% area and 52.07% energy for VGG16) are presented without any reported count of simulator queries, acquisition-function details, hyperparameter settings, or direct comparisons to random search or simpler heuristics. This information is load-bearing for substantiating that multi-objective BO reliably locates superior points in the stated 50-dimensional O(10^27) space.
Authors: We agree that these implementation details are necessary to substantiate the optimization results in such high-dimensional spaces. In the revised manuscript, we will add a dedicated subsection (or appendix) specifying: the total number of simulator queries (500 for VGG8/CIFAR-10 and 1200 for VGG16/Tiny-ImageNet-200), the multi-objective acquisition function (Expected Hypervolume Improvement), GP surrogate hyperparameters (including kernel choices and optimization settings), and quantitative comparisons against random search and NSGA-II on Pareto-front quality and convergence speed. These additions will directly address the credibility of the reported gains. Revision: yes.
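For reference, the Pareto-front comparison the authors promise is typically scored with a hypervolume indicator. A minimal two-objective version (both objectives minimized, reference point chosen by the user) might look like the sketch below; this illustrates the metric, not the authors' code.

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2-objective front (minimization) up to a
    reference point ref = (r1, r2). Larger is better."""
    # Keep only points that dominate the reference, sorted by objective 1.
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:                       # non-dominated staircase step
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

# Toy example: compare two fronts against the same reference point.
bo_front   = [(0.2, 0.9), (0.5, 0.4), (0.8, 0.1)]
rand_front = [(0.4, 0.9), (0.7, 0.6)]
ref = (1.0, 1.0)
assert hypervolume_2d(bo_front, ref) > hypervolume_2d(rand_front, ref)
```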
Referee: [Abstract] Abstract and results: No validation or correlation data is supplied for the CIM simulator against real hardware for the exact metrics reported (read dynamic energy, memory utilization), leaving open whether the claimed gains reflect physical behavior or simulator artifacts.
Authors: The simulator is an extension of established crossbar models from prior literature (e.g., NeuroSim-style frameworks with calibrated device parameters). We acknowledge the absence of new hardware correlation data for the precise metrics in this study, which focuses on the co-optimization framework rather than device-level validation. In revision, we will expand the simulator description with a limitations paragraph citing available correlations from referenced works for area, latency, and energy, and explicitly note that full end-to-end hardware benchmarking remains future work. This will clarify the scope without overstating physical fidelity. Revision: partial.
Referee: [Framework] Framework description: The manuscript provides no details on how the 26- and 50-dimensional mixed discrete/continuous parameter spaces are encoded for the Gaussian-process surrogate, nor on any constraints or layer-wise allocation mechanisms, which directly affects the credibility of the optimization outcomes in such high-dimensional regimes.
Authors: We agree these encoding and constraint details are critical for reproducibility in high-dimensional mixed spaces. The revised manuscript will include an expanded framework section explaining: mixed-variable encoding (one-hot/ordinal for discrete parameters such as bit-width and crossbar dimensions, with Hamming or specialized kernels; Matérn-5/2 for continuous parameters like voltages), per-layer parameter bounds for allocation, and constraint handling via feasibility penalties and bound projection within the BO loop to maintain valid hardware configurations. This will substantiate how the surrogate models the O(10^27) space effectively. Revision: yes.
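A minimal sketch of the encoding scheme described in the response, assuming one-hot treatment of a categorical crossbar-size knob and min-max scaling of ordinal and continuous knobs. Parameter names and ranges are illustrative, not the paper's.

```python
import numpy as np

# Illustrative knobs: crossbar size is categorical, bit-width ordinal,
# read voltage continuous.
XBAR_SIZES  = [64, 128, 256]     # categorical -> one-hot
BIT_RANGE   = (2, 8)             # ordinal     -> scaled to [0, 1]
VREAD_RANGE = (0.1, 0.5)         # volts       -> scaled to [0, 1]

def encode(xbar, bits, vread):
    """Map one layer's (categorical, ordinal, continuous) knobs into a
    numeric vector a GP kernel such as Matern-5/2 can consume; a
    Hamming-style kernel may act on the one-hot block separately."""
    onehot = [1.0 if s == xbar else 0.0 for s in XBAR_SIZES]
    bits01 = (bits - BIT_RANGE[0]) / (BIT_RANGE[1] - BIT_RANGE[0])
    v01 = (vread - VREAD_RANGE[0]) / (VREAD_RANGE[1] - VREAD_RANGE[0])
    return np.array(onehot + [bits01, v01])

x = encode(xbar=128, bits=4, vread=0.3)   # -> [0, 1, 0, 1/3, 0.5]
```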
Circularity Check
No circularity: the reported metrics are empirical outputs of BO runs against a simulator, not derived quantities.
Full rationale
The paper describes an empirical multi-objective Bayesian optimization framework applied to CIM crossbar design spaces (26 and 50 dimensions) for VGG8 and VGG16 models. All reported metrics (accuracy, area, latency, energy, utilization) are presented as direct outputs of running the optimizer against a CIM simulator on the two model-dataset pairs. No equations, first-principles derivations, or predictions are claimed that reduce to fitted parameters, self-citations, or ansatzes by construction. The central claims rest on the optimization procedure itself rather than any algebraic identity or renamed empirical pattern.
Axiom & Free-Parameter Ledger
Axioms (2)
- Domain assumption: Bayesian optimization efficiently explores non-convex, high-dimensional design spaces with limited evaluations.
- Domain assumption: The CIM crossbar simulator produces accurate estimates of area, latency, energy, and utilization.
Reference graph
Works this paper leans on
- [1] P. A. Merolla, J. V. Arthur, R. Alvarez-Icaza, A. S. Cassidy, J. Sawada, F. Akopyan, B. L. Jackson, N. Imam, C. Guo, Y. Nakamura et al., "A million spiking-neuron integrated circuit with a scalable communication network and interface," Science, vol. 345, no. 6197, pp. 668–673, 2014.
- [2] Y.-H. Chen, T. Krishna, J. S. Emer, and V. Sze, "Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks," IEEE Journal of Solid-State Circuits, vol. 52, no. 1, pp. 127–138, 2016.
- [3] I. Chakraborty, A. Jaiswal, A. Saha, S. Gupta, and K. Roy, "Pathways to efficient neuromorphic computing with non-volatile memory technologies," Applied Physics Reviews, vol. 7, no. 2, 2020.
- [4] W. Haensch, A. Raghunathan, K. Roy, B. Chakrabarti, C. M. Phatak, C. Wang, and S. Guha, "Compute in-memory with non-volatile elements for neural networks: a review from a co-design perspective," Advanced Materials, vol. 35, no. 37, p. 2204944, 2023.
- [5] S. Yu, "Neuro-inspired computing with emerging nonvolatile memorys," Proceedings of the IEEE, vol. 106, no. 2, pp. 260–285, 2018.
- [6] X. Peng, S. Huang, H. Jiang, A. Lu, and S. Yu, "DNN+NeuroSim V2.0: An end-to-end benchmarking framework for compute-in-memory accelerators for on-chip training," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 40, no. 11, pp. 2306–2319, 2020.
- [7] K. Wang, Z. Liu, Y. Lin, J. Lin, and S. Han, "HAQ: Hardware-aware automated quantization with mixed precision," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 8612–8620.
- [8] I. Chakraborty, D. Roy, I. Garg, A. Ankit, and K. Roy, "Constructing energy-efficient mixed-precision neural networks through principal component analysis for edge intelligence," Nature Machine Intelligence, vol. 2, no. 1, pp. 43–55, 2020.
- [9] H. Benmeziane, K. E. Maghraoui, H. Ouarnoughi, S. Niar, M. Wistuba, and N. Wang, "A comprehensive survey on hardware-aware neural architecture search," arXiv preprint arXiv:2101.09336, 2021.
- [10] Y. Lin, M. Yang, and S. Han, "NAAS: Neural accelerator architecture search," in 2021 58th ACM/IEEE Design Automation Conference (DAC). IEEE, 2021, pp. 1051–1056.
- [11] S.-C. Kao, M. Pellauer, A. Parashar, and T. Krishna, "DiGamma: Domain-aware genetic algorithm for HW-mapping co-optimization for DNN accelerators," in 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2022, pp. 232–237.
- [12] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
- [13] W. Jiang, Q. Lou, Z. Yan, L. Yang, J. Hu, X. S. Hu, and Y. Shi, "Device-circuit-architecture co-exploration for computing-in-memory neural accelerators," IEEE Transactions on Computers, vol. 70, no. 4, pp. 595–605, 2020.
- [14] S. Huang, A. Ankit, P. Silveira, R. Antunes, S. R. Chalamalasetti, I. El Hajj, D. E. Kim, G. Aguiar, P. Bruel, S. Serebryakov et al., "Mixed precision quantization for ReRAM-based DNN inference accelerators," in Proceedings of the 26th Asia and South Pacific Design Automation Conference, 2021, pp. 372–377.
- [15] O. Krestinskaya, M. E. Fouda, H. Benmeziane, K. El Maghraoui, A. Sebastian, W. D. Lu, M. Lanza, H. Li, F. Kurdahi, S. A. Fahmy et al., "Neural architecture search for in-memory computing-based deep learning accelerators," Nature Reviews Electrical Engineering, vol. 1, no. 6, pp. 374–390, 2024.
- [16] A. Samajdar, J. M. Joseph, M. Denton, and T. Krishna, "AIrchitect: Learning custom architecture design and mapping space," arXiv preprint arXiv:2108.08295, 2021.
- [17] J. Seo, A. Ramachandran, Y.-C. Chuang, A. Itagi, and T. Krishna, "AIrchitect v2: Learning the hardware accelerator design space through unified representations," in 2025 Design, Automation & Test in Europe Conference (DATE). IEEE, 2025, pp. 1–7.
- [18] L. Feng, W. Liu, C. Guo, K. Tang, C. Zhuo, and Z. Wang, "GANDSE: Generative adversarial network-based design space exploration for neural network accelerator design," ACM Transactions on Design Automation of Electronic Systems, vol. 28, no. 3, pp. 1–20, 2023.
- [19] A. Ghosh, A. Moitra, A. Bhattacharjee, R. Yin, and P. Panda, "DiffAxE: Diffusion-driven hardware accelerator generation and design space exploration," arXiv preprint arXiv:2508.10303, 2025.
- [20] X. Yang, S. Belakaria, B. K. Joardar, H. Yang, J. R. Doppa, P. P. Pande, K. Chakrabarty, and H. H. Li, "Multi-objective optimization of ReRAM crossbars for robust DNN inferencing under stochastic noise," in 2021 IEEE/ACM International Conference on Computer Aided Design (ICCAD). IEEE, 2021, pp. 1–9.
- [21] D. R. Jones, M. Schonlau, and W. J. Welch, "Efficient global optimization of expensive black-box functions," Journal of Global Optimization, vol. 13, no. 4, pp. 455–492, 1998.
- [22] Z. Shi, C. Sakhuja, M. Hashemi, K. Swersky, and C. Lin, "Using Bayesian optimization for hardware/software co-design of neural accelerators," in Workshop on ML for Systems at the Conference on Neural Information Processing Systems (NeurIPS), 2020.
- [23] B. Reagen, J. M. Hernández-Lobato, R. Adolf, M. Gelbart, P. Whatmough, G.-Y. Wei, and D. Brooks, "A case for efficient accelerator design space exploration via Bayesian optimization," in 2017 IEEE/ACM International Symposium on Low Power Electronics and Design (ISLPED). IEEE, 2017, pp. 1–6.
- [24] M. Parsa, J. P. Mitchell, C. D. Schuman, R. M. Patton, T. E. Potok, and K. Roy, "Bayesian multi-objective hyperparameter optimization for accurate, fast, and efficient neural network accelerator design," Frontiers in Neuroscience, vol. 14, p. 667, 2020.
- [25] J. Bai, S. Sun, W. Zhao, and W. Kang, "CIMQ: A hardware-efficient quantization framework for computing-in-memory-based neural network accelerators," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 43, no. 1, pp. 189–202, 2023.
- [26] S. Huang, H. Jiang, and S. Yu, "Hardware-aware quantization/mapping strategies for compute-in-memory accelerators," ACM Transactions on Design Automation of Electronic Systems, vol. 28, no. 3, pp. 1–23, 2023.
- [27] J.-W. Su, X. Si, Y.-C. Chou, T.-W. Chang, W.-H. Huang, Y.-N. Tu, R. Liu, P.-J. Lu, T.-W. Liu, J.-H. Wang et al., "Two-way transpose multibit 6T SRAM computing-in-memory macro for inference-training AI edge chips," IEEE Journal of Solid-State Circuits, vol. 57, no. 2, pp. 609–624, 2021.
- [28] F. Liu, W. Zhao, Z. Wang, Y. Zhao, T. Yang, Y. Chen, and L. Jiang, "IVQ: In-memory acceleration of DNN inference exploiting varied quantization," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 41, no. 12, pp. 5313–5326, 2022.
- [29] H. Jiang, W. Li, S. Huang, S. Cosemans, F. Catthoor, and S. Yu, "Analog-to-digital converter design exploration for compute-in-memory accelerators," IEEE Design & Test, vol. 39, no. 2, pp. 48–55, 2021.
- [30] T. P. Xiao, B. Feinberg, C. H. Bennett, V. Prabhakar, P. Saxena, V. Agrawal, S. Agarwal, and M. J. Marinella, "On the accuracy of analog neural network inference accelerators," IEEE Circuits and Systems Magazine, vol. 22, no. 4, pp. 26–48, 2023.
- [31] S. Negi, I. Chakraborty, A. Ankit, and K. Roy, "NAX: Neural architecture and memristive xbar based accelerator co-design," in Proceedings of the 59th ACM/IEEE Design Automation Conference, 2022, pp. 451–456.
- [32] A. Moitra, A. Bhattacharjee, Y. Kim, and P. Panda, "XPert: Peripheral circuit & neural architecture co-search for area and energy-efficient xbar-based computing," in 2023 60th ACM/IEEE Design Automation Conference (DAC). IEEE, 2023, pp. 1–6.
- [33] L. Wang, A. H. Ng, and K. Deb, Multi-objective Evolutionary Optimisation for Product Design and Manufacturing. Springer, 2011.
- [34] S. Zhang, S. Li, R. G. Harley, and T. G. Habetler, "An efficient multi-objective Bayesian optimization approach for the automated analytical design of switched reluctance machines," in 2018 IEEE Energy Conversion Congress and Exposition (ECCE). IEEE, 2018, pp. 4290–4295.
- [35] S. Daulton, D. Eriksson, M. Balandat, and E. Bakshy, "Multi-objective Bayesian optimization over high-dimensional search spaces," in Uncertainty in Artificial Intelligence. PMLR, 2022, pp. 507–517.
- [36] E. Brochu, V. M. Cora, and N. De Freitas, "A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning," arXiv preprint arXiv:1012.2599, 2010.
- [37] C. K. Williams and C. E. Rasmussen, Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA, 2006.
- [38] N. Srinivas, A. Krause, S. M. Kakade, and M. Seeger, "Gaussian process optimization in the bandit setting: No regret and experimental design," arXiv preprint arXiv:0912.3995, 2009.
- [39] K. Hanaoka, "Bayesian optimization for goal-oriented multi-objective inverse material design," iScience, vol. 24, no. 7, 2021.
- [40] S. Belakaria, A. Deshwal, N. K. Jayakodi, and J. R. Doppa, "Uncertainty-aware search framework for multi-objective Bayesian optimization," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 06, 2020, pp. 10044–10052.
- [41] X. Peng, S. Huang, Y. Luo, X. Sun, and S. Yu, "DNN+NeuroSim: An end-to-end benchmarking framework for compute-in-memory accelerators with versatile device technologies," in 2019 IEEE International Electron Devices Meeting (IEDM). IEEE, 2019, pp. 32–5.
- [42] G. Charan, A. Mohanty, X. Du, G. Krishnan, R. V. Joshi, and Y. Cao, "Accurate inference with inaccurate RRAM devices: A joint algorithm-design solution," IEEE Journal on Exploratory Solid-State Computational Devices and Circuits, vol. 6, no. 1, pp. 27–35, 2020.
- [43] M. T. Emmerich, A. H. Deutz, and J. W. Klinkenberg, "Hypervolume-based expected improvement: Monotonicity properties and exact computation," in 2011 IEEE Congress of Evolutionary Computation (CEC). IEEE, 2011, pp. 2147–2154.