Tensor similarity is a symmetry-invariant metric of functional equivalence between tensor-based networks, computed by a recursive algorithm that accounts for cross-layer mechanisms.
ISBN 9781510838819
4 Pith papers cite this work.
Fields: cs.LG
2026 · 4 representative citing papers
Citing papers explorer
- When Are Two Networks the Same? Tensor Similarity for Mechanistic Interpretability
  Tensor similarity is a symmetry-invariant metric of functional equivalence between tensor-based networks, computed by a recursive algorithm that accounts for cross-layer mechanisms.
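To make "symmetry-invariant" concrete: two networks can compute the same function even when their hidden units are relabeled, so a naive elementwise comparison of weights fails. A minimal illustration of the idea (not the paper's recursive tensor-similarity algorithm; `permutation_invariant_similarity` and the brute-force search are illustrative assumptions) compares two weight matrices up to a permutation of their rows:

```python
import itertools
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def permutation_invariant_similarity(W1, W2):
    """Best mean row-wise cosine similarity over all row permutations of W2.

    Brute force over permutations: fine for toy sizes, exponential in
    general. Illustrative only -- the cited paper uses a recursive
    algorithm over tensors, not this matrix-level search.
    """
    n = len(W1)
    best = -1.0
    for perm in itertools.permutations(range(n)):
        score = sum(cosine(W1[i], W2[p]) for i, p in enumerate(perm)) / n
        best = max(best, score)
    return best
```

Two layers whose units are merely permuted score 1.0 under this measure, while an elementwise weight comparison would report them as different.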
- Reducing cross-sample prediction churn in scientific machine learning
  Cross-sample prediction churn between bootstrap-trained classifiers reaches 8-22% on chemistry benchmarks; K-bootstrap bagging reduces it by 40-54%, and twin-bootstrap training with a symmetric-KL consistency loss cuts it by a further 45% (median) at matched 2x compute.
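The two quantities in this summary are easy to state precisely: churn is the fraction of samples on which two independently trained classifiers disagree, and the consistency term is a symmetric KL divergence between their predicted distributions. A minimal sketch (function names are assumptions; the paper's training setup is not reproduced here):

```python
import math

def churn(preds_a, preds_b):
    """Fraction of samples on which two classifiers' hard predictions disagree."""
    return sum(a != b for a, b in zip(preds_a, preds_b)) / len(preds_a)

def sym_kl(p, q, eps=1e-12):
    """Symmetric KL divergence between two probability vectors; usable as
    a consistency penalty between twin models' softmax outputs."""
    def kl(x, y):
        return sum(xi * math.log((xi + eps) / (yi + eps)) for xi, yi in zip(x, y))
    return kl(p, q) + kl(q, p)
```

Adding a `sym_kl` term between the twin bootstrap models' outputs to the training loss pushes their predicted distributions together, which is what drives churn down.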
- Inside-Out: Measuring Generalization in Vision Transformers Through Inner Workings
  Circuit-based metrics computed from Vision Transformer internals are better label-free proxies for generalization under distribution shift than existing baselines such as model confidence.
- Cross-Model Consistency of Feature Importance in Electrospinning: Separating Robust from Model-Dependent Features
  Solution concentration is the only feature whose importance is robust across ML models for electrospinning; flow rate and applied voltage show highly model-dependent importance rankings.
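One way to quantify "cross-model consistency" of feature importances is rank agreement: score each feature per model, then compute the Spearman correlation between the models' rankings. A minimal sketch assuming untied scores (the helper names are assumptions, not the paper's):

```python
def importance_ranks(scores):
    """Rank features by importance score, 0 = most important (assumes no ties)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    ranks = [0] * len(scores)
    for position, i in enumerate(order):
        ranks[i] = position
    return ranks

def spearman(scores_a, scores_b):
    """Spearman rank correlation between two models' feature-importance scores."""
    n = len(scores_a)
    ra, rb = importance_ranks(scores_a), importance_ranks(scores_b)
    d2 = sum((a - b) ** 2 for a, b in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

High correlation across model pairs indicates robust rankings; features whose ranks flip between models, as the paper reports for flow rate and applied voltage, drag the correlation down.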