pith. machine review for the scientific record.

Improving language understanding by generative pre-training

1 Pith paper cites this work. Polarity classification is still indexing.

1 Pith paper citing it

fields: cs.CV (1)

years: 2021 (1)

verdicts: ACCEPT (1)

representative citing papers

Masked Autoencoders Are Scalable Vision Learners

cs.CV · 2021-11-11 · accept · novelty 8.0

Masked autoencoders with asymmetric encoder-decoder and 75% masking ratio enable scalable self-supervised pre-training of vision transformers, achieving 87.8% ImageNet-1K accuracy with ViT-Huge using only unlabeled data.
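The masking scheme summarized above (randomly hiding 75% of image patches so the encoder processes only the visible quarter) can be sketched in a few lines. This is an illustrative NumPy sketch of MAE-style random patch masking, not the paper's implementation; the function name and shapes are assumptions.

```python
import numpy as np

def random_mask(patches, mask_ratio=0.75, seed=0):
    """MAE-style random masking (illustrative sketch).

    patches: (num_patches, dim) array. Returns the visible patches the
    encoder would see, plus the kept/masked index sets the decoder would
    need to reconstruct the full image.
    """
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n)         # random shuffle of patch indices
    keep = np.sort(perm[:n_keep])     # indices of visible patches
    masked = np.sort(perm[n_keep:])   # indices the decoder must reconstruct
    return patches[keep], keep, masked

# example: 196 patches (a 14x14 grid) of dim 768, as in a ViT-Base layout
patches = np.zeros((196, 768))
visible, keep, masked = random_mask(patches)
# with a 75% ratio the encoder sees only 49 of the 196 patches
```

The asymmetry in the cited paper comes from running the full encoder only on the visible subset, which is what makes pre-training at high masking ratios cheap.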

citing papers explorer

Showing 1 of 1 citing paper.

  • Masked Autoencoders Are Scalable Vision Learners cs.CV · 2021-11-11 · accept · none · ref 47
