pith. machine review for the scientific record.

arxiv: 1711.05225 · v3 · submitted 2017-11-14 · cs.CV · cs.LG · stat.ML

Recognition: unknown

CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning

Authors on Pith: no claims yet
classification: cs.CV · cs.LG · stat.ML
keywords: chexnet, chest, diseases, radiologists, algorithm, chestx-ray14, detect, performance
original abstract

We develop an algorithm that can detect pneumonia from chest X-rays at a level exceeding practicing radiologists. Our algorithm, CheXNet, is a 121-layer convolutional neural network trained on ChestX-ray14, currently the largest publicly available chest X-ray dataset, containing over 100,000 frontal-view X-ray images with 14 diseases. Four practicing academic radiologists annotate a test set, on which we compare the performance of CheXNet to that of radiologists. We find that CheXNet exceeds average radiologist performance on the F1 metric. We extend CheXNet to detect all 14 diseases in ChestX-ray14 and achieve state of the art results on all 14 diseases.

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 9 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Learning from Compressed CT: Feature Attention Style Transfer and Structured Factorized Projections for Resource-Efficient Medical Image Analysis

    cs.CV 2026-05 unverdicted novelty 7.0

    CT-Lite combines Feature Attention Style Transfer (FAST) and Structured Factorized Projections (SFP) with contrastive learning to reach AUROC within 5-7% of uncompressed baselines on compressed CT volumes across three...

  2. Scaling Vision Models Does Not Consistently Improve Localisation-Based Explanation Quality

    cs.CV 2026-05 accept novelty 6.0

    Scaling vision models by depth and parameter count does not consistently improve localisation-based explanation quality across architectures, datasets, and post-hoc methods; smaller models often perform comparably or better.

  3. Higher Resolution, Better Generalization: Unlocking Visual Scaling in Deep Reinforcement Learning

    cs.LG 2026-05 unverdicted novelty 5.0

    Higher-resolution observations with global-average-pooling encoders improve RL performance and generalization by enabling more localized visual attention, yielding up to 28% gains over standard Impala encoders.

  4. A Unified Open-Set Framework for Scalable PUF-Based Authentication of Heterogeneous IoT Devices

    cs.CR 2026-05 unverdicted novelty 5.0

    A scalable helper-data-free open-set PUF authentication framework using OpenGAN unifies heterogeneous PUF responses for 100% closed-set accuracy and near-zero open-set errors with up to 45 devices.

  5. Grounded Multimodal Retrieval-Augmented Drafting of Radiology Impressions Using Case-Based Similarity Search

    q-bio.QM 2026-03 unverdicted novelty 5.0

    A case-based multimodal RAG system for chest radiograph impressions achieves Recall@5 above 0.95 and produces citation-traceable drafts.

  6. Explanation-Aware Learning for Enhanced Interpretability in Biomedical Imaging

    cs.CV 2026-05 unverdicted novelty 4.0

    Adding explanation supervision to training improves spatial alignment of saliency maps with clinical annotations on chest X-rays while keeping predictive accuracy comparable.

  7. Improving Imbalanced Multi-Label Chest X-Ray Diagnosis via CBAM-Enhanced CNN Backbones

    cs.CV 2026-05 unverdicted novelty 4.0

CBAM-enhanced CNN backbones reach a mean AUC of 0.8695 on the ChestX-ray14 dataset for imbalanced multi-label chest X-ray pathology classification, outperforming listed baselines.

  8. Momentum-Anchored Multi-Scale Fusion Model for Long-Tailed Chest X-Ray Classification

    cs.CV 2026-05 unverdicted novelty 4.0

    A new neural network stabilizes features for rare chest X-ray diseases via momentum anchoring and multi-scale fusion on EfficientNet, achieving 0.8682 AUC on ChestX-ray14.

  9. CBAM-Enhanced DenseNet121 for Multi-Class Chest X-Ray Classification with Grad-CAM Explainability

    eess.IV 2026-04 unverdicted novelty 3.0

    CBAM-DenseNet121 reaches 84.29% mean test accuracy on three-class chest X-ray classification with Grad-CAM visualizations showing plausible lung regions.
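Two of the citing papers (entries 7 and 9) pair attention-augmented CNN backbones with Grad-CAM visualizations. Grad-CAM itself is a standard technique that fits in a few lines: weight a convolutional layer's activations by the spatially averaged gradient of the target class score, then apply ReLU. A hypothetical minimal PyTorch sketch; the toy model and names here are placeholders, not those papers' code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def grad_cam(model, layer, x, class_idx):
    """Minimal Grad-CAM: weight `layer`'s activations by the spatial
    average of the class-score gradients, then ReLU and normalize."""
    acts, grads = {}, {}
    h1 = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    score = model(x)[0, class_idx]       # scalar score for the target class
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    weights = grads['g'].mean(dim=(2, 3), keepdim=True)  # GAP over gradients
    cam = F.relu((weights * acts['a']).sum(dim=1))       # weighted activation sum
    return (cam / (cam.max() + 1e-8)).detach()           # normalize to [0, 1]

# toy model: one conv layer, then a 3-class linear head (placeholder, not a real backbone)
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 3))
cam = grad_cam(model, model[0], torch.randn(1, 3, 32, 32), class_idx=0)
print(cam.shape)  # torch.Size([1, 32, 32])
```

In the cited papers the target layer would be the last convolutional block of the backbone, and the resulting map is upsampled and overlaid on the X-ray to show which lung regions drove the prediction.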