pith. machine review for the scientific record.

arxiv: 1710.10577 · v2 · submitted 2017-10-29 · 💻 cs.CV


Examining CNN Representations with respect to Dataset Bias

keywords: bias, dataset, relationships, representations, testing, attribute, attributes, caused
original abstract

Given a pre-trained CNN without any testing samples, this paper proposes a simple yet effective method to diagnose feature representations of the CNN. We aim to discover representation flaws caused by potential dataset bias. More specifically, when the CNN is trained to estimate image attributes, we mine latent relationships between representations of different attributes inside the CNN. Then, we compare the mined attribute relationships with ground-truth attribute relationships to discover the CNN's blind spots and failure modes due to dataset bias. In fact, representation flaws caused by dataset bias cannot be examined by conventional evaluation strategies based on testing images, because testing images may also have a similar bias. Experiments have demonstrated the effectiveness of our method.
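The abstract describes mining latent relationships between attribute representations and comparing them against ground-truth relationships to flag bias-induced flaws. The paper's actual mining procedure is not given here, so the sketch below is a hypothetical stand-in: it uses a simple correlation matrix over the CNN's attribute scores as the "mined" relationships and flags attribute pairs whose mined relationship diverges from the ground truth. The function name, the threshold, and the use of `numpy.corrcoef` are illustrative assumptions, not the authors' method.

```python
import numpy as np

def diagnose_attribute_bias(pred_scores, gt_relation, attribute_names, threshold=0.5):
    """Flag attribute pairs whose mined relationship disagrees with ground truth.

    pred_scores: (n_images, n_attrs) CNN attribute scores on unlabeled images.
    gt_relation: (n_attrs, n_attrs) ground-truth relationship matrix in [-1, 1]
                 (e.g. +1 = attributes should co-occur, -1 = mutually exclusive).
    Note: correlation here is only a placeholder for the paper's relationship mining.
    """
    # "Mined" attribute relationships: pairwise correlation of predicted scores.
    mined = np.corrcoef(pred_scores, rowvar=False)
    flags = []
    n = len(attribute_names)
    for i in range(n):
        for j in range(i + 1, n):
            gap = abs(mined[i, j] - gt_relation[i, j])
            if gap > threshold:  # large mismatch suggests a dataset-bias artifact
                flags.append((attribute_names[i], attribute_names[j], round(gap, 2)))
    return flags
```

For example, if a network's "smiling" and "frowning" scores are strongly positively correlated while the ground truth marks them mutually exclusive, the pair is flagged even though no labeled test images were used, which mirrors the abstract's point that a biased test set would hide such flaws.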

This paper has not been read by Pith yet.
