pith. machine review for the scientific record.

arXiv: 1811.07747 · v1 · submitted 2018-11-19 · cs.LG · stat.ML

Recognition: unknown

How far from automatically interpreting deep learning

keywords: learning, deep, interpretability, cognitive, model, performances, problem, solution
original abstract

In recent years, deep learning researchers have focused on finding the interpretability behind deep learning models. However, human cognitive competence does not yet fully cover deep learning models; in other words, there is a gap between the deep learning model and the human cognitive mode. How to evaluate and shrink this cognitive gap is an important issue. This paper concerns interpretability evaluation, the relationship between a model's generalization performance and its interpretability, and methods for improving interpretability. A universal learning framework is put forward to solve the equilibrium problem between the two performances. The uniqueness of the solution to this problem is proved, and the condition for a unique solution is obtained. A probability upper bound on the sum of the two performances is analyzed.
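The abstract does not spell out the equilibrium problem it refers to. One plausible formalization, given only as a hedged sketch and not the paper's actual formulation, is a weighted trade-off between a generalization risk and an interpretability penalty, where strict convexity would give the kind of uniqueness the abstract claims:

```latex
% Hypothetical trade-off objective (assumption; symbols are illustrative,
% not taken from the paper):
%   R_gen(f): generalization risk of model f
%   R_int(f): interpretability penalty of model f
%   \lambda > 0: weight balancing the two performances
%   \mathcal{F}: hypothesis class
\min_{f \in \mathcal{F}} \; J(f) \;=\; R_{\mathrm{gen}}(f) \;+\; \lambda \, R_{\mathrm{int}}(f)
% If J is strictly convex on a convex \mathcal{F}, the minimizer f^* is
% unique, which would mirror the uniqueness-of-solution claim above.
```

Under this reading, the "probability upper bound of the sum of the two performances" would be a high-probability bound on R_gen(f*) + R_int(f*); again, this is an interpretation, not a statement from the paper itself.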

This paper has not been read by Pith yet.

discussion (0)
