This is not an easy topic. But what problem are we actually trying to solve? We cannot easily explain how visual perception works in our brains, yet we know it works; sensory information is hard to interpret or explain fully. Do we really need to justify our model, or is it enough to ensure the neural net approximates the real world accurately? Are we facing a problem of explanation, or do we simply lack a robust way to avoid overfitting? Can we generalize well and resist adversarial examples? Can we improve sample efficiency and knowledge transfer?
I don’t think we understand any of the questions above well, or even which questions need to be answered next. So it is very hard for me to comment on the model explanation you mentioned, or on what the priority should be. Do we need any explanation beyond knowing that the cost function is low on the test data? I don’t have the answer for now.