Explaining what learned models predict: In which cases can we trust machine learning models and when is caution required?
I don’t understand this subject well, but there is an article you might find interesting.
It discusses how ML models generally predict outputs with high accuracy but without providing much insight into how or why a particular output was obtained. This black-box nature makes it hard to build trust, which leads to the need for ‘interpretable ML models’. Here is the link:
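One common way to get some interpretability out of a black-box model (not necessarily the approach the article takes) is a "global surrogate": fit a simple, human-readable model to mimic the black box's predictions. Below is a minimal sketch of that idea using scikit-learn; the dataset, model choices, and depth limit are all illustrative assumptions, not from the article.

```python
# Sketch of a global surrogate: approximate a black-box model
# with a shallow decision tree whose rules a human can read.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data standing in for a real problem (assumption for the demo)
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# The "black box": accurate, but hard to inspect directly
black_box = RandomForestClassifier(random_state=0).fit(X, y)
bb_pred = black_box.predict(X)

# The surrogate is trained on the black box's OUTPUTS, not the true labels,
# so it explains the model rather than the data
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_pred)

# Fidelity: how often the surrogate agrees with the black box.
# High fidelity means the readable rules are a fair summary of the model.
fidelity = (surrogate.predict(X) == bb_pred).mean()
print(f"surrogate fidelity: {fidelity:.2f}")

# The tree itself is the "explanation": a set of if/then decision rules
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))
```

The printed rules show which features drive the black box's decisions, which is exactly the kind of trust-building insight the article argues is missing from raw accuracy numbers.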