Result interpretation is the area of Data Science that explains why an ML model arrived at one decision rather than another.
There are two main areas of research in this field:
- Studying the model as a black box. The algorithm is fed examples, and the relationship between the features of those examples and the model's outputs is analyzed to draw conclusions about which features matter most. The black-box approach is usually applied to neural networks.
- Studying the properties of the model itself. Here the characteristics that the model relies on internally are examined to determine their importance. This method is most often applied to algorithms based on decision trees.
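The black-box approach can be sketched with permutation importance: the model is treated purely as a predictor, and each feature is shuffled in turn to see how much the score drops. The code below is a minimal illustration using scikit-learn with synthetic data; the estimator and dataset are stand-ins, not part of the original text.

```python
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier

# Synthetic tabular data standing in for any real task.
X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=2, random_state=0)

# Any fitted estimator can be interrogated as a black box;
# a small neural network is used here for illustration.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                      random_state=0).fit(X, y)

# Shuffle one feature at a time and measure the score drop:
# the bigger the drop, the more the model relied on that feature.
result = permutation_importance(model, X, y, n_repeats=10,
                                random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: {score:.3f}")
```

Note that permutation importance never inspects the model's internals, which is exactly what makes it applicable to neural networks and other opaque models.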
For example, when forecasting defects on a production line, the object features are machine settings, the chemical composition of the raw materials, sensor readings, video from the conveyor, etc. The question to answer is whether a defective product will be produced.
Naturally, a company is interested not only in the forecast of defective production but also in the interpretation of the results, i.e., the reasons for the defects and their subsequent elimination. The reasons could be long intervals between machine maintenance, poor raw-material quality, or simply abnormal readings from some sensors that a technologist should pay attention to.
Therefore, in the context of production defect forecasting, it is not enough just to build an ML model: the model's results must also be interpreted to identify the factors that drive the defects.
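For a tree-based model, the second (model-internal) approach applies directly: the trees expose which features they actually split on. The sketch below mimics the defect-forecast scenario with synthetic data; the feature names (`hours_since_maintenance`, `raw_material_purity`, `sensor_temperature`) and the defect rule are hypothetical, invented purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000

# Hypothetical production-line features (names are illustrative).
hours_since_maintenance = rng.uniform(0, 500, n)
raw_material_purity = rng.uniform(0.90, 1.00, n)
sensor_temperature = rng.normal(70, 5, n)  # pure noise here

# Synthetic labeling rule: defects are driven by overdue
# maintenance or low raw-material purity.
defect = ((hours_since_maintenance > 400) |
          (raw_material_purity < 0.92)).astype(int)

X = np.column_stack([hours_since_maintenance,
                     raw_material_purity,
                     sensor_temperature])
names = ["hours_since_maintenance", "raw_material_purity",
         "sensor_temperature"]

model = RandomForestClassifier(n_estimators=100,
                               random_state=0).fit(X, defect)

# Impurity-based importances: how much each feature contributed
# to the splits across the forest.
for name, imp in zip(names, model.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

On data generated this way, the two causal features should dominate the importances while the noise sensor scores near zero, which is the kind of diagnosis the technologist would act on.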