When designing a classification system, one needs to understand its performance. perClass Mira provides a confusion matrix offering a detailed performance breakdown per class.

The confusion matrix highlights how labeled examples of different classes (ground truth) map to classifier output (decisions).


In this example, we have an image with a set of tomatoes in a box.



We have labeled six distinct classes.


When we build a classification model, we may switch to the Confusion matrix docked panel.


TIP: You may quickly switch to the confusion matrix by pressing the 'c' key and to the spectral plot by pressing the 's' key.




The confusion matrix shows true class labels in rows and decisions in columns. Therefore, the diagonal represents correctly classified examples and the off-diagonal elements represent the errors.

By default, the confusion matrix is normalized by the sum of each row (the total number of true class examples) to show per-class errors.
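The row normalization can be sketched as follows. This is an illustrative example with made-up labels and decisions, not perClass Mira code:

```python
# Hypothetical ground-truth labels and classifier decisions for a few pixels.
labels    = ["red", "red", "green", "green", "green", "stem"]
decisions = ["red", "red", "green", "stem",  "green", "stem"]
classes = ["red", "green", "stem"]

# Absolute counts: rows = true labels, columns = decisions.
counts = {c: {d: 0 for d in classes} for c in classes}
for t, d in zip(labels, decisions):
    counts[t][d] += 1

# Row-normalized matrix: each row sums to 1.0, so the diagonal entry is the
# fraction of that class classified correctly and off-diagonal entries are
# the per-class errors.
normalized = {
    c: {d: counts[c][d] / sum(counts[c].values()) for d in classes}
    for c in classes
}

# One of the three "green" examples was decided as "stem", giving a
# per-class error of 1/3 in the green row.
print(normalized["green"])
```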


To view the absolute, non-normalized values, toggle the Show normalized matrix command in the right-click context menu or press the Shift+N key.



The right-most column shows the total number of samples per class or, in the normalized confusion matrix, the per-class error rate.

The last row shows the number of decisions per class or, in the normalized confusion matrix, the precision. Precision is the number of correctly classified samples divided by the total number of decisions for that class. We wish to have a precision of 1.00, which means that each decision for this class is correct (pure).
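The precision computed in the last row is a per-column quantity: the diagonal count divided by the column total. A small sketch with hypothetical counts (not taken from the example image):

```python
# Rows = true labels, columns = decisions (absolute counts, made up for
# illustration).
classes = ["red", "green", "stem"]
confusion = [
    [50,  0,  0],   # true red
    [ 0, 40, 10],   # true green
    [ 0, 20, 30],   # true stem
]

# Precision of each class: correct decisions (diagonal) divided by all
# decisions made for that class (column sum).
for j, name in enumerate(classes):
    column_total = sum(confusion[i][j] for i in range(len(classes)))
    precision = confusion[j][j] / column_total
    print(f"{name}: precision {precision:.2f}")
```

Here the "green" column collects 60 decisions of which 40 are correct, so its precision is 0.67: a third of the green decisions are actually stem pixels.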


Observing the confusion matrix in our example, we can see that the green tomato and green stem/leaves classes are not well separable. This is probably caused by overlap of these classes in our data.


Although we cannot fully separate them, we may fine-tune the respective error trade-off. perClass Mira provides a fully interactive confusion matrix.


We may right-click on any field of interest, for example, true green tomato vs. the green stem/leaves decision. The context menu shows a slider which may be used to tune the respective trade-off.

By moving the slider, we can see that the error is lowered and the image decisions change to reflect the new setting. Note that our statistical model does not change; we only tune the importance of a specific class in our application. Therefore, the error removed from one confusion matrix field will move to another. In our case, there will be a higher error between true green stem/leaves and the green tomato decision.
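The idea behind such an operating-point change can be sketched as reweighting per-class scores before taking a decision. This is a conceptual illustration with hypothetical scores, not the perClass Mira implementation:

```python
# Hypothetical model outputs per pixel: (green tomato score, stem/leaves score).
# The model itself is fixed; only the decision weight changes.
scores = [
    (0.55, 0.45),
    (0.48, 0.52),
    (0.60, 0.40),
]
classes = ("green tomato", "stem/leaves")

def decide(score_pair, tomato_weight=1.0):
    """Pick the class with the highest (weighted) score."""
    weighted = (score_pair[0] * tomato_weight, score_pair[1])
    return classes[weighted.index(max(weighted))]

# Default operating point: the second pixel is decided as stem/leaves.
print([decide(s) for s in scores])

# Raising the green tomato weight flips that borderline pixel: the green
# tomato error shrinks, but any true stem/leaves pixel with similar scores
# would now be misclassified as green tomato instead.
print([decide(s, tomato_weight=1.2) for s in scores])
```

This mirrors what the slider does conceptually: the error does not disappear, it moves from one confusion matrix field to another.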



When exporting the trained model for execution with perClass Runtime, the current operating point (performance setting) is used. The deployed classifier will, therefore, reflect the situation shown in the confusion matrix.