How percentages are calculated
Matthew D. Scholefield edited this page Apr 17, 2018
The false-positive and false-negative percentages are each defined as the proportion of samples the model failed on, relative to the subset of the data labeled negative or positive, respectively. So:
- **% false positives**: `false_positives / total_negatives`, where `total_negatives = false_positives + true_negatives`
- **% false negatives**: `false_negatives / total_positives`, where `total_positives = false_negatives + true_positives`
For reference, here are the definitions of all these terms:
- **false positive**: Was not a wake word, but the model incorrectly predicts it was
- **true negative**: Was not a wake word, and the model correctly predicts it was not
- **false negative**: Was a wake word, but the model incorrectly predicts it was not
- **true positive**: Was a wake word, and the model correctly predicts it was
Given the following output:
```
=== Counts ===
False Positives: 1
True Negatives: 819
False Negatives: 6
True Positives: 43

=== Summary ===
862 out of 869
99.19 %

0.12 % false positives
12.24 % false negatives
```
The percentages were calculated as follows:

- `1 / (1 + 819) ≈ 0.0012`, i.e. **0.12 %** false positives
- `6 / (6 + 43) ≈ 0.1224`, i.e. **12.24 %** false negatives
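The arithmetic above can be reproduced directly from the counts (a sketch, not the tool's actual reporting code):

```python
# Counts from the example output above
false_positives, true_negatives = 1, 819
false_negatives, true_positives = 6, 43

# Each percentage divides the failures by the size of that half of the data
pct_false_positives = false_positives / (false_positives + true_negatives) * 100
pct_false_negatives = false_negatives / (false_negatives + true_positives) * 100

print(f"{pct_false_positives:.2f} % false positives")  # 0.12 % false positives
print(f"{pct_false_negatives:.2f} % false negatives")  # 12.24 % false negatives
```

The overall accuracy line works the same way over the whole dataset: `862 / 869 ≈ 99.19 %`.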