The following metrics accept complex-valued input. If
y_pred is real, they behave identically to the corresponding TensorFlow implementations.
If not, they cast
y_pred to real by making
y_pred = (tf.math.real(y_pred) + tf.math.imag(y_pred)) / 2.
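As a rough illustration, the cast above can be reproduced with plain NumPy, with np.real / np.imag standing in for tf.math.real / tf.math.imag (the prediction values here are made up):

```python
import numpy as np

# Complex predictions for two classes (illustrative values)
y_pred = np.array([[0.2 + 0.8j, 0.6 + 0.4j]])

# Cast to real exactly as the rule above: average of real and imaginary parts
y_pred_real = (np.real(y_pred) + np.imag(y_pred)) / 2
print(y_pred_real)  # [[0.5 0.5]]
```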
- ComplexAccuracy: Complex implementation of Accuracy
- ComplexCategoricalAccuracy: Complex implementation of CategoricalAccuracy
- ComplexPrecision: Complex implementation of Precision
- ComplexRecall: Complex implementation of Recall
- ComplexCohenKappa: Complex implementation of CohenKappa
- ComplexF1Score: Complex implementation of F1Score
update_state(self, y_true, y_pred, sample_weight=None, ignore_unlabeled=True)
- y_true – Ground truth label values.
- y_pred – The predicted probability values.
- sample_weight – Optional.
sample_weight acts as a coefficient for the metric. If a scalar is provided, then the metric is simply scaled by the given value. If
sample_weight is a tensor of size
[batch_size], then the metric for each sample of the batch is rescaled by the corresponding element in the
sample_weight vector. If the shape of
sample_weight is
[batch_size, d0, .. dN-1] (or can be broadcast to this shape), then each metric element of y_pred is scaled by the corresponding value of
sample_weight. (Note on dN-1: all metric functions reduce by 1 dimension, usually the last axis.)
- ignore_unlabeled – Default
True. Ignore cases where
labels[-1] == zeros. The
sample_weight parameter can also be used to ignore unlabeled data, but
ignore_unlabeled takes precedence over
sample_weight, so make sure to set it to
False when using
sample_weight.
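The two sample_weight cases above (scalar vs. a [batch_size] vector) can be illustrated numerically with plain NumPy; this is a sketch of the weighting arithmetic, not the metric classes themselves, and the values are made up:

```python
import numpy as np

# Toy per-sample correctness values for a batch of 4 (1 = correct, 0 = wrong)
per_sample = np.array([1., 0., 1., 1.])

# Scalar sample_weight: the metric is simply scaled by the given value
scalar_w = 0.5
print(np.mean(per_sample) * scalar_w)  # 0.375

# [batch_size] sample_weight: each sample is rescaled individually,
# then the weighted mean is taken (a weight of 0 drops that sample)
w = np.array([1., 1., 0., 2.])
print(np.sum(per_sample * w) / np.sum(w))  # 0.75
```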
Complex Average Accuracy
Average Accuracy (AA) is defined as the average of the individual class accuracies. It is used on unbalanced datasets in order to see the actual accuracy per class.
# Unbalanced dataset with 90% cases of one class and 10% of the other
y_true = np.array([[1., 0.], [1., 0.], [1., 0.], [1., 0.], [1., 0.],
                   [1., 0.], [1., 0.], [1., 0.], [1., 0.], [0., 1.]])
# Dummy classifier has learned to just predict always the first class
y_pred = np.array([[1., 0.], [1., 0.], [1., 0.], [1., 0.], [1., 0.],
                   [1., 0.], [1., 0.], [1., 0.], [1., 0.], [1., 0.]])
m = ComplexCategoricalAccuracy()
m.update_state(y_true, y_pred)
print(m.result().numpy())  # The dummy classifier has a high accuracy of 90%
>>> 0.9
m = ComplexAverageAccuracy()
m.update_state(y_true, y_pred)
print(m.result().numpy())  # But an average accuracy of just 50%
>>> 0.5
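The 50% average accuracy can be verified by computing each class accuracy directly; this is a NumPy sketch of the definition above, not the library implementation:

```python
import numpy as np

# Same unbalanced data as above: 9 samples of class 0, 1 of class 1,
# and a dummy classifier that always predicts class 0
y_true = np.array([[1., 0.]] * 9 + [[0., 1.]])
y_pred = np.array([[1., 0.]] * 10)

true_cls = y_true.argmax(axis=1)
pred_cls = y_pred.argmax(axis=1)

# Per-class accuracy, then the unweighted average over classes
per_class = [float(np.mean(pred_cls[true_cls == c] == c))
             for c in np.unique(true_cls)]
print(per_class)           # [1.0, 0.0]
print(np.mean(per_class))  # 0.5
```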