Understanding Accuracy in Scikit-Learn
Accuracy measures how often a classifier is correct: the number of correct predictions divided by the total number of samples. In Scikit-Learn, the accuracy_score function from sklearn.metrics computes this in one call.
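To make the ratio concrete, here is a minimal sketch that computes accuracy by hand, without Scikit-Learn, using example labels chosen for illustration:

```python
# Accuracy by hand: count matching predictions, divide by total samples
y_true = [0, 1, 1, 1, 0, 0, 1]
y_pred = [0, 1, 1, 0, 0, 0, 1]

correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)  # 6 correct out of 7 samples
print(accuracy)  # prints: 0.8571428571428571
```

This is exactly the quantity accuracy_score returns.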
Here's a typical usage:
from sklearn.metrics import accuracy_score

# Ground-truth labels and model predictions (one mismatch at index 3)
y_true = [0, 1, 1, 1, 0, 0, 1]
y_pred = [0, 1, 1, 0, 0, 0, 1]

# Returns the fraction of correctly classified samples (6/7 here)
accuracy = accuracy_score(y_true, y_pred)
print(f"Accuracy: {accuracy * 100:.2f}%")  # prints: Accuracy: 85.71%
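By default accuracy_score returns a fraction, but it also accepts a normalize parameter: with normalize=False it returns the raw count of correctly classified samples instead. A short sketch, reusing the same example labels:

```python
from sklearn.metrics import accuracy_score

y_true = [0, 1, 1, 1, 0, 0, 1]
y_pred = [0, 1, 1, 0, 0, 0, 1]

# normalize=False returns the number of correct predictions, not a ratio
n_correct = accuracy_score(y_true, y_pred, normalize=False)
print(n_correct)  # prints: 6
```

The count form is handy when aggregating results across batches, since counts can be summed and divided once at the end.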