Imagine you are a doctor who has developed a machine learning model to diagnose patients with a rare disease. You are excited to test your model on a dataset containing hundreds of patient records. Your model's accuracy is 95%, which sounds impressive. But as you start analyzing the results, you notice that the model is biased towards certain age groups and genders. You also realize that the model provides no insight into the underlying causes of the disease. You begin to wonder whether accuracy is enough to evaluate the performance of your model.
Real-life examples like this highlight the need for better evaluation metrics for machine learning models. Accuracy, the most commonly used measure of performance, does not always reflect the real-world impact of a model. Exotic metrics like fairness, interpretability, and robustness can provide a more holistic evaluation of a model.
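The rare-disease scenario above comes down to class imbalance. A minimal sketch with hypothetical numbers: if only 5% of patients have the disease, a useless model that predicts "healthy" for everyone still scores 95% accuracy while catching zero sick patients.

```python
# Hypothetical imbalanced dataset: 5 sick patients (1), 95 healthy (0).
y_true = [1] * 5 + [0] * 95
# Trivial model: predict "healthy" for every patient.
y_pred = [0] * 100

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
recall = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred)) / sum(y_true)

print(f"accuracy: {accuracy:.2f}")  # 0.95 -- looks impressive
print(f"recall:   {recall:.2f}")    # 0.00 -- misses every sick patient
```

Here recall (the fraction of sick patients the model actually finds) exposes what accuracy hides.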
For example, ProPublica found that software used by some US courts to predict the likelihood of defendants committing future crimes was biased against Black defendants. A study by MIT researchers discovered that commercial image recognition systems were less accurate at identifying darker-skinned individuals. In both cases, accuracy alone would have deemed the models successful, but the underlying biases were problematic. Fairness metrics can help surface these issues and ensure that a model is not discriminating against any group.
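One simple fairness metric is the demographic parity difference: the gap in positive-prediction rates between two groups. A minimal sketch, with hypothetical predictions and group labels:

```python
def demographic_parity_difference(preds, groups):
    """Absolute gap in positive-prediction rate between two groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(preds, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

# Hypothetical predictions (1 = "high risk") for two demographic groups.
preds  = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"demographic parity difference: {gap:.2f}")  # 0.40
```

Group A is flagged "high risk" 60% of the time and group B only 20%, a 0.40 gap. A model can have identical accuracy on both groups and still show a gap like this, which is exactly what accuracy alone would miss.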
Many companies are also exploring exotic metrics for evaluating their machine learning models. Google uses interpretability metrics to identify which features drive a model's predictions. Adobe uses robustness metrics to test how well a model performs under adversarial attacks. Amazon uses fairness metrics to detect potential biases in its recruitment algorithms.
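Interpretability metrics of the "which features drive the predictions" kind can be as simple as permutation importance: shuffle one feature at a time and measure how much accuracy drops. A minimal sketch with a toy model and hypothetical data (in this toy, only feature 0 actually matters):

```python
import random

random.seed(1)

# Toy classifier: only feature 0 influences the prediction.
def model(row):
    return 1 if row[0] > 0.5 else 0

X = [[0.9, 0.1], [0.8, 0.7], [0.2, 0.9], [0.1, 0.3]]
y = [1, 1, 0, 0]

def accuracy(rows):
    return sum(model(r) == t for r, t in zip(rows, y)) / len(y)

baseline = accuracy(X)

for feature in range(2):
    # Shuffle one feature column while leaving the others intact.
    shuffled = [row[:] for row in X]
    column = [row[feature] for row in shuffled]
    random.shuffle(column)
    for row, v in zip(shuffled, column):
        row[feature] = v
    drop = baseline - accuracy(shuffled)
    print(f"feature {feature}: importance (accuracy drop) = {drop:.2f}")
```

Shuffling feature 1 never changes a prediction, so its importance is zero; shuffling feature 0 can flip predictions, so it shows a positive drop. The same idea scales to real models and datasets.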
These companies are realizing that accuracy alone is not enough to evaluate the performance of their models. Exotic metrics can provide a more comprehensive picture of the model's strengths and weaknesses.
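Robustness checks like Adobe's can start even simpler than full adversarial attacks: perturb the inputs slightly and see whether accuracy holds up. A minimal sketch with a toy threshold classifier and hypothetical data:

```python
import random

random.seed(0)

# Toy classifier: predicts 1 when the input exceeds a threshold.
def clf(x):
    return 1 if x > 0.5 else 0

xs = [0.1, 0.2, 0.45, 0.55, 0.8, 0.9]
ys = [0, 0, 0, 1, 1, 1]

def acc_on(inputs):
    return sum(clf(x) == y for x, y in zip(inputs, ys)) / len(ys)

clean_acc = acc_on(xs)  # 1.00 on unperturbed inputs

# Perturb each input by up to +/-0.1 and re-measure.
perturbed = [x + random.uniform(-0.1, 0.1) for x in xs]
robust_acc = acc_on(perturbed)

print(f"clean accuracy:     {clean_acc:.2f}")
print(f"perturbed accuracy: {robust_acc:.2f}")
```

Inputs near the decision boundary (0.45 and 0.55 here) are the ones small perturbations can flip, so a large gap between clean and perturbed accuracy signals a brittle model even when the headline accuracy looks perfect.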
Conclusion
- Accuracy is not always enough to evaluate the performance of machine learning models.
- Exotic metrics like fairness, interpretability, and robustness can provide a more comprehensive evaluation of a model.
- Many companies are exploring exotic metrics and realizing the importance of holistic evaluation of their models.
References and Further Readings
- Machine Bias: There's Software Used Across the Country to Predict Future Criminals. And It's Biased Against Blacks
- Biased Algorithms Are Everywhere, and No One Seems to Care
- Beyond Accuracy: Exploring Exotic Metrics in Machine Learning
- Interpreting Deep Neural Networks with End-to-End Object-Oriented Prediction and Explanation
- Explained: Machine Learning Interpretability & Defect Classification
- Inspect your models for bias with Amazon SageMaker
Akash Mittal Tech Article