Imagine a scenario where a self-driving car malfunctions and causes a fatal accident. The investigation reveals that the car's AI system had made a decision that went against the vehicle's safety protocols. The car manufacturer claims that they cannot disclose how the AI system arrived at that decision because it is a proprietary algorithm. The victim's family and the public are left with no answers and no way to hold anyone accountable for the tragedy.
Here are some concrete examples of the real-world impact of closed AI models:
- The Gender Shades study from the MIT Media Lab, along with a follow-up audit that included Amazon's Rekognition, found that commercial facial analysis systems from companies like Amazon, IBM, and Microsoft had markedly higher error rates for women and for people with darker skin tones. The proprietary nature of these systems makes it difficult to identify and address the root causes of these biases (a simple version of this kind of disaggregated audit is sketched after this list).
- In 2018, a Tesla Model X crashed into a highway barrier in California and caught fire, killing the driver. Tesla claimed that the driver had ignored multiple warnings to keep their hands on the wheel, but the National Transportation Safety Board found that the Autopilot system had contributed to the crash. However, Tesla refused to release the full data logs from the car, citing concerns about revealing confidential business information.
- During the COVID-19 pandemic, AI models have been used to predict which patients are at risk of severe illness and need hospitalization. However, many of these models have been criticized for their lack of transparency and potential biases. For example, a study in the UK found that a widely used risk prediction model was less accurate for black and minority ethnic patients.
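To make the first example concrete, here is a minimal sketch of a disaggregated audit: computing error rates separately for each demographic subgroup, which is essentially the check the facial recognition audits performed. The data, group labels, and column names below are illustrative assumptions, not real audit results.

```python
# A minimal sketch of a disaggregated audit: per-subgroup error rates.
# All data here is hypothetical and for illustration only.
import pandas as pd

# Hypothetical audit log: each row is one prediction with ground truth
# and a demographic group attached for audit purposes.
results = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B", "B"],
    "predicted": [1,   0,   1,   0,   1,   0,   0,   1],
    "actual":    [1,   0,   0,   1,   1,   1,   0,   1],
})

# Per-group error rate: fraction of predictions that disagree with truth.
results["error"] = (results["predicted"] != results["actual"]).astype(int)
per_group = results.groupby("group")["error"].mean()
print(per_group)

# A large gap between groups is a red flag worth investigating further.
gap = per_group.max() - per_group.min()
print(f"error-rate gap across groups: {gap:.2f}")
```

Notice that this audit requires nothing exotic, only access: the predictions, the ground truth, and the subgroup labels. It is exactly this access that closed models withhold.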
The Importance of Open Models in AI Oversight
Open models, where the code and data used in AI systems can be examined by independent auditors and regulators, are crucial for ensuring transparency, accountability, and fairness in AI decision-making. Here are three reasons why:
- Transparency: by enabling auditors and regulators to examine the code and data behind an AI system, open models make it possible to see how decisions are actually reached. This is crucial for identifying and addressing biases and errors, especially in high-stakes domains like healthcare and criminal justice.
- Accountability: without open models, it can be difficult to hold anyone responsible for the consequences of AI decision-making. In the self-driving car example above, an open model would have allowed investigators and the public to understand why the AI system made the decision it did, and whether the manufacturer had properly tested and validated the system (a sketch of the kind of decision logging that supports such investigations follows this list).
- Fairness: open models enable researchers to identify and address biases and other sources of unfairness that can otherwise perpetuate and amplify existing inequities in society. By ensuring that AI decisions are based on objective and inclusive criteria, open models can help mitigate these disparities.
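As a concrete illustration of the accountability point, here is a minimal sketch of decision logging: recording each automated decision with enough context that investigators can later reconstruct and verify it. The schema and field names are illustrative assumptions, not any manufacturer's actual format.

```python
# A minimal sketch of a tamper-evident decision audit log.
# All field names and values here are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

def log_decision(model_version: str, inputs: dict, output: str) -> dict:
    """Build a record of one AI decision with a verification checksum."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # which model made the call
        "inputs": inputs,                # what the model saw
        "output": output,                # what it decided
    }
    # Hashing the record contents lets auditors detect later edits.
    payload = json.dumps(record, sort_keys=True).encode()
    record["checksum"] = hashlib.sha256(payload).hexdigest()
    return record

entry = log_decision(
    model_version="planner-2.3.1",
    inputs={"obstacle_detected": True, "speed_mph": 62},
    output="maintain_lane",
)
print(json.dumps(entry, indent=2))
```

A log like this does not by itself explain a decision, but it gives investigators a verifiable trail of what the system saw and did, which is the starting point for any accountability process.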
Practical Tips for Advancing Open Models in AI Oversight
Here are three practical steps:
- Require that AI systems used for high-stakes applications, such as healthcare, criminal justice, and transportation, be subject to independent auditing and certification by qualified experts.
- Encourage the development of open-source AI frameworks and toolkits that can be used by researchers and developers to build more transparent and accountable AI systems.
- Support research into interpretability and explainability methods for AI systems, which can help make their decision-making processes more transparent and understandable (see the sketch below for one common technique).
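As one example of the last tip, here is a minimal sketch of permutation feature importance, a widely used model-agnostic explainability technique, using scikit-learn. The synthetic dataset and random-forest model are placeholders standing in for any trained risk-prediction system.

```python
# A minimal sketch of permutation feature importance with scikit-learn.
# The dataset and model are synthetic placeholders, not a real system.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real risk-prediction dataset.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops;
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

Techniques like this only tell auditors which inputs a model relies on, not why, but even that coarse signal can flag a system that is leaning on a proxy for a protected attribute.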
Curated by Team Akash.Mittal.Blog