As an AI ethicist, Sarah Johnson works to ensure that artificial intelligence serves the greater good rather than perpetuating bias and other harms. Her passion for the field began with a story she'll never forget.
One of her acquaintances was an AI researcher hired by a company to design a system for predicting which job candidates would be successful. The system was fed large amounts of data on previous employee performance, along with information about the candidates themselves.
Eventually, the AI system began flagging candidates whose applications included words or phrases that had been disproportionately associated with poor performance in the past. The correlation may have looked logical to the system, but phrases such as "single mother" had nothing to do with a candidate's actual abilities.
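To see how that kind of failure can arise, here is a minimal, hypothetical sketch in Python. The four-resume dataset and the phrase weights are invented for illustration, and scikit-learn stands in for whatever the real screening system used, but the failure mode is the same: if the historical performance labels were biased, the model attaches a large negative weight to a phrase that says nothing about ability.

```python
# Hypothetical sketch: how a resume-screening model can learn a spurious proxy.
# The tiny dataset below is invented; real systems train on thousands of records,
# but the underlying failure mode is the same.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Past applications and whether the hire was later rated "successful" (1) or not (0).
# If the historical ratings were biased, unrelated phrases become "predictive".
applications = [
    "ten years experience, led team, single mother",
    "five years experience, strong references, single mother",
    "ten years experience, led team",
    "five years experience, strong references",
]
rated_successful = [0, 0, 1, 1]  # biased historical labels

vectorizer = CountVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(applications)
model = LogisticRegression().fit(X, rated_successful)

# Inspect the learned weights: "single mother" gets a large negative weight
# even though it says nothing about job performance.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(weights.get("single mother"))
```

Nothing in the code "knows" what the phrase means; it simply rewards whatever separates the good labels from the bad ones in the training data.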
"When I heard that, I was horrified," Johnson recalls. "I realized that the wrong use of AI could really harm people."
Johnson now works for a consulting firm that advises companies on how to develop and implement AI systems in ethical ways. She's also active in professional organizations like the IEEE's Ethics in AI and Autonomous Systems group.
The stakes are high when it comes to AI and ethics. While the technology has the potential to revolutionize everything from healthcare to transportation, it's also capable of perpetuating and even amplifying existing biases and inequalities.
One of the biggest challenges facing AI ethicists is the lack of diversity among the teams that develop these systems. Research has shown that AI models tend to reflect the biases and blind spots of the people who build them.
For example, an AI system a bank uses to evaluate loan applications might begin to flag applicants who live in low-income areas, even when they are just as creditworthy as anyone else. If the team that built the system is predominantly white and affluent, its members may not consider the factors that can lead someone to live in a low-income area, such as systemic racism or limited access to education, and may simply treat the neighborhood as a good indicator of risk.
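One way practitioners catch this is to audit the model's decisions directly. The sketch below is hypothetical, with made-up decision logs and the common "four-fifths" rule of thumb standing in for whatever threshold a real review would use, but it shows the basic shape of such a check: compare approval rates across groups and flag a large gap.

```python
# Hypothetical sketch of a simple disparate-impact check an ethics review might run
# on a loan-approval model's decisions. Data and threshold are illustrative only.
from collections import defaultdict

# (neighborhood_income_band, model_approved) pairs, as might come from a decision log.
decisions = [
    ("low", False), ("low", False), ("low", True), ("low", False),
    ("high", True), ("high", True), ("high", False), ("high", True),
]

approvals = defaultdict(lambda: [0, 0])  # group -> [approved_count, total_count]
for group, approved in decisions:
    approvals[group][0] += int(approved)
    approvals[group][1] += 1

rates = {group: approved / total for group, (approved, total) in approvals.items()}
impact_ratio = rates["low"] / rates["high"]
print(f"approval rates: {rates}")
print(f"disparate impact ratio: {impact_ratio:.2f}")  # the 'four-fifths' rule flags ratios below 0.8
```

A check like this doesn't explain why the gap exists, but it turns a vague worry about fairness into a number that a team has to account for.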
Another problem is that AI systems are often opaque. Unlike traditional software, which is based on code that humans can inspect and modify, many AI systems behave as "black boxes" whose individual decisions are difficult or impossible to explain.
While this can make AI more powerful and effective in certain applications, it also makes it difficult to spot and correct errors or biases that might creep in.
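Opacity is not absolute, though. Even when reviewers treat the model as a sealed box, they can probe it from the outside. The sketch below uses synthetic, invented data and scikit-learn's permutation importance to estimate which inputs actually drive a model's predictions; it is an illustration of the idea, not a description of how any particular production system is audited.

```python
# Hypothetical sketch: probing a "black box" model from the outside.
# Permutation importance shuffles each input in turn and measures how much
# the model's accuracy drops. Synthetic data and feature names are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
income = rng.normal(50, 15, n)
low_income_zip = (income + rng.normal(0, 5, n) < 40).astype(float)  # a proxy feature
X = np.column_stack([income, low_income_zip, rng.normal(size=n)])   # last column is pure noise
y = (income + rng.normal(0, 10, n) > 45).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)  # treated as an opaque system

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in zip(["income", "low_income_zip", "noise"], result.importances_mean):
    print(f"{name}: {importance:.3f}")
```

Techniques like this don't open the box, but they reveal whether a suspicious proxy is doing real work inside it.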
So what does being an AI ethicist actually look like on a day-to-day basis? Johnson shared a few examples of the work that she and her colleagues do.
If you're interested in becoming an AI ethicist, Johnson says there are a few key things to keep in mind:
Being an AI ethicist is a critical job that requires a deep understanding of both the technology and the ethical principles that underlie it. While the challenges are many, the rewards are significant: AI ethicists play a crucial role in ensuring that technological innovations do not come at the expense of society's most vulnerable members.