The Secrets Inside AI's Black Boxes


By Akash Mittal

In March 2018, the world witnessed the first pedestrian fatality caused by a self-driving car. A woman crossing the street was struck and killed by an autonomous Uber test vehicle in Tempe, Arizona. The vehicle's sensors detected her about six seconds before impact, yet the system failed to brake. The accident exposed a fundamental problem with one of the most sophisticated technologies of our time: artificial intelligence.

AI is capable of performing a vast array of tasks. From recognizing speech and images to driving cars and diagnosing diseases, AI is all around us. However, the inner workings of AI are often a mystery. The algorithms that enable AI to make decisions are hidden inside black boxes, making it difficult for humans to understand how they reach their conclusions.

To tackle this problem, computer scientists are now peering inside AI's black boxes. They are developing techniques to reveal the inner workings of these algorithms and improve their transparency. Let's explore some real-life examples of how computer scientists are doing this.

Example 1: Image Recognition

Image recognition is one of the most widespread applications of AI, used to identify objects in photos, videos, and live streams. The neural networks behind these systems, however, are notoriously hard to interpret: tools like Google's DeepDream, which amplifies the patterns a network has learned to recognize, show just how alien its internal representations can be. To address this, researchers at MIT developed a technique called Class Activation Mapping (CAM), which lets them visualize which parts of an image a network relied on to reach its decision.
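In spirit, CAM is simple: take the feature maps from the network's final convolutional layer, weight each one by how strongly the classifier associates it with the target class, and sum them into a heatmap. Here is a minimal sketch using NumPy with random stand-in data; the shapes and values are illustrative assumptions, not a real model:

```python
import numpy as np

# Toy stand-ins for a trained CNN's internals (assumed shapes, not a real model):
# feature_maps: output of the last conv layer, shape (C, H, W)
# class_weights: the dense classifier's weights for one class, shape (C,)
rng = np.random.default_rng(0)
feature_maps = rng.random((8, 7, 7))   # C=8 channels over a 7x7 spatial grid
class_weights = rng.random(8)

def class_activation_map(feature_maps, class_weights):
    """Weight each channel by its importance to the class, then sum.

    The result is a coarse heatmap: high values mark the image regions
    the network relied on when predicting this class.
    """
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # -> (H, W)
    cam -= cam.min()                  # normalize to [0, 1] for display
    if cam.max() > 0:
        cam /= cam.max()
    return cam

heatmap = class_activation_map(feature_maps, class_weights)
print(heatmap.shape)  # (7, 7) -- upsampled to image size when overlaid
```

In a real network the heatmap is upsampled to the input image's resolution and overlaid on it, so a human can see at a glance whether the model looked at the dog or at the grass behind it.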


Example 2: Speech Recognition

Speech recognition is another fast-growing application of AI. Amazon's Alexa, Apple's Siri, and Google's Assistant all rely on it to understand and respond to spoken commands, but these systems are imperfect and sometimes misinterpret what a user says. Researchers at the University of Maryland developed a technique called Linguistic Input Feature Extraction (LIFE) to analyze the signals in a user's speech, identifying which parts of the speech signal matter most for recognizing specific words or phrases.
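One simple, widely used way to ask which parts of a signal matter (shown here purely for illustration; it is not necessarily how LIFE works) is occlusion analysis: silence one window of audio at a time and measure how much the recognizer's confidence drops. A toy sketch, where the "recognizer" is a made-up stand-in rather than a real speech model:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(1600)   # toy 0.1 s clip at 16 kHz

def toy_confidence(signal):
    """Stand-in for a real recognizer's score for some word.

    Assumption for this demo: confidence simply tracks the energy
    in the middle of the clip (samples 600-1000).
    """
    return float(np.abs(signal[600:1000]).mean())

def occlusion_importance(signal, score_fn, window=200):
    """Silence each window in turn; a big confidence drop marks an
    important stretch of the signal."""
    baseline = score_fn(signal)
    importance = []
    for start in range(0, len(signal), window):
        occluded = signal.copy()
        occluded[start:start + window] = 0.0   # silence this window
        importance.append(baseline - score_fn(occluded))
    return np.array(importance)

scores = occlusion_importance(signal, toy_confidence)
print(scores)  # only windows overlapping samples 600-1000 score above zero
```

Swapping the toy scorer for a real model's word confidence gives a per-window importance profile over the utterance, which is the kind of output a feature-attribution technique for speech aims to produce.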


Example 3: Autonomous Driving

The Uber accident in Tempe, Arizona, underscored the need for greater transparency in autonomous driving systems. Researchers at the University of California, Berkeley, have developed "learning from demonstrations" techniques, in which the system observes and analyzes human driving behavior and learns to reproduce safe, efficient driving, grounding its decisions in concrete examples that humans can inspect.
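The simplest form of learning from demonstrations is behavioral cloning: record the states a human driver saw and the actions they took, then fit a policy that imitates them. A minimal sketch on synthetic data; the lane-keeping setup and all numbers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Demonstrations: state = lateral offset from lane center (meters),
# action = steering command the human applied. Our synthetic "human"
# steers back toward the center in proportion to the offset.
states = rng.uniform(-2.0, 2.0, size=(200, 1))
actions = -0.5 * states[:, 0] + rng.normal(0.0, 0.01, size=200)

# Fit a linear policy action = w * state + b by least squares.
X = np.hstack([states, np.ones((200, 1))])
coef, *_ = np.linalg.lstsq(X, actions, rcond=None)
w, b = coef

def policy(state):
    """Learned policy: steer back toward the lane center."""
    return w * state + b

print(round(float(w), 2))  # recovers a gain close to the demonstrator's -0.5
```

Real systems replace the linear fit with a neural network and the one-dimensional state with camera and lidar input, but the transparency benefit is the same: every behavior the policy exhibits can be traced back to the demonstrations it was trained on.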


Conclusion

From image and speech recognition to autonomous driving, the same pattern repeats: powerful AI systems whose decisions we cannot easily explain. Techniques like CAM, speech-signal analysis, and learning from demonstrations are early steps toward opening these black boxes. The more clearly we can see inside them, the more safely we can trust them with decisions that matter.

