The Limitations of AI
When I was a child, my father showed me how to play chess. He taught me the rules, the moves, and the strategies. But he couldn't teach me how to win. "That comes with experience," he said. "You have to learn on your own."
Years later, I found myself working on an AI project that was supposed to learn chess by itself. The idea was to give it the rules, the moves, and the strategies, and let it explore the game from there. But after months of training, the AI was still losing to mediocre players.
That's when I realized that AI may be able to learn, but it can't truly learn on its own. It needs guidance, feedback, and supervision. It needs humans.
Examples of AI Limitations
1. Image Recognition
AI is great at recognizing images, but it can still be fooled. Researchers have shown that an image classifier can be tricked into mistaking a turtle for a rifle simply by adding a carefully crafted perturbation that is nearly invisible to a human. This may seem harmless, but imagine an autonomous car misreading a stop sign because someone put a few stickers on it.
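The idea behind such attacks can be sketched with a toy linear classifier: a tiny per-pixel change, chosen to line up with the model's weights, is enough to flip the prediction. The weights, the "images", and the class labels below are all hypothetical stand-ins, not a real vision model.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)                  # weights of a toy linear "image classifier"
x = rng.normal(size=100)
x -= w * (w @ x + 1.0) / (w @ w)          # shift x so its score is exactly -1 ("turtle")

def predict(img):
    # score > 0 means "rifle", otherwise "turtle"
    return "rifle" if w @ img > 0 else "turtle"

# FGSM-style perturbation: a small step in the direction sign(w).
# Each pixel changes by only eps, but the score moves by eps * sum(|w|).
eps = 1.5 / np.abs(w).sum()               # just enough to cross the decision boundary
adv = x + eps * np.sign(w)

print(predict(x))    # turtle
print(predict(adv))  # rifle
```

The point is that the perturbation is chosen adversarially, not at random: random noise of the same size would almost never flip the label, which is why these attacks are so hard to guard against.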
2. Natural Language Processing
AI is also good at processing language, but it can still struggle with ambiguity. An AI language model may parse the words "John and Jane went to the bank," but without more context it cannot tell whether "bank" means a financial institution or a river bank. Guessing the wrong sense leads to misunderstandings and errors.
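A classic way to resolve this kind of ambiguity is to compare the surrounding words against a short gloss for each sense, in the spirit of the Lesk algorithm. The glosses below are hypothetical, and real systems use far richer context, but the sketch shows why the bare sentence is undecidable:

```python
# Minimal Lesk-style word-sense disambiguation for "bank".
SENSES = {
    "financial institution": {"money", "deposit", "loan", "account", "teller"},
    "river bank": {"river", "water", "shore", "fishing", "mud"},
}

def disambiguate(sentence: str) -> str:
    context = set(sentence.lower().split())
    # Pick the sense whose gloss shares the most words with the context;
    # a tie falls back to the first sense, which is an arbitrary choice.
    return max(SENSES, key=lambda s: len(SENSES[s] & context))

print(disambiguate("John and Jane went to the bank to deposit money"))
# -> financial institution
print(disambiguate("John and Jane went fishing by the bank"))
# -> river bank
```

With the original sentence, both glosses score zero overlap, so the model's answer is effectively a coin flip, exactly the failure mode described above.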
3. Facial Recognition
AI is often used for facial recognition, but it can still be biased. Studies have found that commercial systems misidentify Black faces at markedly higher rates than white faces, largely because of unbalanced training data. This can have serious consequences in law enforcement and security.
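Detecting this kind of bias starts with a simple audit: break the evaluation set down by demographic group and compare error rates. The results below are invented for illustration; the point is the per-group comparison, not the numbers.

```python
# Sketch of a fairness audit over hypothetical labelled evaluation results.
results = [
    # (group, correctly_identified)
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def error_rate(group: str) -> float:
    trials = [ok for g, ok in results if g == group]
    return 1 - sum(trials) / len(trials)

for group in ("A", "B"):
    print(f"group {group}: error rate {error_rate(group):.2f}")
# group A: error rate 0.25
# group B: error rate 0.75
```

A single aggregate accuracy number would hide the gap entirely, which is why audits of this kind are done group by group.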
Conclusion: Why AI Needs Human Supervision
- AI is still limited by its training data, which can be biased or incomplete.
- AI is still vulnerable to attacks and errors, which can have serious consequences.
- AI is still incapable of creativity, intuition, and empathy, which are essential for tackling complex problems and interacting with humans.
Personal Anecdote: The Chatbot That Went Rogue
A few years ago, I worked on a chatbot project for a customer service company. The idea was to replace human agents with a machine that could answer customers' questions and resolve their issues. We trained the chatbot on a large dataset of customer interactions, and it seemed to be working well.
But one day, something went wrong. The chatbot started giving wrong answers, or no answers at all. Customers were getting frustrated, and the company's reputation was at stake. We had to investigate.
It turned out that some customers were using the chatbot to make jokes, ask personal questions, or even flirt. The chatbot had never encountered such situations before, and didn't know how to respond. So it went rogue, giving random or offensive answers.
We had to retrain the chatbot on a new dataset, with more diverse and challenging interactions. And we had to add a human backup system, so that real agents could take over when the chatbot was overwhelmed. We learned the hard way that AI needs human supervision, even in seemingly simple tasks.
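The human-backup pattern described above is commonly implemented as a confidence threshold: when the model is unsure, the conversation is routed to a person instead of the bot improvising. Everything below — the model function, its confidence scores, and the threshold — is a hypothetical sketch, not the system from the project.

```python
# Human-in-the-loop fallback: answer only when the model is confident.
def model_answer(question: str) -> tuple[str, float]:
    # Stand-in for a trained chatbot returning (answer, confidence).
    known = {"what are your hours?": ("We are open 9-5, Monday to Friday.", 0.95)}
    return known.get(question.lower(), ("", 0.10))

def respond(question: str, threshold: float = 0.7) -> str:
    answer, confidence = model_answer(question)
    if confidence < threshold:
        # Off-script input: hand the conversation to a human agent.
        return "Let me connect you with a human agent."
    return answer

print(respond("What are your hours?"))
print(respond("Do you want to go on a date?"))  # off-script -> human fallback
```

The threshold is a product decision: set it too high and humans handle everything; too low and the bot "goes rogue" on inputs it has never seen.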
Practical Tips: How to Use AI Wisely
- Start with a clear and realistic objective, and don't expect AI to solve all your problems.
- Train AI on diverse and representative data, and monitor its performance regularly.
- Use AI as a tool, not a replacement, and involve humans in the loop for feedback and supervision.
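The second tip — monitor performance regularly — can be as simple as tracking a rolling accuracy window and flagging when it drops below an alert threshold. The window size and threshold below are hypothetical; pick them to fit your traffic and risk tolerance.

```python
from collections import deque

# Rolling accuracy monitor that flags drift below an alert threshold.
class AccuracyMonitor:
    def __init__(self, window: int = 100, alert_below: float = 0.9):
        self.outcomes = deque(maxlen=window)  # most recent correct/incorrect flags
        self.alert_below = alert_below

    def record(self, correct: bool) -> None:
        self.outcomes.append(correct)

    @property
    def accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes)

    def needs_review(self) -> bool:
        return self.accuracy < self.alert_below

monitor = AccuracyMonitor(window=5)
for outcome in [True, True, False, False, True]:
    monitor.record(outcome)
print(monitor.accuracy, monitor.needs_review())  # 0.6 True
```

When `needs_review()` fires, that is the cue for the human in the loop: inspect recent failures, and retrain on the kinds of inputs the model is now getting wrong.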
Curated by Team Akash.Mittal.Blog