Why neural net pioneer Geoffrey Hinton is sounding the alarm on AI

Geoffrey Hinton, the computer scientist who helped pioneer deep learning neural networks, has been warning about the dangers of artificial intelligence (AI) and machine learning for years. In a recent interview with MIT Sloan, he reiterated his concerns and shared his thoughts on what we can do to ensure AI is a force for good.

Concrete examples

Hinton's concerns center on the potential for AI systems to make consequential decisions without human intervention. That could lead to a range of unintended consequences, such as biased decision-making, accelerating automation of jobs, and autonomous weapons that act without human oversight.

One example Hinton cites is the use of facial recognition technology by law enforcement agencies. While the technology can help identify suspects, its error rates have been shown to vary across demographic groups, and it can be used to unfairly target people of color and other marginalized communities.
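To make the concern concrete, here is a minimal sketch of one way such a system could be audited: computing false-positive rates separately for each demographic group instead of in aggregate. The records and group names below are hypothetical, purely for illustration.

    from collections import defaultdict

    # Each audited decision: (group, predicted_match, actual_match).
    # These records are hypothetical placeholders, not real data.
    records = [
        ("group_a", True,  False),   # false positive
        ("group_a", False, False),
        ("group_a", False, False),
        ("group_b", True,  True),
        ("group_b", True,  False),   # false positive
        ("group_b", False, False),
    ]

    false_pos = defaultdict(int)   # false positives per group
    negatives = defaultdict(int)   # actual non-matches per group

    for group, predicted, actual in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                false_pos[group] += 1

    for group in sorted(negatives):
        print(f"{group}: false-positive rate = {false_pos[group] / negatives[group]:.0%}")

A persistent gap between groups on a large, properly sampled audit set is a signal that the system will burden one population more than another.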

Another example is the increasing automation of jobs. Hinton notes that while automation brings gains in efficiency and productivity, it can also eliminate jobs and widen inequality. As machines become capable of ever more complex tasks, there is a risk that large portions of the population will be left behind.

Hinton also warns of the dangers of autonomous weapons, which could be programmed to make life-or-death decisions without human oversight. In a letter to the United Nations, he and other AI researchers called for a ban on the development and deployment of such weapons.

Three recommendations

So what can be done to ensure that AI is used for good rather than harm? Hinton offers several suggestions:

  1. Regulation: He believes governments should regulate AI much as they regulate other high-stakes industries. That would help ensure the technology is developed responsibly and is not used to violate individuals' rights or deepen inequality.
  2. Transparency: Hinton argues that AI systems should be built so that people can understand how decisions are made and who is accountable for them. That builds trust and reduces the risk of unintended consequences (a minimal sketch of one transparency technique follows this list).
  3. Education: Finally, Hinton believes greater education and awareness around AI are needed, among both policymakers and the general public, so that society is prepared for the changes automation will bring.
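To illustrate the transparency point, here is a minimal sketch of one common technique, permutation importance, which estimates how much each input feature drives a model's predictions by shuffling that feature and measuring the drop in accuracy. The synthetic data, feature names, and model choice are assumptions for illustration, not a method Hinton specifically endorses.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 3))                  # columns: income, age, noise
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # label depends only on the first two

    model = RandomForestClassifier(random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

    for name, score in zip(["income", "age", "noise"], result.importances_mean):
        print(f"{name}: importance = {score:.3f}")

In this toy setup the noise column should score near zero. A real decision-making system warrants far more thorough explanation tooling, but even this level of reporting helps establish what the model relies on and who is accountable for it.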

Personal anecdotes

Hinton's concerns about AI are not just theoretical. In his own work, he has seen firsthand the potential for bias and error in machine learning algorithms. For example, he was involved in a project to develop a machine learning algorithm that could predict whether a patient had pneumonia based on their X-ray images.

"We found that the algorithm was very good at predicting pneumonia, but it was doing it using subtle clues like whether there was a birth date on the X-ray image, which had nothing to do with pneumonia. When we looked into it further, we found that the algorithm had been trained on a dataset that was biased towards patients who had been to the emergency room, so it was not representative of the general population."

Hinton believes that such biases can be addressed through better data collection and more representative datasets. He also emphasizes the importance of diversity in the teams that are developing these algorithms.
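On the representativeness point, a simple first check is to compare the group makeup of a training set against the population the model is meant to serve. A minimal sketch, with entirely hypothetical proportions echoing the emergency-room skew above:

    from collections import Counter

    training_groups = ["er_patient"] * 800 + ["outpatient"] * 200    # hypothetical sample
    reference = {"er_patient": 0.30, "outpatient": 0.70}             # hypothetical population

    counts = Counter(training_groups)
    total = sum(counts.values())
    for group, target in reference.items():
        observed = counts[group] / total
        print(f"{group}: {observed:.0%} of training data vs {target:.0%} of population")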

Practical tips

If you are developing AI or working with machine learning systems, several practical steps follow from Hinton's points:

  1. Audit your training data: check that it represents the population the model will serve, not just whoever was easiest to sample (recall the emergency-room skew above).
  2. Test for shortcuts: corrupt or remove suspect features and confirm that performance rests on genuine signal, not artifacts like a date stamp.
  3. Measure errors by group: report error rates separately for different populations, not only in aggregate.
  4. Keep a human in the loop: route low-confidence or high-stakes decisions to human review rather than acting automatically (a minimal sketch follows this list).
  5. Document your system: record how it reaches its outputs and who is responsible for them.
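For the human-in-the-loop tip, here is a minimal sketch of a confidence gate: the system acts automatically only when the model is confident, and escalates everything else to a person. The threshold is a hypothetical placeholder that would need tuning for each application's level of risk.

    REVIEW_THRESHOLD = 0.80   # hypothetical; tune per application and stakes

    def decide(probability: float) -> str:
        """Act automatically only at high confidence; otherwise escalate to a human."""
        if probability >= REVIEW_THRESHOLD:
            return "auto_approve"
        if probability <= 1 - REVIEW_THRESHOLD:
            return "auto_reject"
        return "human_review"

    for p in (0.95, 0.55, 0.10):
        print(f"p={p:.2f} -> {decide(p)}")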

