Imagine chatting with a friend online when they suddenly start saying things that are completely out of character. After a few minutes, you realize you're not actually talking to them at all - you're talking to a machine. This is the reality of using generative AI tools like ChatGPT, and it's why we need to stop using them altogether.
Generative AI tools are designed to simulate human-like output, and they can be incredibly convincing. But that power carries a hidden cost: it can easily be turned to manipulation and deception. Take DeepNude, an app that used generative AI to fabricate realistic nude images of women from ordinary photos. It was shut down in 2019 amid public backlash, after it became clear the tool was being used to harass women and produce non-consensual imagery.
Synthetic media has also been used to spread fake news and propaganda, with damaging consequences. In 2016, for example, Russian operatives used networks of Twitter bots and fake accounts to spread false information during the US presidential election, mimicking human behavior convincingly enough to reach millions of people. Those campaigns predated today's generative models - which makes the same tactic far cheaper, faster, and harder to detect now.
It's not just rogue actors who exploit these systems - established companies also use algorithms to manipulate people for profit. Facebook has faced scrutiny over ad-targeting algorithms that let advertisers exploit people's fears and insecurities, and Google's search and autocomplete algorithms have been accused of perpetuating harmful stereotypes and reinforcing bias.
These examples illustrate the dark side of generative AI, and they should give us pause. As a society, we need to find ways to communicate with machines that don't require trusting imperfect and potentially dangerous algorithms. We need to invest in research that explores new paradigms of human-machine interaction, such as natural language programming and multimodal interfaces.
Akash Mittal