It all started with a harmless chatbot named ChatGPT. Developed by OpenAI, the bot was built to hold natural-language conversations with users, and it did so remarkably well, using a large language model to generate strikingly human-like responses.
However, it wasn't long before ChatGPT attracted the attention of EU lawmakers. They became concerned that generative AI like ChatGPT could be put to malicious purposes, such as spreading fake news or creating deepfakes, and feared that such applications could manipulate public opinion and cause harm to society.
The EU lawmakers began drafting the Artificial Intelligence Act in a bid to regulate the use of AI, with a focus on high-risk applications, and generative AI was at the top of their list. The proposed act included provisions to prohibit AI systems that manipulate human behaviour or produce false information posing a threat to safety, privacy, or fundamental rights.
Other organisations, including DeepMind and Mozilla (with its Firefox Monitor service), have also faced scrutiny from the EU over their data practices. They have been required to disclose how much user data they collect and to obtain explicit consent before collecting it.
The EU's move to regulate AI has drawn mixed reactions. Supporters call it a necessary step to protect the public from the harms of AI, while critics argue that it stifles innovation and undermines competitiveness in the global market.
Akash Mittal Tech Article