Have you ever used a chatbot to book a flight, order food, or ask about a product or service? If so, you are not alone. Chatbots are becoming an increasingly popular channel of communication between businesses and customers: they provide instant answers to queries, offer personalized recommendations, and boost customer engagement.
However, not all chatbots are created equal. Some are programmed to exploit customer data for commercial gain, others may discriminate against certain groups of people, and some may even violate users' fundamental rights to privacy and data protection.
That's why the European Parliament has recently raised concerns about the use of chatbots, particularly those based on artificial intelligence (AI), and has called for greater regulation to ensure they respect the EU's fundamental rights.
A recent report by the European Consumer Organisation examined 20 popular chatbots in the EU and found that almost a third engaged in unfair commercial practices, such as misleading consumers and withholding important information. The study also found that some chatbots collected excessive data without users' consent, and that others made discriminatory recommendations based on a user's gender, race, or age.
Another study found that some chatbots used in the public sector may violate users' rights to privacy and data protection. For instance, a chatbot the Dutch government used to answer citizens' questions about COVID-19 collected sensitive personal data without proper consent and shared it with third-party service providers.
As an AI chatbot developer, I have witnessed first-hand how easy it is to unintentionally violate users' rights, even when designing the chatbot with the best intentions. For example, we once programmed a chatbot to learn and improve its responses based on users' feedback. However, one user with a strong bias towards a certain group of people repeatedly provided negative feedback about them, which led the chatbot to adopt the same bias in its recommendations. This was a wake-up call for us to pay more attention to the potential biases embedded in the data we use to train the chatbot.
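To make that failure mode concrete, here is a minimal, hypothetical Python sketch (not our actual codebase) of how a naive feedback-weighted ranker absorbs a single user's bias, and how capping each user's influence per item blunts it. Every name in it (FeedbackRanker, provider_a, biased_user, and so on) is illustrative.

```python
from collections import defaultdict

class FeedbackRanker:
    """Toy recommender that reweights items by user feedback.

    Naive version: +1 / -1 votes are summed per item, and items are
    ranked by their running score. Nothing limits how much any single
    user can influence an item's score.
    """

    def __init__(self):
        self.scores = defaultdict(float)

    def record_feedback(self, user_id, item, rating):
        # Every vote counts equally, so one persistent user can drag
        # an item's score arbitrarily low.
        self.scores[item] += rating

    def rank(self, items):
        return sorted(items, key=lambda item: self.scores[item], reverse=True)


class CappedFeedbackRanker(FeedbackRanker):
    """Same ranker, but each user's net influence on each item is
    clamped to [-cap, +cap], so one biased user cannot dominate."""

    def __init__(self, cap=1.0):
        super().__init__()
        self.cap = cap
        self.user_item_totals = defaultdict(float)

    def record_feedback(self, user_id, item, rating):
        key = (user_id, item)
        # Only apply the portion of the vote that keeps this user's
        # cumulative influence on this item within the cap.
        allowed = max(-self.cap - self.user_item_totals[key],
                      min(rating, self.cap - self.user_item_totals[key]))
        self.user_item_totals[key] += allowed
        self.scores[item] += allowed


if __name__ == "__main__":
    items = ["provider_a", "provider_b"]
    naive = FeedbackRanker()
    capped = CappedFeedbackRanker(cap=1.0)

    for ranker in (naive, capped):
        # Many users mildly prefer provider_b...
        for uid in range(5):
            ranker.record_feedback(f"user{uid}", "provider_b", +1)
        # ...but one biased user downvotes it relentlessly.
        for _ in range(50):
            ranker.record_feedback("biased_user", "provider_b", -1)
        ranker.record_feedback("user0", "provider_a", +1)

    print("naive ranking: ", naive.rank(items))   # provider_b sinks
    print("capped ranking:", capped.rank(items))  # majority view holds
```

Running the script shows the naive ranker burying provider_b on the strength of one user's fifty downvotes, while the capped ranker preserves the majority signal. Rate-limiting feedback is only one mitigation, of course; auditing the training data itself for skew, as we learned the hard way, matters just as much.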