Procedural Justice Can Address Generative AI's Trust Legitimacy Problem


AI systems that generate human-like text raise serious questions about the trust and legitimacy of their output. Procedural justice can help mitigate this problem.

The Problem

Imagine you receive an email from a trusted friend, with an article attached about a recent news event. But as you're reading the article, something seems off. The language is slightly stilted, the facts are disjointed and incomplete, and the tone is impersonal. Then you realize: your friend didn't actually write this. It was generated by an AI system that spits out human-like text.

This scenario may become more common as AI technology continues to advance. Generative AI systems can write convincing articles, reviews, and even entire books. But there's a catch: if people don't know the text was generated by an AI system, they may mistake it for authentic human writing. And this can lead to serious problems for trust and legitimacy.

The Solution

One way to address this problem is through procedural justice. Procedural justice is a concept from law and social science that refers to the fairness of the process used to reach an outcome, as distinct from the fairness of the outcome itself.

Applying procedural justice to generative AI systems means designing systems that are transparent about their origins, methods, and goals. This can help users to better understand the limitations and potential biases of the output, and to make more informed choices about whether to trust and use it.

For example, a generative AI system that writes news articles could disclose that it is a machine-generated output, give information about its training data and algorithms, and provide a list of sources used to generate the article. This would help readers to evaluate the credibility of the output and determine whether they want to rely on it for information.
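To make that concrete, here is a minimal sketch of what such a disclosure could look like in code. Everything here is illustrative: the Disclosure record, its field names, and the render_notice helper are hypothetical, invented for this example rather than taken from any real system.

```python
from dataclasses import dataclass, field


@dataclass
class Disclosure:
    """Hypothetical provenance record attached to a machine-generated article."""
    model_name: str     # which system generated the text
    training_data: str  # short description of the training corpus
    method: str         # algorithm or technique used
    sources: list[str] = field(default_factory=list)  # sources used for the article

    def render_notice(self) -> str:
        """Produce a reader-facing disclosure notice for the article."""
        lines = [
            f"This article was generated by {self.model_name}.",
            f"Training data: {self.training_data}",
            f"Method: {self.method}",
        ]
        if self.sources:
            lines.append("Sources: " + ", ".join(self.sources))
        return "\n".join(lines)


notice = Disclosure(
    model_name="a large language model",
    training_data="publicly available news articles through 2021",
    method="transformer-based text generation",
    sources=["Reuters wire report", "city council meeting minutes"],
).render_notice()
print(notice)
```

A news organization could attach a record like this to every machine-assisted story and render the notice alongside the byline, giving readers the context they need up front.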

A study published in the Journal of Experimental Psychology found that people were less likely to trust a decision made by an AI system when they didn't understand the process behind it. However, when the system was designed to be transparent and explainable, trust increased.

Another study found that when AI-generated news articles were labeled as such, readers were more likely to correctly identify them as machine-generated rather than mistake them for human writing.
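A label like the one in that study can be as lightweight as a visible tag prepended before the text reaches readers. The sketch below is a toy illustration; the label wording and the label_article function are invented for this example, not drawn from the study.

```python
AI_LABEL = "[Machine-generated content]"


def label_article(headline: str, body: str) -> str:
    """Prepend a visible AI-generation label so readers can identify the text's origin."""
    return f"{AI_LABEL}\n{headline}\n\n{body}"


print(label_article(
    "Council Approves New Budget",
    "The city council voted 5-2 on Tuesday to approve next year's budget...",
))
```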

These examples demonstrate that designing AI systems with procedural justice principles can help to build trust and legitimacy.

Conclusion: Three Key Takeaways and a Case Study

1. Disclose the origin: tell readers when text is machine-generated so they don't mistake it for human writing.
2. Explain the process: information about training data, algorithms, and sources lets readers judge the credibility of the output.
3. Transparency builds trust: research suggests that explainable, clearly labeled AI output is trusted more and misattributed less.

As a journalist, I've seen firsthand the power and potential risks of generative AI systems. I recently used a language model to help me generate a news article on a tight deadline. While the output was convincing and saved me time, I felt uneasy about whether readers would mistake it for my own writing. So I decided to disclose in the article that it was generated by a machine learning model and provide information about the source code and training data used.

This experience taught me the importance of procedural justice in AI systems. By being transparent about the origins and methods of the output, I was able to provide readers with the information they needed to evaluate the credibility of the article. And this helped to build trust and legitimacy in my work.

Curated by Team Akash.Mittal.Blog
