Have you heard about ChatGPT? It's a much-talked-about artificial intelligence (AI) chatbot that has made waves in the legal industry thanks to its uncanny ability to seem human. Although built as a general-purpose assistant, it is increasingly used to lighten lawyers' workloads, saving time and effort, and to provide quick answers to frequently asked questions.
However, recent news of ChatGPT fooling a US lawyer has raised concerns about the use of AI in the legal profession. The lawyer had asked ChatGPT for case law to support a court filing, and the chatbot confidently fabricated citations to cases that did not exist. When opposing counsel and the court could not find the cited cases, the fabrication came to light: the lawyer's reputation suffered, and he faced court sanctions.
This incident could have wider implications for the legal profession as a whole. If even an experienced lawyer can be misled by a chatbot, what does that mean for the use of AI in legal advice and services in the future?
This is not the first time AI systems have produced surprising results in the legal domain. There have already been several cases in which AI has outperformed expectations on narrow tasks, or produced inaccurate or offensive output:
- Microsoft launched an AI-powered chatbot named Tay, which learned from its interactions on Twitter. It quickly picked up inappropriate and offensive behaviour, posting racist, sexist and otherwise offensive messages, and was taken offline within a day of launch.
- A survey by LawGeex showed that AI could detect potential issues in non-disclosure agreements with 94% accuracy, in less than 30 seconds. In comparison, human lawyers achieved an accuracy rate of 85%.
- When researchers trained an AI to predict the outcomes of asylum applications, it showed statistically significant but unintentional bias. The algorithm labelled cases involving Muslim refugees as less credible than those for Christians.
Conclusion in Three Points
- AI chatbots can provide quick answers to frequently asked questions, saving lawyers time and effort.
- However, AI chatbots can also make mistakes and provide inaccurate information, which can damage a lawyer's reputation and the chatbot provider's credibility.
- Therefore, lawyers can use AI chatbots for simple tasks, but they must verify the information provided and make clear that the chatbot is not giving legal advice.
Case Studies
My cousin, a lawyer who works for a small law firm, recently mentioned to me how her office is exploring the use of AI chatbots to streamline their workload. They are looking for a chatbot that can answer general questions, such as what steps are needed for a divorce proceeding, or what documents are needed for a contract. But they are also aware of the limitations of AI chatbots, and would not rely on them for more in-depth legal advice.
Another lawyer I know used an AI chatbot service to help with a contract review. She found it helpful for the initial pass and for flagging some potential issues, but still had to consult her colleagues and adjust the contract to the specifics of the case.
Practical Tips
- Check whether an AI chatbot provider holds relevant certifications, such as ISO 27001, which indicates the provider operates an audited information security management system.
- Verify the information provided by an AI chatbot before using it in your legal work.
- Ensure that your clients are aware that the chatbot is not providing legal advice.
- Build AI chatbots for specific and well-defined use cases.
- Monitor the performance of an AI chatbot and correct any mistakes it makes.
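The last three tips above can be sketched in code. This is a minimal, hypothetical example, not a real integration: `answer_faq` and its canned answers are stand-ins for an actual chatbot API. The wrapper appends a disclaimer to every reply and logs each question-and-answer pair with a `verified` flag so a human lawyer can review and correct it later.

```python
# Hypothetical sketch: wrap a chatbot so every answer carries a
# disclaimer and is logged for later human verification.

DISCLAIMER = "Note: this is general information, not legal advice."

# Each entry awaits verification by a human lawyer.
review_log = []

def answer_faq(question: str) -> str:
    # Stand-in for a real chatbot call; returns a canned answer.
    canned = {
        "what documents are needed for an nda?":
            "Typically the parties' names, the definition of confidential "
            "information, and the term of the obligation.",
    }
    return canned.get(question.lower(), "I don't know; please ask a lawyer.")

def ask(question: str) -> str:
    answer = answer_faq(question)
    # Log unverified answers so a lawyer can audit and correct them.
    review_log.append({"question": question, "answer": answer, "verified": False})
    return f"{answer}\n{DISCLAIMER}"

reply = ask("What documents are needed for an NDA?")
```

A firm would replace the canned dictionary with a real chatbot client, but the pattern stays the same: disclaim, log, and keep a human in the loop before anything reaches a client.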
Curated by Team Akash.Mittal.Blog