Report a Problem Incident: AI Platforms & Access to Justice

Have you looked for legal help topics on AI platforms like ChatGPT, Bing Chat, Google Bard, or Oscar.ai? Have you identified a problem, where the AI gave information that was mistaken, hallucinated, biased, or otherwise low quality? Please tell us more about the incident here. Our Stanford Legal Design Lab group is gathering a database of specific examples of quality issues with AI platforms in the legal help & access to justice domain.

Give us a short, one-line description of the problem you identified, e.g., "Hallucinated Housing Help Hotline" or "Mistaken Citation of a Statute".
Describe the Incident
Tell us more about the incident. What concerning behavior did you experience with the AI system? What had you input into the AI system? How far into a conversation were you?
AI Tool
Which platform did this incident happen on?
What kind of legal problem were you asking about?
Type of Incident
Does the incident fit one of these common categories? Choose the one that fits, or add another that better describes what you experienced.
  • Hallucination of a Contact/Phone/Website
  • Hallucination of an Organization
  • Hallucination of a Legal Case
  • Hallucination of a Law/Statute
  • Misrepresentation of what the law is
  • Information for Wrong Jurisdiction
  • Mistaken Understanding of Issue
  • Irrelevant Organization Referral
  • Incorrect Form or Document
If the issue didn't fit any of the above categories, or if you want to explain your thinking, please tell us more here.
Screenshot 1
Add an image file showing the incident in detail.
Attach file
Drop files here
Screenshot 2
Add another image file showing the incident in detail.
Attach file
Drop files here
If possible, please include a link to the AI chat. Export or share the chat on the platform, and paste the URL here.
