A lawsuit filed by a Frontier Airlines passenger alleging racial discrimination has been dismissed, and the plaintiff sanctioned, after it emerged that she submitted fabricated legal citations generated by ChatGPT. Kusmin Amarsingh, an attorney representing herself, claimed she was denied boarding on a connecting flight from Philadelphia to St. Louis because of her Indian descent.

The Dispute and Initial Claims

On June 13, 2023, Amarsingh was among several passengers without assigned seats on an overbooked flight. According to court documents, gate agents offered vouchers of up to $800 to encourage volunteers to take a later flight, but no one accepted. The passengers left without seats came from multiple racial backgrounds, including African American families, Hispanic individuals, and other passengers of Asian and Indian descent.

Amarsingh alleged that while other passengers were accommodated, she was left waiting, and she attributed this to her race. The airline offered a refund or rebooking, but no financial compensation. She sought $15 million in damages, citing $1,000 in out-of-pocket expenses, a missed family reunion, and emotional distress allegedly caused by racial discrimination.

Court Decision and ChatGPT Involvement

The 10th Circuit Court of Appeals ultimately upheld the dismissal of Amarsingh's case, finding that she failed to demonstrate she would have been boarded if not for discrimination. The court noted that she lacked an assigned seat, was among the last passengers considered, and that gate agents had boarded passengers of various races.

The case took a further turn when it was discovered that Amarsingh's appellate brief contained seven entirely fabricated legal citations. She blamed the errors on ChatGPT, claiming the AI had generated the false references.

Disciplinary Action and Aftermath

As a result, Amarsingh was ordered to pay Frontier $1,000 in legal fees and was referred to her state's attorney disciplinary authorities. Despite this, she challenged the dismissal again this week, arguing that the court misunderstood her claim that gate agents had mocked her Indian accent.

The incident highlights the risks of relying on AI-generated content in legal proceedings and stands as a stark warning against its uncritical use. The court's decision and the subsequent disciplinary referral send a clear message: legal professionals are accountable for the accuracy of their submissions, and every citation must be verified, even when drafted with AI tools.