Sullivan & Cromwell AI filing scandal: false legal citations in bankruptcy case

Top law firm Sullivan & Cromwell has acknowledged to a U.S. bankruptcy court that one of its recent filings in a closely watched case contained serious errors produced by artificial intelligence, including completely fabricated legal citations.

In a written submission to Judge Martin Glenn of the U.S. Bankruptcy Court for the Southern District of New York, the firm conceded that portions of an April 9 motion were drafted with the assistance of an AI tool and that internal safeguards designed to prevent exactly this kind of problem were bypassed. As a result, the filing cited non‑existent legal authorities and mischaracterized real ones.

“We deeply regret that this has occurred,” wrote Andrew Dietderich, the head of Sullivan & Cromwell’s restructuring group, in his letter to the court. He described the problematic content as AI “hallucinations,” using the now-common term for instances when generative AI systems confidently produce false or misleading information that appears plausible on its face.

The episode arose in litigation surrounding the Prince Group, a network of related entities that has been described in court filings as tied to an alleged scam operation. Sullivan & Cromwell is representing court-appointed liquidators from the British Virgin Islands in the U.S. portion of the proceedings. The April 9 motion at issue was part of those efforts, but the firm later discovered that some of the legal support in that document was unreliable because it had been generated by AI without adequate human verification.

According to the firm’s account to the court, the AI system inserted citations to authorities that simply do not exist, and in other places altered or distorted real cases and statutes in ways that changed their meaning. This is precisely the type of error pattern that has made courts and regulators increasingly wary of uncritical reliance on generative AI in legal practice.

Sullivan & Cromwell stressed that it has internal policies governing the use of AI in preparing legal documents, and that those policies, if followed, should have prevented any unverified AI-generated text from being filed with the court. In this instance, however, those protections were “not followed during the drafting process,” the firm admitted, without yet publicly specifying which individuals were involved or how the procedures failed in practice.

The firm told the court it has since corrected the record and taken steps to ensure that judges and opposing parties are not misled by the earlier submission. That typically involves submitting a revised motion or supplemental filing that removes the tainted citations, explains the source of the error, and replaces the faulty authorities with properly researched, verifiable ones. Dietderich’s letter was part of that remedial effort, aimed at preserving the integrity of the court process and the firm’s own credibility.

This incident arrives at a tense moment for the legal industry, which is rapidly experimenting with AI-powered tools while judges are simultaneously signaling that they will not tolerate sloppy or deceptive use of such technology. Courts across the United States have already begun issuing standing orders that require attorneys to certify whether and how they used AI in drafting briefs and to confirm that all citations have been independently checked against official legal databases.

The Sullivan & Cromwell disclosure underscores a key problem with generative AI in law: these systems are designed to produce fluent, authoritative-sounding text, not to guarantee factual or legal accuracy. When lawyers treat AI output as if it were a vetted research product instead of a rough first draft requiring strict verification, the risk of hallucinations turning into filed misrepresentations rises dramatically.

Sullivan & Cromwell has long been considered one of the elite players in corporate, financial, and restructuring work, and for a firm of that stature the embarrassment is acute. High-end law practices trade on reputations for precision, reliability, and professional judgment. Having to confess to a federal bankruptcy judge that a key filing leaned on made-up cases created by a machine challenges that image and places the firm under pressure to show it can still control the tools it uses.

Beyond reputational harm, there are potential legal and ethical dimensions. Lawyers have professional duties of competence, diligence, and candor to the tribunal. Even if the mistakes were unintentional and driven by faulty technology, bar regulators and courts can view the failure to verify AI-assisted work as a lapse in those duties. While there is no indication yet of sanctions in this case, other courts have previously imposed financial penalties and public reprimands when lawyers submitted AI-generated hallucinations as if they were real authorities.

The Prince Group matter also highlights how AI failures can have real-world consequences in complex financial disputes. Bankruptcy and cross‑border insolvency cases often involve large amounts of money, potentially defrauded creditors, and intricate jurisdictional questions. If a court were to rely, even indirectly, on bogus citations in deciding issues about asset recovery, jurisdiction, or creditor priorities, the fallout could be significant for investors and counterparties already dealing with the aftermath of alleged fraud.

Events like this are likely to accelerate the push for clearer, firmer policies on AI in legal practice. Many firms are now moving toward models where AI can assist with early-stage drafting or issue-spotting, but any output must be rigorously checked by human lawyers using trusted research databases before it ever reaches a client or a court. Formal training, AI-use logs, internal approval workflows, and even technical filters that flag or block unverified content are becoming part of the discussion.
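To make the idea of a technical filter concrete, here is a minimal sketch in Python of the kind of pre-filing check being discussed: it scans a draft for reporter-style citations and flags any that cannot be matched against a vetted list. The regex, the `flag_unverified_citations` helper, and the local `vetted` set are illustrative assumptions, not any firm’s actual tooling; a production filter would validate citations against official legal databases rather than a hard-coded set.

```python
import re

# Rough pattern for common U.S. reporter citations such as "570 U.S. 544"
# or "892 F.3d 1021". Real citation formats (Bluebook) are far richer;
# this pattern is illustrative only.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|F\.[23]d|F\. Supp\. [23]d|B\.R\.)\s+\d{1,4}\b"
)

def flag_unverified_citations(draft: str, verified: set[str]) -> list[str]:
    """Return citation strings in the draft that are not in the vetted set.

    `verified` stands in for a lookup against a trusted research database;
    a real filter would query an official source, not a local set.
    """
    return [c for c in CITATION_RE.findall(draft) if c not in verified]

if __name__ == "__main__":
    draft = ("As held in 570 U.S. 544, and consistent with 999 F.3d 1234, "
             "the motion should be granted.")
    vetted = {"570 U.S. 544"}  # citations a human has actually confirmed
    for citation in flag_unverified_citations(draft, vetted):
        print(f"BLOCK FILING: unverified citation {citation!r}")
```

A filter like this cannot prove a citation is real, only that it has not yet been verified; the point is to force a human checkpoint before anything reaches a court.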

From a risk-management perspective, law firms that want to harness AI’s efficiency gains need to design systems around the assumption that hallucinations will occur. That means documenting when AI is used, specifying who is responsible for final verification, and making it clear to every team member that AI suggestions are hypotheses, not authorities. Some firms are also experimenting with “closed” AI models trained only on vetted internal knowledge bases, hoping to reduce the rate of imaginative but false outputs.
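As an illustration of what that documentation might look like, below is a hypothetical Python schema for one entry in a firm-internal AI-use log, recording which tool was used, by whom, and who signed off on final verification. The `AIUseLogEntry` class and its field names are assumptions made for this sketch, not a description of any firm’s actual system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIUseLogEntry:
    """One record in a firm-internal AI-use log (illustrative schema)."""
    document_id: str                   # matter or filing identifier
    tool_name: str                     # which AI system produced the draft
    task: str                          # what the tool was asked to do
    drafted_by: str                    # lawyer who ran the tool
    verified_by: Optional[str] = None  # lawyer responsible for the final check
    verified_at: Optional[datetime] = None

    def sign_off(self, verifier: str) -> None:
        """Record that a named lawyer has verified every AI-generated passage."""
        self.verified_by = verifier
        self.verified_at = datetime.now(timezone.utc)

    @property
    def cleared_for_filing(self) -> bool:
        # An unverified entry should block the document from being filed.
        return self.verified_by is not None

entry = AIUseLogEntry(
    document_id="motion-draft-001",          # hypothetical identifier
    tool_name="internal-drafting-assistant",
    task="first draft of background section",
    drafted_by="associate-1",
)
assert not entry.cleared_for_filing  # blocked until a human signs off
entry.sign_off("partner-1")
assert entry.cleared_for_filing
```

The design choice worth noting is that verification is attached to a named person and a timestamp, so responsibility cannot silently diffuse across a team.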

For clients, the Sullivan & Cromwell case serves as a reminder to ask pointed questions about how their legal teams are incorporating AI into high‑stakes matters. Sophisticated clients are beginning to request explicit AI policies as part of their outside counsel guidelines, including requirements for human oversight and prohibitions against passing AI-generated analysis directly to regulators or courts without independent checking.

On the judicial side, judges may react by tightening disclosure rules and imposing steeper consequences for future incidents. Mandatory certifications about AI use, requirements to attach copies of all cited cases from official sources, and explicit warnings about sanctions for hallucinated citations are all tools courts are already testing. Each new high-profile failure gives courts more justification to formalize such measures.

At a broader level, the episode demonstrates the tension at the heart of AI adoption in regulated professions: the same technology that promises speed and cost savings can also amplify mistakes if used uncritically. In law, where the currency is credibility and the margin for error is thin, that tension is especially stark. The pressure to innovate runs up against the enduring expectation that lawyers will only submit to courts what they know to be accurate and supportable.

Sullivan & Cromwell’s admission that its AI safeguards were bypassed in the Prince Group bankruptcy filing will likely be cited for some time as a cautionary tale. It illustrates both how quickly AI can infiltrate core professional workflows and how essential it is to build robust, enforceable guardrails around its use. For now, the firm has sought to contain the damage by owning the error and correcting the record, but the incident will remain a reference point in the evolving debate over how far, and how fast, AI should be allowed to reshape legal practice.