Two legal professionals received warnings for submitting AI-generated arguments.
A High Court judge in England and Wales has issued a formal warning, stating that lawyers who use AI to create legal arguments with fabricated case citations could face criminal charges.
Victoria Sharp, the president of the King’s Bench Division of the High Court, along with Judge Jeremy Johnson, criticized lawyers in two separate cases for apparently using AI tools to draft arguments submitted to the court without verification.
“The misuse of artificial intelligence has serious implications for the justice system and public trust,” Sharp stated in Friday’s ruling.
The cases emerged after lower courts suspected the use of AI-generated arguments and witness statements containing false information.
The ruling revealed that in a £90 million ($120 million) lawsuit over an alleged breach of a financing agreement involving Qatar National Bank, one attorney cited 18 nonexistent cases.
Hamad Al-Haroun, the client in the case, apologized for unintentionally misleading the court with false information from publicly available AI tools. He accepted responsibility and cleared his solicitor, Abid Hussain, of blame.
Sharp found it “extraordinary that the lawyer depended on the client for legal research accuracy, rather than the reverse.”
In a separate case, barrister Sarah Forey cited five fabricated cases in a housing claim filed by a tenant against the London Borough of Haringey. Forey denied intentionally using AI but admitted she might have inadvertently done so while conducting online research.
The judge reprimanded her, stating that “Ms. Forey could have verified the cited cases by using the National Archives’ caselaw website or the law library of her Inn of Court.”
Sharp cautioned that presenting false material as authentic could amount to contempt of court or, in the most serious cases, perverting the course of justice, a criminal offense carrying a maximum sentence of life imprisonment.
The judges referred the lawyers in both cases to their respective regulatory bodies but chose not to impose harsher penalties.
Prominent figures and scientists have increasingly voiced concerns about the potential dangers of AI.
Researchers caution that AI capabilities are advancing rapidly, with some models reportedly modifying their own code in test settings. Palisade Research reported last month that in recent tests, a powerful AI model “sabotaged a shutdown mechanism to prevent itself from being turned off,” raising concerns that such behavior could emerge outside controlled experiments.