Artificial Intelligence & Robotics
Newest 'Bluebook' has 'bonkers' rule on citing to artificial intelligence
September 25, 2025, 11:52 am CDT
(Photo by Howchou, PD ineligible (books), via Wikimedia Commons)
If you are unsure about how and when to cite content generated by artificial intelligence, a new citation rule is unlikely to clear up the confusion, according to experts who spoke with LawSites.
The 22nd edition of The Bluebook: A Uniform System of Citation, released in May, includes a new Rule 18.3 for citing output from generative AI. Critics argue that the new rule "is fundamentally flawed in both conception and execution," LawSites reports.
Critics include Susan Tanner, a professor at the University of Louisville's Louis D. Brandeis School of Law, who called the new rule "bonkers" in a post on Medium.
The rule requires authors citing output from generative AI, such as ChatGPT conversations or Google search results, to save a screenshot of that output as a PDF. The rule has three sections, covering large language models, search results and AI-generated content, with slightly different citation rules for each.
One problem, Tanner said, is that the rule treats AI as a citable authority rather than a research tool.
"What would a sensible approach to AI citation look like?" Tanner wrote. "First, recognize that in 99% of cases, we shouldn't be citing AI at all. We should cite the verified sources AI helped us find."
In the rare case in which an AI output should be cited, the author should remember that the citation documents what was said by the generative AI, not the truth of what was said, Tanner said. She gives this example: "OpenAI, ChatGPT-4, 'Explain the hearsay rule in Kentucky' (Oct. 30, 2024) (conversational artifact on file with author) (not cited for accuracy of content)."
Jessica R. Gunder, a clinical law professor at the University of Idaho College of Law, offered another example of an appropriate citation to generative AI in her critique of Rule 18.3 posted to SSRN.
"If an author wished to highlight the unreliability of a generative AI tool by pointing to the fact that the tool crafted a pizza recipe that included glue as an ingredient to keep the cheese from falling off the slice, a citation, and preservation of the generative AI output, would be appropriate," she wrote.
Cullen O'Keefe, the director of research at the Institute for Law & AI, sees another problem. The rule differentiates between large language models and "AI-generated content," but the output of large language models is itself a type of AI-generated content.
In an article on the Substack blog Jural Networks, he suggested that one interpretation of the rule governing AI-generated content is that it applies to outputs such as images, audio recordings and sound.
He also sees inconsistencies about whether to use company names along with model names and when to require the date of the generation and the prompt used.
"I don't mean to be too harsh on the editors, whom I commend for tackling this issue head-on," O'Keefe wrote. "But this rule lacks the typical precision for which The Bluebook is (in)famous."