A Stanford University “disinformation expert” has been accused of using artificial intelligence (AI) to craft testimony that was later used by Minnesota Attorney General Keith Ellison in a politically charged case.
Jeff Hancock, a communications professor and founder of the acclaimed school’s Social Media Lab, provided expert testimony in a case involving a satirical conservative YouTuber named Christopher Kohls. The court case concerns Minnesota’s recent ban on political deepfakes, which the plaintiffs say is an attack on free speech.
Ellison presented testimony to the court from Hancock, who argues in favor of the law. Hancock is “known for his research on how people use deception with technology, from sending text messages and emails to detecting fake online reviews,” according to Stanford’s website.
But the plaintiff’s attorneys asked the Minnesota federal judge hearing the case to throw out the testimony, accusing Hancock of citing a false study.
“Professor Jeff Hancock’s statement cites a study that does not exist,” the lawyers argued in a recent 36-page memo. “No article with that title exists.”
The “study” was titled “The influence of deepfake videos on political attitudes and behavior” and was supposedly published in Information Technology and Policy Magazine. The lawyers’ November 16 filing points out that the publication is real but has never published a study by that name.
“The publication exists, but the cited pages belong to unrelated articles,” the lawyers argued. “The study was probably a ‘hallucination’ generated by a large AI language model like ChatGPT.”
“Plaintiffs do not know how this hallucination ended up in Hancock’s statement, but it casts doubt on the entire document, especially when much of the commentary contains no methodology or analytical logic whatsoever.”
The document also criticizes Ellison, arguing that “the conclusions on which Ellison relies most have no methodology behind them and consist entirely of expert opinions.”
“Hancock could have cited an actual study similar to the proposal in paragraph 21,” the memo states. “But the existence of a fictitious citation that Hancock (or his assistants) did not even bother to check calls into question the quality and veracity of the entire statement.”
The memo also reinforces the claim that the citation is false, pointing to the multiple searches the attorneys conducted to try to locate the study.
“The title of the alleged article, and even a fragment of it, does not appear anywhere on the Internet indexed by Google and Bing, the most used search engines,” the document states. “Searching Google Scholar, a search engine specializing in academic articles and patent publications, does not reveal any articles matching the description of the citation written by ‘Hwang’ [the purported author] that includes the term ‘deepfake.’”
“Perhaps this was simply a copy-and-paste error? It is not,” the document later states flatly. “The article does not exist.”
The lawyers concluded that, if part of the statement is fabricated, the whole of it is unreliable and should be excluded from the court’s consideration.
“Professor Hancock’s statement should be excluded in its entirety because at least part of it is based on fabricated material likely generated by an AI model, which calls into question its conclusory claims,” the document concludes. “The court may investigate the source of the fabrication, and additional measures may be warranted.”
Fox News Digital has reached out to Ellison, Hancock and Stanford University for comment.