The first GPT-3 machine-generated law review article


BY:

Salvatore Nicci

Technology Analyst / Reporter

PROJECT COUNSEL MEDIA


23 December 2021 (Paris, France) – As part of his end-of-the-year review, our boss published an essay entitled “The Language Machines: a deep look at GPT-3, an AI that has no understanding of what it’s saying”, which we think provides a very comprehensive look at GPT-3 and at large language models, those extremely sophisticated text predictors.

Greg did not have space to include the story of the first machine-generated law review article, so we will tell it here. Benjamin Alarie and Arthur Cockfield (and GPT-3), writing in the legal journal Law, Technology and Humans:

We present here the first machine-generated law review article. Our self-interest motivates us to believe that knowledge workers who write complex articles drawing upon years of research and effort are safe from AI developments. However, how reasonable is it to persist in this belief given recent advances in AI research? With that topic in mind, we caused GPT-3, a state-of-the-art AI, to generate a paper that explains “why humans will always be better lawyers, drivers, CEOs, presidents, and law professors than artificial intelligence and robots can ever hope to be.” The resulting paper, with no edits apart from giving it a title and bolding the headings generated by GPT-3, is reproduced below. It is imperfect in a humorous way. Ironically, it is publishable “as-is” only because it is machine-generated. Nevertheless, the resulting paper is good enough to give us some pause for thought. Although GPT-3 is not up to the task of replacing law review authors currently, we are far less confident that GPT-5 or GPT-100 might not be up to the task in future.

In their comments ahead of the article, the humans note that:

the article is not suitable as a law journal article. It lacks citations to supporting sources and exhibits odd assumptions in some parts (as with its discussion of [the TV series] Friends). GPT-3 also demonstrates gender bias when it indicates, “For instance, most people instinctively know that a woman who is crying during an argument isn’t necessarily telling the truth.”

To be honest, it reads more like something written by someone not quite sober: they know the words, but not the order to put them in. Anyway, see for yourself. You can read the cover article and the actual law review article by clicking here.

And, yes, “Friends” was the first TV show to reference the internet. You can read the full story here. Yep, “Friends” really is deeply embedded in the web.