July is coming to an end, and a new school year is fast approaching. This is a great time to reflect on the past year and make plans for the one ahead. This month, we’re going to look into one of the past year’s most talked-about subjects in education: AI and its role in the classroom. Keep reading for an experiment with ChatGPT (I try to plagiarize one of my own assignments) and a roundup of debates and discussions about AI and ChatGPT taking place right now on HeinOnline.
An Experiment with Plagiarism
ChatGPT was released for public use in November 2022, six months after I taught my most recent course, so I have not yet had the opportunity to encounter it in the wild of the classroom. I experimented by feeding ChatGPT the first two paragraphs of the assignment sheet for an argumentative essay assignment I have used in the past with students in my introductory composition courses. Here is the input I fed ChatGPT.
For this assignment, you will develop an academic argument and support that argument using evidence from reliable, academic research. This argument might explore the cause of a trend or problem, argue for solutions to a problem, take a stance in an academic debate (NOT a moral or popular debate), or analyze a cultural phenomenon.
Whatever path you choose, you will need to do research using resources available to you through the UB Library website. Although there might be reliable sources available elsewhere, part of the goal of this assignment is to get you acclimated with using academic sources found through the library. In supporting your argument, you should use 4-8 reliable, academic sources. You should employ a variety of sources and types of evidence.
As you can see, the parameters of the essay are pretty broad, in terms of permissible topics—I like giving students the freedom to explore their own interests. ChatGPT came up with an acceptable, if not exactly original, topic for the essay: “the causes and consequences of income inequality in modern societies.” With some additional prompting and more specific instructions on my part, ChatGPT produced the outline of an essay with the title “The Impact of Income Inequality on Society and the Economy” (again, acceptable but uninspiring). The outline consisted of an introductory paragraph; the skeleton details of four body paragraphs, each centering around one of four sources ChatGPT selected; and a conclusion. Here is the introduction ChatGPT generated.
Income inequality has emerged as a pressing social and economic issue in modern societies. The gap between the rich and the poor has widened significantly, raising concerns about the implications for societal well-being and sustainable development. This essay aims to explore the causes and consequences of income inequality, using evidence from reliable academic sources to support the argument that addressing this issue is crucial for achieving a fair and equitable society.
To me, the most noteworthy thing about this content is how unremarkable it is. The introduction does not say much, is pretty nonspecific, and in places just more or less parrots the language from the assignment description. Interestingly enough, this paragraph does read like a reasonable facsimile of the sort of introduction an inexperienced first-year college student might write—which is to say, “okay, but definitely needs work.” My takeaway from this experiment is that ChatGPT is not very good at writing essays on its own, at least not with a single prompt. Even when given a series of increasingly specific prompts, all it would generate for me was a rough outline of an essay that could—with a great deal of further work on the part of the student—potentially meet the criteria for my assignment.
There are some tasks that ChatGPT performed relatively well in my experiment. I was impressed by its ability to find credible academic sources: three academic monographs and a whitepaper from Oxfam, all on the subject of income inequality. But even here, there were some issues. The first source ChatGPT produced was Thomas Piketty’s Capital in the Twenty-First Century, a nearly 700-page text on macroeconomics that would almost certainly be beyond the comprehension of a typical first-year college student. However, this is not a problem unique to AI. Plenty of students come upon research sources out of their depth through other means, such as library databases. This is why we have professors and research librarians—to help students contextualize the often-overwhelming documents they unearth in their research. I have no problem with students using AI to find research sources under the guidance of an instructor or librarian—as long as they then read the sources themselves, and produce their own conclusions and analysis. To me, this does not seem ethically different from using any number of the other search tools available. This is, of course, an ongoing debate amongst educators.
What ChatGPT Can and Cannot Do
Scholars and researchers have identified other serious shortcomings of ChatGPT as a replacement for human research and writing. As Khym Lam Fidler puts it in “ChatGPT — The Blurst of Times”:[1] “For the app, factual accuracy isn’t as important as stating something plausibly – as if by a knowledgeable human.” Simply put, ChatGPT is good at generating convincing language, but it does not understand the meaning or the context of the content it generates. In this sense, “artificial intelligence” is a bit of a misnomer for what most experts instead refer to as a “large language model.” The linguist Emily M. Bender goes so far as to refer to ChatGPT as a “stochastic parrot,” thanks to its propensity for mimicking human language while simultaneously fabricating information, seemingly at random.
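The “stochastic parrot” idea can be illustrated with a toy example. The sketch below is a hypothetical, drastically simplified bigram model in Python (everything in it — the corpus, the function names — is invented for illustration). Real systems like ChatGPT are neural networks with billions of parameters, but the core mechanic is the same in spirit: sample a statistically plausible next word, with no grasp of what any of the words mean.

```python
import random

# Toy "stochastic parrot": a bigram model that mimics the word
# statistics of its training text without any understanding.
# Illustrative only -- not how ChatGPT is actually built.

def train_bigrams(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = {}
    for prev, nxt in zip(words, words[1:]):
        model.setdefault(prev, []).append(nxt)
    return model

def generate(model, start, length, seed=0):
    """Emit up to `length` words by repeatedly sampling a plausible
    next word. The output sounds like the corpus, but the model has
    no idea what it is saying."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break  # dead end: no observed continuation
        out.append(random.choice(choices))
    return " ".join(out)

# A tiny invented corpus, echoing the essay topic from the experiment.
corpus = ("income inequality has emerged as a pressing issue "
          "income inequality raises concerns about societal well-being")
model = train_bigrams(corpus)
print(generate(model, "income", 6))
```

Every word the toy model emits is statistically “plausible” given the corpus, which is precisely why fluent output is no guarantee of accuracy.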
This characteristic of ChatGPT—its habit of presenting fabricated information of uncertain origin as straightforward fact—has commonly come to be referred to as “hallucinating.”[2] This presents a problem not merely for students, but for legal professionals as well. In June, a district judge in Manhattan fined two lawyers for submitting a court filing that contained citations to fake cases fabricated by ChatGPT. The law firm claimed to have used ChatGPT in good faith, assuming that it was a straightforward research tool that uncovered obscure legal cases: “In the face of what even the court acknowledged was an unprecedented situation, we made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth.”
This lack of information literacy from a practicing lawyer only highlights the importance of engaging critically with new technologies in the classroom. Banning ChatGPT, as some institutions have already done, is not a useful solution. If anything, it only heightens the mystique surrounding AI, making it seem like some sort of all-powerful entity. It’s not. It’s a tool, with its uses and its limitations. Kevin Roose suggests treating ChatGPT the way we treat calculators—“allowing it for some assignments, but not others”—and offers a number of ways that ChatGPT could be incorporated into student learning. As Roose writes:
Cherie Shields, a high school English teacher in Oregon, told me that she had recently assigned students in one of her classes to use ChatGPT to create outlines for their essays comparing and contrasting two 19th-century short stories that touch on themes of gender and mental health: “The Story of an Hour,” by Kate Chopin, and “The Yellow Wallpaper,” by Charlotte Perkins Gilman. Once the outlines were generated, her students put their laptops away and wrote their essays longhand.
The process, she said, had not only deepened students’ understanding of the stories. It had also taught them about interacting with A.I. models, and how to coax a helpful response out of one.
There are also a number of uses for ChatGPT “behind the scenes” of the classroom, as Tammy Pettinato Oltz describes in “ChatGPT, Professor of Law.”[3] As part of her own experiment with ChatGPT in a law school context, Oltz ran seven prompts through ChatGPT—four related to service duties and three related to teaching responsibilities. The results were mixed on the teaching side (ChatGPT generated false information when prompted to produce a handout on sexual harassment law), but the program “did very well on the service-related prompts.” Oltz concludes that “ChatGPT can provide law professors with near-finished products for routine tasks and a solid jumping-off point for those that are more complex.”
Further Reading
A great deal of the anxiety around ChatGPT revolves around its use as a tool to facilitate plagiarism. As with “traditional” plagiarism, one of the best ways to deal with ChatGPT and AI is to take a proactive approach: use scaffolding when designing assignments, and encourage regular, open conversation with students. For further reading, check out last October’s edition of HeinOnline in the Classroom, “Four Tips for Preventing Plagiarism.” For more information on how HeinOnline uses AI and natural language processing tools to enhance the platform’s search capabilities, check out this blog post.
This is far from the last word on AI! If anything, the debate over the technical, ethical, and societal implications of this developing technology is only getting started. You can follow these conversations as they unfold, in real time, through our flagship collection of interdisciplinary publications in the Law Journal Library. Another great resource is our Bar Journals Library, which contains complete coverage of nearly all state bar journals and a number of city bar journals, along with select publications from the American Bar Association. The debate about AI and ChatGPT in the legal professions in these journals is lively and ongoing.
Finally, if you’re interested in viewing my prompts and ChatGPT’s responses from the experiment I described in greater detail, you can view the conversation here. Enjoy!
HeinOnline Sources
1. Khym Lam Fidler, ChatGPT — The Blurst of Times, 31 Austl. L. Libr. 19 (2023). This article can be found in HeinOnline’s Law Journal Library.
2. Herbert B. Dixon Jr., My “Hallucinating” Experience with ChatGPT, 62 Judges J. 37 (2023). This article can be found in HeinOnline’s Law Journal Library.
3. Tammy Pettinato Oltz, ChatGPT, Professor of Law, 2023 U. Ill. J.L. Tech. & Pol’y 207 (2023). This article can be found in HeinOnline’s Law Journal Library.