[Image created using Ideogram]

In a conversation with a work colleague, we discussed analogies for how generative AI affects how we teach students to learn and how we assess student learning outcomes. What does it mean for scholarship?
Comparisons were offered with the introduction of the calculator, or Wikipedia, or talking to a well-read friend, or an actor, or the role Leonardo DiCaprio played in Catch Me If You Can. I pondered the new problem that after two and a half thousand years of literary theory, we still can't measure what it is to write as a human. We have no basis for collecting data on the distinction between human writing and writing by a bot that mimics human writing. The first attempts at detection were based on perplexity and burstiness, which are not normally regarded as literary terms. We can collect data to determine how to write in the style of Ernest Hemingway or Agatha Christie, measuring the occurrence of adverbs, exclamation marks, and distinctive words, as well as sentence length, for example. These are distant readings, the opposite of close readings (see Ben Blatt’s 2017 Nabokov’s Favorite Word Is Mauve for statistics on the craft of writing).
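The kind of distant reading Blatt describes can be sketched with a few simple counts. This is a minimal illustration, not any published method: it uses a crude "-ly" suffix heuristic to stand in for adverb detection (real stylometry would use proper part-of-speech tagging), and the function name and thresholds are my own inventions.

```python
import re

def distant_reading(text):
    """Crude stylometric profile: average sentence length, exclamation
    marks, and -ly words as a rough stand-in for adverbs."""
    # Split into sentences on terminal punctuation, dropping empties.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    # Pull out word tokens (letters and apostrophes only).
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "avg_sentence_length": len(words) / len(sentences),
        "exclamations_per_1000_words": 1000 * text.count("!") / len(words),
        "ly_words_per_1000_words":
            1000 * sum(w.lower().endswith("ly") for w in words) / len(words),
    }

sample = "He went out. The line held! It was a fine day and the sea was calm."
profile = distant_reading(sample)
```

Comparing such profiles across two corpora is all a "distant reading" of style amounts to: no sentence is ever interpreted, only counted.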
My colleague said that a producer creates an artifact which stands as the boundary between the producer and the responder. With a text, the artifact stands between the writer and the reader, and the reader assumes that a human made it. With generative AI, the human creator is unknown and that boundary is blurred. We think of this as something new, but I suspect we have been here before. I suggested that the blurred boundary already exists in the field of Classics. Generative AI is like the reception of texts from ancient literature: the passed-down, received meaning of texts that are lost, found, fragmented, translated with bias, and compiled with bias.
I said that my analogy would be the corpus of extant ancient literature and explained why: fragments lost and found, references from other texts, fluid oral stories transcribed into an artifact, translations retranslated, the whole provenance over millennia, all the accidents of history; yet the claim is that these texts survived due to their value and their meanings as they have been passed down through generations of scholarship. We have no autograph copies of any ancient texts. We don’t know if Homer was an actual person; we only know that hymns, epic poems, and a comedy were assigned to him, if he existed. As for the known writers in ancient history, there is no assurance that the person we identify as the author wrote the texts exactly as they have come down to us. They do have value, but as tools to think with (as does everything), and we need to check our assumptions.
This is what ChatGPT says about the reception of ancient classical texts: ‘Classical texts have been revered and preserved over centuries due to their cultural and historical significance. They often reflect the values, beliefs, and ideas of the time in which they were written. They have been studied, analyzed, and interpreted by scholars, and their influence has been acknowledged and passed down through generations.’
But that’s not exactly true.
The texts that have survived from ancient times did not necessarily survive because they were the best. If that were true, then the Roman graffiti that survives does so because it is excellent and important, rather than because it was written on stone walls rather than papyrus. Most texts that have survived did so through accidents of history. Socio-religious or geo-political factors may play their roles in specific times and places, but generally the survival of most, perhaps all, texts is purely by chance. We know about some texts that did not survive because they are recorded in texts that did survive, but, with fragments, it is difficult to identify what the text was: it could be a joke, satirical, critical, and the references are to people and characters we can only speculate about. There is no way to discern the significance. Ask Classicists what texts they most want to be found and responses might include: Homer’s lost comedy; Aristotle’s second book of Poetics, which covered comedy; Euclid’s book of logical fallacies; Ovid’s Medea; the plays that beat the plays of Aeschylus, Sophocles, and Euripides in competition; and Longinus’ works on Homer. And, of course, the whole Library of Alexandria, which, if it had survived, would have altered the course of human history. Sigh.
Texts have been lost because they were deliberately destroyed, but also due to fire, corruption, neglect, reuse as another text, or reuse as toilet paper. Texts have been found when used as packing paper, in sealed storage containers, among debris, or written over, coming to us as palimpsests. The story of a text’s survival is often an ongoing process spanning millennia. (See Josephine Balmer’s 2017 The Paths of Survival for a poetic exploration of the provenance of an ancient text.)
When you read ancient classical texts you need to become comfortable with ‘the rest is lost’. In good translations of fragments, the translator aims to replicate the source: not just translating the words and mimicking the sounds, wordplay, and other literary devices, but also preserving the gaps in the text, indicated by brackets or by the layout on the page.
In 2014, the US Professor of Classics Diane Rayor was about to publish her translations of the ‘complete’ works of Sappho when new fragments were found. She needed to re-evaluate what she thought she knew to incorporate these new pieces of the puzzle. She says that fragments offer intriguing possibilities, echoing broken conversations, trailing voices. The Australian Professor of Classics Marguerite Johnson agrees there is a pleasure in working with fragments: ‘I really don't want them ever to be completed, filled in, finalised. Their fragmentary condition makes them special, unique, and I really can't imagine Sappho actually composing anything complete.’ (This quote is from personal correspondence. You can check all other references in books or online academic journals written by experts, who became experts through scholarship.)
We need to challenge assumptions and check the facts and check the sources for the facts. What do we know and how do we know it? What is the provenance? Not just in the light of generative AI, but for everything. And when we talk about bias, we need to consider the audience, purpose, and context of the producer. What was their agenda? What were they aiming to do? What do they value? What do they disregard? This is more difficult when applied to texts generated by AI.
And when we observe that history can be rewritten, texts can be rewritten, and that news reports of current events can be inaccurate, biased, and just wrong, how do we check that we understand the events of history and the development of ideas? In my own lifetime I have witnessed how the music of the 1980s has been misrepresented: the music the nostalgia radio stations play as ‘the best’ of that decade is certainly not the music that was valued at the time, and much of it was actively despised in the share houses I lived in during the 1980s.
Generative AI writes out the commonly held ideas from all its sources. Those sources are not consistent, not authorised, not expert, and not challenged. It is like the passed-down, received meaning of texts that are lost, found, fragmented, translated, and compiled with bias. It mimics human writing, but it is not human.
Humans bring their whole selves, influenced by all the factors that make that person an individual. We share a collective humanity. We want to engage with scholarship; we want to pursue our intellectual curiosity; we want to use texts as tools to think with; we want to share our thinking and test our thinking. We want to engage as humans, and we want students to engage as humans. And we know that people are more valuable than bots for doing this thinking together. So long as we check our sources, this is what scholarship is.