ChatGPT Doesn’t Have to Ruin College

Two of them were sprawled out on a long concrete bench in front of the main Haverford College library, one scribbling in a battered spiral-ring notebook, the other making annotations in the white margins of a novel. Three more sat on the ground beneath them, crisscross-applesauce, chatting about classes. A little hip, a little nerdy, a little tattooed; unmistakably English majors. The scene had the trappings of a campus-movie set piece: blue skies, green greens, kids both working and not working, at once anxious and carefree.

I said I was sorry to interrupt them, and they were kind enough to pretend that I hadn’t. I explained that I’m a writer, interested in how artificial intelligence is affecting higher education, particularly the humanities. When I asked whether they felt that ChatGPT-assisted cheating was common on campus, they looked at me like I had three heads. “I’m an English major,” one told me. “I want to write.” Another added: “Chat doesn’t write well anyway. It sucks.” A third chimed in, “What’s the point of being an English major if you don’t want to write?” They all murmured in agreement.

What’s the point, indeed? The conventional wisdom is that the American public has lost faith in the humanities—and lost both competence and interest in reading and writing, possibly heralding a post-literacy age. And since the emergence of ChatGPT, which can produce long-form responses to short prompts, universities have tried, rather unsuccessfully, to stamp out the use of what has become the ultimate piece of cheating technology, resulting in a mix of panic and resignation about the influence AI will have on education. But at Haverford, the story seemed different. Walking onto campus was like stepping into a time machine, and not only because I had graduated from the school a decade earlier. The tiny, historically Quaker college on Philadelphia’s Main Line still maintains its old honor code, and students still seem to follow it instead of letting a large language model do their thinking for them. For the most part, the students and professors I talked with seemed totally unfazed by this supposedly threatening new technology.

[Read: The best way to prevent cheating in college]

The two days I spent at Haverford and nearby Bryn Mawr College, in addition to interviews with people at other colleges with honor codes, left me convinced that the main question about AI in higher education has little to do with what kind of academic assignments the technology is or is not capable of replacing. The challenge posed by ChatGPT for American colleges and universities is not primarily technological but cultural and economic.

It is cultural because stemming the use of Chat—as nearly every student I interviewed referred to ChatGPT—requires an atmosphere in which a credible case is made, on a daily basis, that writing and reading have a value that transcends the vagaries of this or that particular assignment or résumé line item or career milestone. And it is economic because this cultural infrastructure isn’t free: Academic honor and intellectual curiosity do not spring from some inner well of rectitude we call “character,” or at least they do not spring only from that. Honor and curiosity can be nurtured, or crushed, by circumstance.

Rich private colleges with honor codes do not have a monopoly on academic integrity—millions of students and faculty at cash-strapped public universities around the country are also doing their part to keep the humanities alive in the face of generative AI. But at the wealthy schools that have managed to keep AI at bay, institutional resources play a central role in their success. The structures that make Haverford’s honor code function—readily available writing support, small classes, and comparatively unharried faculty—are likely not scalable in a higher-education landscape characterized by yawning inequalities, collapsing tenure-track employment, and the razing of public education at both the primary and secondary levels.

When OpenAI’s ChatGPT launched on November 30, 2022, colleges and universities were returning from Thanksgiving break. Professors were caught flat-footed as students quickly began using the generative-AI wonder app to cut corners on assignments, or to write them outright. Within a few weeks of the program’s release, ChatGPT was heralded as bringing about “the end of high-school English” and the death of the college essay. These early predictions were hyperbolic, but only just. As The Atlantic’s Ian Bogost recently argued, there has been effectively zero progress in stymying AI cheating in the years since. One professor summarized the views of many in a recent mega-viral X post: “I am no longer a teacher. I’m just a human plagiarism detector. I used to spend my grading time giving comments for improving writing skills. Now most of that time is just checking to see if a student wrote their own paper.”

While some institutions and faculty have bristled at the encroachment of AI, others have simply thrown in the towel, insisting that we need to treat large language models like “tools” to be “integrated” into the classroom.  

I’ve felt uneasy about the tacit assumption that ChatGPT plagiarism is inevitable, that it is human nature to seek technological shortcuts. In my experience as a student at Haverford and then a professor at a small liberal-arts college in Maine, most students genuinely do want to learn and generally aren’t eager to outsource their thinking and writing to a machine. Although I had my own worries about AI, I was also not sold on the idea that it’s impossible to foster a community in which students resist ChatGPT in favor of actually doing the work. I returned to Haverford last month to see whether my fragile optimism was warranted.

When I stopped a professor walking toward the college’s nature trail to ask if ChatGPT was an issue at Haverford, she appeared surprised by the question: “I’m probably not the right person to ask. That’s a question for students, isn’t it?” Several other faculty members I spoke with said they didn’t think much about ChatGPT and cheating, and repeated variations of the phrase I’m not the police.

Haverford’s academic climate is in part a product of its cultural and religious history. During my four years at the school, invocations of “Quaker values” were constant, emphasizing personal responsibility, humility, and trust in other members of the community. Discussing grades was taboo because it invited competition and distracted from the intrinsic value of learning.

The honor code is the most concrete expression of Haverford’s Quaker ethos. Students are trusted to take tests without proctors and even to bring exams back to their dorm rooms. Matthew Feliz, a fellow Haverford alum who is now a visiting art-history professor at Bryn Mawr—a school also governed by an honor code—put it this way: “The honor code is a kind of contract. And that contract gives students the benefit of the doubt. That’s the place we always start from: giving students the benefit of the doubt.”

[Read: The first year of AI college ends in ruin]

Darin Hayton, a historian of science at the college, seemed to embody this untroubled attitude. Reclining in his office chair, surrounded by warm wood and, for 270 degrees, well-loved books, he said of ChatGPT, “I just don’t give a shit about it.” He explained that his teaching philosophy is predicated on modeling the merits of a life of deep thinking, reading, and writing. “I try to show students the value of what historians do. I hope they’re interested, but if they’re not, that’s okay too.” He relies on creating an atmosphere in which students want to do their work, and at Haverford, he said, they mostly do. Hayton was friendly, animated, and radiated a kind of effortless intelligence. I found myself, quite literally, leaning forward when he spoke. It was not hard to believe that his students did the same.

“It seems to me that this anxiety in our profession over ChatGPT isn’t ultimately about cheating,” Kim Benston, a literary historian at Haverford and a former president of the college, told me. “It’s an existential anxiety that reflects a deeper concern about the future of the humanities,” he continued. Another humanities professor echoed these remarks, saying that he didn’t personally worry about ChatGPT but agreed that the professorial concern about AI was, at bottom, a fear of becoming irrelevant: “We are in the sentence-making business. And it looks like they don’t need us to make sentences anymore.”

I told Benston that I had struggled with whether to continue assigning traditional essays—and risk the possibility of students using ChatGPT—or resort to using in-class, pen-and-paper exams. I’d decided that literature classes without longer, take-home essays are not literature classes. He nodded. The impulse to surveil students, to view all course activity through a paranoid lens, and to resort to cheating-proof assignments was not only about the students or their work, he suggested. These measures were also about nervous humanities professors proving to themselves that they’re still necessary.

My conversations with students convinced me that Hayton, Benston, and their colleagues’ build-it-and-they-will-come sentiment, hopelessly naive though it may seem, was largely correct. Of the dozens of Haverford students I talked with, not a single one said they thought AI cheating was a substantial problem at the school. These interviews were so repetitive that they almost became boring.

The jock sporting bright bruises from some kind of contact sport? “Haverford students don’t really cheat.” The econ major in prepster shorts and a Jackson Hole T-shirt? “Students follow the honor code.” A bubbly first-year popping out of a dorm? “So far I haven’t heard of anyone using ChatGPT. At my high school it was everywhere!” More than a few students seemed put off by the very suggestion that a Haverfordian might cheat. “There is a lot of freedom here and a lot of student autonomy,” a sophomore psychology major told me. “This is a place where you could get away with it if you wanted to. And because of that, I think students are very careful not to abuse that freedom.” The closest I got to a dissenting voice was a contemplative senior who mused: “The honor code is definitely working for now. It may not be working two years from now as ChatGPT gets better. But for now there’s still a lot of trust between students and faculty.”

To be sure, despite that trust, Haverford does have occasional issues with ChatGPT. A student who serves on Haverford’s honor council, which is responsible for handling academic-integrity cases, told me, “There’s generally not too much cheating at Haverford, but it happens.” He said that the primary challenge is that “ChatGPT makes it easy to lie,” meaning the honor council struggles to definitively prove that a student who is suspected of cheating used AI. Still, both he and a fellow member of the council agreed that Haverford seems to have far fewer issues with LLM cheating than peer institutions. Only a single AI case came before the honor council over the past year.

In another sign that LLMs may be preoccupying some people at the college, one survey of the literature and language faculty found that most teachers in these fields banned AI outright, according to the librarian who circulated the survey. A number of professors also mentioned that a provost had recently sent out an email survey about AI use on campus. But in keeping with the general indifference to ChatGPT I encountered at Haverford, no one I talked with seemed to have paid much attention to the email.

Wandering over to Bryn Mawr in search of new perspectives, I found a similar story. A Classics professor I bumped into by a bus stop told me, “I try not to be suspicious of students. ChatGPT isn’t something I spend time worrying about. I think if they use ChatGPT, they’re robbing themselves of an opportunity.” When I smiled, perhaps a little too knowingly, he added: “Of course a professor would say that, but I think our students really believe that too.” Bryn Mawr students seemed to take the honor code every bit as seriously as that professor believed they would, perhaps none more passionately than a pair of transfer students I came across, posted up under one of the college’s gothic stone archways.

“The adherence to it to me has been shocking,” a senior who transferred from the University of Pittsburgh said of the honor code. “I can’t believe how many people don’t just cheat. It feels not that hard to [cheat] because there’s so much faith in students.” She explained her theory of why Bryn Mawr’s honor code hadn’t been challenged by ChatGPT: “Prior to the proliferation of AI it was already easy to cheat, and they didn’t, and so I think they continue not to.” Her friend, a transfer from another large state university, agreed. “I also think it’s a point of pride,” she observed. “People take pride in their work here, whereas students at my previous school were only there to get their degree and get out.”

The testimony of these transfer students most effectively made the case that schools with strong honor codes really are different. But the contrast the students pointed to—comparatively affordable public schools where AI cheating is ubiquitous, gilded private schools where it is not—also hinted at a reality that troubles whatever moralistic spin we might want to put on the apparent success of Haverford and Bryn Mawr. Positioning honor codes as a bulwark against academic misconduct in a post-AI world is too easy: You have to also acknowledge that schools like Haverford have dismantled—through the prodigious resources of the institution and its customers—many incentives to cheat.

It is one thing to eschew ChatGPT when your professors are available for office hours, and on-campus therapists can counsel you if you’re stressed out by an assignment, and tutors are ready to lend a hand if writer’s block strikes or confusion sets in, and one of your parents’ doctor friends is happy to write you an Adderall prescription if all else fails. It is another to eschew ChatGPT when you’re a single mother trying to squeeze in homework between shifts, or a non-native English speaker who has nowhere else to turn for a grammar check. Sarah Eaton, an expert on cheating and plagiarism at Canada’s University of Calgary, didn’t mince words: She called ChatGPT “a poor person’s tutor.” Indeed, several Haverford students mentioned that, although the honor code kept students from cheating, so too did the well-staffed writing center. “The writing center is more useful than ChatGPT anyway,” one said. “If I need help, I go there.”

But while these kinds of institutional resources matter, they’re also not the whole story. The decisive factor seems to be whether a university’s honor code is deeply woven into the fabric of campus life, or is little more than a policy slapped on a website. Tricia Bertram Gallant, an expert on cheating and a co-author of a forthcoming book on academic integrity, argues that honor codes are effective when they are “regularly made salient.” Two professors I spoke with at public universities that have strong honor codes emphasized this point. Thomas Crawford at Georgia Tech told me, “Honor codes are a two-way street—students are expected to be honest and produce their own work, but for the system to function, the faculty must trust those same students.” John Casteen, a former president and current English professor at the University of Virginia, said, “We don’t build suspicion into our educational model.” He acknowledged that there will always be some cheaters in any system, but in his experience UVA’s honor-code culture “keeps most students honest, most of the time.”

And if money and institutional resources are part of what makes honor codes work, recent developments at other schools also show that money can’t buy culture. Last spring, owing to increased cheating, Stanford’s governing bodies moved to end more than a century of unproctored exams, using what some called a “nuclear option” to override a student-government vote against the decision. A campus survey at Middlebury this year found that 65 percent of the students who responded said they’d broken the honor code, leading to a report that asserted, “The Honor Code has ceased to be a meaningful element of learning and living at Middlebury for most students.” An article by the school newspaper’s editorial board shared this assessment: “The Honor Code as it currently stands clearly does not effectively deter students from cheating. Nor does it inspire commitment to the ideals it is meant to represent such as integrity and trust.” Whether schools like Haverford can continue to resist these trends remains to be seen.

Last month, Fredric Jameson, arguably America’s preeminent literary critic, died. His interests spanned, as a lengthy New York Times obituary noted, architecture, German opera, and sci-fi. An alumnus of Haverford, he was perhaps the greatest reader and writer the school ever produced.

[Read: The decade in which everything was great but felt terrible]

If Jameson was a singular talent, he was also the product of a singular historical moment in American education. He came up at a time when funding for humanities research was robust, tenure-track employment was relatively available, and the humanities were broadly popular with students and the public. His first major work of criticism, Marxism and Form, was published in 1971, a year that marked the high point of the English major: 7.6 percent of all students graduating from four-year American colleges and universities majored in English. Half a century later, that number cratered to 2.8 percent, humanities research funding slowed, and tenure-line employment in the humanities all but imploded.

Our higher-education system may not be capable of producing or supporting Fredric Jamesons any longer, and in a sense it is hard to blame students for resorting to ChatGPT. Who is telling them that reading and writing matter? America’s universities all too often treat teaching history, philosophy, and literature as part-time jobs, reducing professors to the scholarly equivalent of Uber drivers in an academic gig economy. America’s politicians, who fund public education, seem to see the humanities as an economically unproductive diversion for hobbyists at best, a menace to society at worst.

Haverford is a place where old forms of life, with all their wonder, are preserved for those privileged enough to visit, persisting in the midst of a broader world from which those same forms of life are disappearing. This trend did not start with OpenAI in November 2022, but it is being accelerated by the advent of magic machines that automate—imperfectly, for now—both reading and writing.

At the end of my trip, before heading to the airport, I walked to the Wawa, a 15-minute trek familiar to any self-respecting Haverford student, in search of a convenience-store sub and a bad coffee. On my way, I passed by the duck pond. On an out-of-the-way bench overlooking the water feature, in the shadow of a tree well older than she was, a student was sitting, her brimming backpack on the grass. A curl of smoke issued from a cigarette, or something slightly stronger, and a thick book lay open on her lap, her face bent so close to the page that her nose was almost touching it. A finger of her free hand traced the words, line by line, as she read.