While grading essays for his world religions course last month, Antony Aumann, a professor of philosophy at Northern Michigan University, read what he said was easily “the best paper in the class.” It examined the morality of burqa bans with clear paragraphs, appropriate examples and rigorous arguments.
A red flag immediately went up.
Mr. Aumann confronted his student over whether he had written the essay himself. The student confessed to using ChatGPT, a chatbot that delivers information, explains concepts and generates ideas in simple sentences – and that, in this case, had drafted the paper.
Alarmed by his discovery, Mr. Aumann decided to transform essay writing for his courses this semester. He plans to require students to write first drafts in class, using browsers that monitor and restrict computer activity. In later drafts, students will have to explain each revision. Mr. Aumann, who may forgo essays in coming semesters, also plans to weave ChatGPT into lessons by asking students to evaluate the chatbot’s responses.
“What happens in class will no longer be, ‘Here are some questions — let’s talk about it between us humans,’” he said, but instead “it’s like, ‘What is this alien robot thinking, too?’”
Across the country, college professors like Mr. Aumann, department chairs and administrators are beginning to overhaul classrooms in response to ChatGPT, sparking a potentially massive shift in teaching and learning. Some professors are completely redesigning their courses, making changes that include more oral exams, more group work and handwritten assessments in place of typed ones.
The moves are part of a real-time struggle with a new wave of technology known as generative artificial intelligence. ChatGPT, which was released in November by the artificial intelligence lab OpenAI, is at the forefront of the shift. The chatbot generates eerily articulate and nuanced text in response to brief prompts, and people use it to write love letters, poetry, fanfiction – and their schoolwork.
That has rocked some middle and high schools, with teachers and administrators trying to discern if students are using the chatbot to do their schoolwork. Some public school systems, including in New York City and Seattle, have since banned the tool from school Wi-Fi networks and devices to prevent cheating, though students can easily find workarounds to access ChatGPT.
In higher education, colleges and universities have been reluctant to ban the AI tool because administrators doubt such a ban would be effective and they don’t want to encroach on academic freedom. Instead, the way people teach is changing.
“We’re trying to create a general policy that definitely supports the faculty member’s authority to lead a class,” rather than targeting specific methods of cheating, said Joe Glover, provost of the University of Florida. “This won’t be the last innovation we have to deal with.”
This is especially true as generative AI is still in its infancy. OpenAI is expected to release another tool soon, GPT-4, which is better at generating text than previous versions. Google has built LaMDA, a rival chatbot, and Microsoft is discussing a $10 billion investment in OpenAI. Silicon Valley startups, including Stability AI and Character.AI, are also working on generative AI tools.
An OpenAI spokeswoman said the lab acknowledged that its programs could be used to trick people and was developing technology to help people identify text generated by ChatGPT.
ChatGPT is now at the top of the agenda at many universities. Administrators are setting up task forces and holding university-wide discussions on how to respond to the tool, and much of the guidance has been to adapt to the technology.
At schools including George Washington University in Washington, DC, Rutgers University in New Brunswick, NJ, and Appalachian State University in Boone, NC, professors are phasing out open-book, take-home assignments, which became a dominant method of assessment during the pandemic but now look vulnerable to chatbots. They are instead opting for in-class assignments, handwritten papers, group work and oral exams.
Gone are prompts like “write five pages about this or that.” Some professors instead come up with questions they hope are too smart for chatbots and ask students to write about their own lives and current events.
Students “plagiarize this because the assignments can be plagiarized,” said Sid Dobrin, chair of the English department at the University of Florida.
Frederick Luis Aldama, the chair of humanities at the University of Texas at Austin, said he plans to teach newer or more niche texts about which ChatGPT may have less information, such as William Shakespeare’s early sonnets rather than “A Midsummer Night’s Dream.”
The chatbot could “motivate people who lean into canonical, primary texts to actually reach outside their comfort zone for things that aren’t online,” he said.
In case those changes fail to deter plagiarism, Mr. Aldama and other professors said they plan to set stricter standards for what they expect from students and how they grade. It will no longer be enough for an essay to have just a thesis, an introduction, supporting paragraphs and a conclusion.
“We need to up our game,” Mr. Aldama said. “The imagination, creativity and innovation of analysis that we usually think of as an A paper should seep into the B-range papers.”
Universities are also striving to educate students about the new AI tools. The University at Buffalo in New York and Furman University in Greenville, SC, said they plan to embed a discussion of AI tools into required courses that teach entering or freshman students about concepts such as academic integrity.
“We need to add a scenario to this so students can see a concrete example,” said Kelly Ahuna, who leads the University at Buffalo’s Office of Academic Integrity. “We want to prevent things from happening instead of catching them when they happen.”
Other universities are trying to draw boundaries for AI. Washington University in St. Louis and the University of Vermont in Burlington are drafting revisions to their academic integrity policies so that their plagiarism definitions include generative AI.
John Dyer, vice president for enrollment services and educational technologies at Dallas Theological Seminary, said the language in his seminary’s honor code felt “a little archaic anyway.” He plans to update the plagiarism definition with: “using text written by a generation system as your own (e.g. entering a prompt into an artificial intelligence tool and using the output in a paper).”
The misuse of AI tools most likely won’t end, so some professors and universities said they plan to use detectors to stamp out that activity. Turnitin, the plagiarism detection service, said it would add more features for identifying AI, including ChatGPT, this year.
More than 6,000 educators from Harvard University, Yale University, the University of Rhode Island and others have also signed up to use GPTZero, a program that promises to quickly detect AI-generated text, said Edward Tian, its creator and a senior at Princeton University.
Some students see value in embracing AI tools for learning. Lizzie Shackney, 27, a law and design student at the University of Pennsylvania, has started using ChatGPT to brainstorm papers and debug coding problems.
“There are disciplines that want you to share and don’t want you to spin your wheels,” she said, describing her computer science and statistics classes. “The place my brain is useful is understanding what the code means.”
But she has reservations. ChatGPT, Ms. Shackney said, sometimes misinterprets ideas and miscites sources. The University of Pennsylvania also hasn’t issued any rules about the tool, so she doesn’t want to rely on it in case the school bans it or considers it cheating, she said.
Other students have no such scruples, sharing on forums like Reddit that they have submitted assignments written and solved by ChatGPT – and sometimes for fellow students as well. On TikTok, the hashtag #chatgpt has more than 578 million views, with people sharing videos of the tool writing papers and solving coding problems.
A video shows a student copying and pasting a multiple-choice exam into the tool with the caption, “I don’t know about you guys, but I’m just letting Chat GPT take my exam. Have fun studying.”