A faculty member at the University of Texas at El Paso was grading a composition during the spring 2023 semester and suspected that it was not the student’s work – until she got to one telling sentence.
The instructor, who taught an upper-level course with a strong writing component, believed the essay was prepared, at least in part, by ChatGPT, an artificial intelligence program that launched last November. With a few prompts, users of the free AI program can produce essays, research papers, computer code and more with relative ease.
To stymie ChatGPT, the lecturer directed her students to base one of their answers on how they related to the assigned readings. The answer from the student in question did not sound like a student’s “voice.” The final confirmation was the inclusion of something like “I don’t have a personal experience because I’m AI.”
The student, who earned a zero for his paper, acknowledged his offense and apologized to the instructor, who did not want to be named. The student is among those who tried to cut academic corners with ChatGPT. In most cases, these indiscretions were handled at the classroom level. More serious offenses were submitted to the university’s Office of Student Conduct and Conflict Resolution, or OSCCR.
“I’d like to see more suggestions or training on how to proactively address ChatGPT with my students rather than solely acting in the role of ‘catching’ and disciplining them,” the faculty member said.
To address such concerns, UTEP conducted a series of workshops late last spring to inform faculty about the pervasive use of ChatGPT and other forms of AI. UTEP’s Center for Faculty Leadership and Development organized the presentations to increase awareness and, where possible, to educate faculty on how to use AI effectively in the classroom and as an assessment tool. About 50 instructors from throughout the university attended the presentations.
Jeffrey Olimpo, director of the faculty leadership center, said the main concern workshop participants shared with him was students’ unethical use of ChatGPT and AI in general. His response was that AI is not going away.
“We came at it from an angle of, ‘You can’t put the toothpaste back in the tube,’” Olimpo said a few weeks after the last workshop.
The event’s presenters included representatives from OSCCR and the Provost’s Office. Olimpo recalled that the OSCCR official said that his office already had seen some potential ChatGPT cases.
Strategize, don’t demonize
The Office of Student Conduct and Conflict Resolution conducted 20 investigations into possible cases of academic dishonesty tied to the use of AI during the spring 2023 semester, according to the university. UTEP did not respond to a question about how those cases were resolved and said that OSCCR director Jovita Simón would not comment on this story.
While the university was aware of ChatGPT’s potential downsides, Olimpo said there was no reason to chase it down with torches and pitchforks.
“We try to strategize and not demonize,” he said.
Arthur Ramirez, a second-year UTEP doctoral student in finance, said he began to test ChatGPT soon after it launched to learn if it could help with his research. Initially, he was concerned about its inaccuracies, but he found it helpful with coding, especially with better prompts, and for understanding certain charts. He said the only instruction a professor gave him was to follow the university’s guidelines.
“He said there was no right or wrong way to use ChatGPT,” Ramirez said. “Just don’t abuse it.”
Responding to an El Paso Matters Instagram request for students to share their experiences, one UTEP student said that some of his professors encouraged students to use ChatGPT, while others warned them not to use it for plagiarism.
“I don’t see what the big deal is,” wrote the student who identified himself as “sergio.iii.”
Sergio.iii called the AI program an effective study and communication tool with the right prompts. He said it helped create outlines for papers, add focus to his PowerPoint presentations and often gave more understandable explanations to complicated topics.
“The students using ChatGPT unethically aren’t even being smart about it,” he wrote via Instagram. “Most people use it in a brain-dead way where they just copy and paste answers straight out of ChatGPT and they end up with responses that look identical to a dozen other students.” El Paso Matters reached out to the user, but he did not respond.
Leslie Waters, an assistant professor of history, did not offer ChatGPT instructions at the start of the spring 2023 semester. She believed the obscure primary source material from her 20th century European history course would be AI-proof. In one case she gave students copies of letters written by soldiers and their families during World War I and asked them to write essays based on the letters’ themes.
Three of her students submitted papers that focused generally on the war, but did not mention the letters or their themes. Additionally, the essays included ChatGPT red flags: grammatically correct sentences that lacked analysis and critical thinking. Each of those students earned low scores. Waters planned to send one of those cases to OSCCR, which she said uses software that can detect AI-generated material.
“It’s not easy (for me) to prove, but it’s extremely easy for me to detect,” she said of ChatGPT work.
Her plan for the fall 2023 semester is to talk to her students about the perils of using ChatGPT and to encourage them to stay on top of their coursework; in her experience, students cheat out of desperation. She will give multi-stage assignments that require students to submit papers at various points so she can track their progress.
Olimpo did not respond to several requests for the recommendations generated by his spring workshops, but he previously proposed that a faculty committee review and possibly update the university’s general course syllabus regarding the use of AI tools.
A June 2023 article in The Chronicle of Higher Education included the results of a survey of faculty about how they planned to work with ChatGPT this fall. Two of the more popular ideas were to alter assignments so that AI is less useful for completing them, and to incorporate AI into some work to help students understand its strengths and weaknesses.
As for El Paso Community College, its ChatGPT directive to students is to follow their professors’ instructions for assignments and the academic guidelines in the college’s Student Code of Conduct, said Keri Moe, associate vice president for External Relations Communication & Development.
“ChatGPT, like any technology available, must be used with academic integrity and in accordance with these guidelines,” Moe said.
Texas Tech University Health Sciences Center El Paso did not respond to a request for instructions on how its leaders want faculty and students to use ChatGPT.
While some faculty members want to use AI-detection tools such as Turnitin to catch cheaters, Sarah Elaine Eaton, an associate professor in the Werklund School of Education at the University of Calgary in Canada, advised them not to overreact.
During a May 16 virtual forum about “Academic Integrity and AI,” Eaton said that instructors should include a statement in their syllabus about the AI they plan to use to help with their assessments and inform the students about the limitations of those programs.
“It’s not about trying to use technology in order to catch students,” Eaton said during the presentation. “Nobody wins in an academic-integrity arms race. Deceptive assessment using tools and technologies without students’ knowledge ahead of time is not modeling integrity.”
Greg Beam, an associate professor of practice in UTEP’s Department of Communication, said that he taught an asynchronous virtual course this summer and strongly suspected that some students submitted work done by chatbots. He posted a video on Blackboard in which he explained the right and wrong ways to use ChatGPT.
Beam told the students that those who admitted that they used the technology improperly would be allowed to redo the assignment with no penalty. Additionally, he told them that he would contact those who did not come forward to ask them follow-up questions about their submissions to verify that they understood the material.
The professor said about 10% of those students redid the assignment. He suspected a few other students, but their submissions lacked the tell-tale red flags. It made him wonder if some students had mastered ChatGPT enough to be undetectable.
“For the most part, at UTEP at least, I don’t think students want to cheat – they want to learn,” Beam said. “And they’re just as concerned about the potential ramifications of these new technologies as the rest of us are.”