
ChatGPT Has Colleges in Emergency Mode to Shield Academic Integrity

Colleges around the country have been holding emergency meetings of their honor code councils or other committees that govern student cheating.

The reason: a whole new kind of cheating that is suddenly possible, thanks to a new AI tool called ChatGPT. The technology, which emerged just a couple of months ago, can answer just about any question you type into it, and can adapt those answers into a different style or tone on command. The result is it generates text that sounds like a person wrote it.

As we explored in an episode of the EdSurge Podcast a couple weeks ago, students around the country at schools and colleges have figured out that they can easily ask ChatGPT to do their homework for them. After all, it’s tailor-made to craft the kinds of essays that instructors ask for.

So professors have been quick to respond.

At Texas State University, for instance, professors across the campus began emailing the honor code council with cries for help.

“So many professors right now are struggling with burnout and disengagement and so many other things already, that even those that embrace paradigm shifts are at minimum sighing—ugh, this is another thing for me to pay attention to,” says Rachel Davenport, a senior lecturer in biology at the university who serves as vice chair of the honor code council.

That’s among the professors open to change, she notes: “The other prevailing mood is terror, thinking ‘This throws everything that I do into chaos. How am I supposed to catch it?’”

On this week’s EdSurge Podcast, we bring you part two of our exploration of what ChatGPT means for teaching. Our focus is on what college honor code councils are doing to respond.

Listen to the episode on Apple Podcasts, Overcast, Spotify, Stitcher or wherever you get your podcasts, or use the player on this page. Or read a partial transcript below, condensed and edited for clarity.

To get a national perspective, EdSurge recently connected with Derek Newton, a journalist who runs a weekly Substack newsletter called The Cheat Sheet, about academic integrity and cheating.

“It’s been a very loud alarm bell for people in teaching and learning … at all levels,” he said.

In the past, new approaches to cheating spread slowly, often in secret in dark corners of the internet. With ChatGPT, adoption has become widespread in just a few months.

“I counted I think six separate columns in The New York Times on ChatGPT,” Newton said. “That level of visibility is basically unprecedented for everything except war.”

Back at Texas State, Rachel Davenport said that one thing she did recently to get up to speed was to try both ChatGPT and GPTZero, a tool designed to detect bot-written text. Davenport is a trained scientist, so her impulse was to run her own experiment.

“I did run nine submissions through GPTZero,” she says. “Six of them were human and three of them I had ChatGPT generate. Of those nine, seven of them were correctly identified [by GPTZero]. Of the two that weren’t correctly identified, they weren’t incorrectly identified either. It just said they needed more information. And one of them was by a student and the other was by ChatGPT.”
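Her tallies can be laid out as a quick sanity check. The article gives the totals (six human and three ChatGPT submissions, seven correct verdicts, two inconclusive, one from each group), from which the per-group breakdown follows; the simple three-way labels below are a toy simplification of GPTZero's actual scoring:

```python
# Tally of the nine-submission experiment described above.
# Labels are a simplification: GPTZero returns probability scores,
# not a flat human/chatgpt/inconclusive verdict.
from collections import Counter

# (true_author, detector_verdict) pairs: six human, three ChatGPT;
# one of each group came back "needs more information" (inconclusive),
# so 5 human and 2 ChatGPT submissions were correctly identified.
results = (
    [("human", "human")] * 5
    + [("human", "inconclusive")]
    + [("chatgpt", "chatgpt")] * 2
    + [("chatgpt", "inconclusive")]
)

verdicts = Counter(verdict for _, verdict in results)
correct = sum(1 for truth, verdict in results if truth == verdict)
inconclusive = verdicts["inconclusive"]

print(f"correct: {correct}/9, inconclusive: {inconclusive}/9")
```

Note that neither inconclusive result counts as a miss: as Davenport says, those two submissions weren't incorrectly identified, the tool simply declined to call them.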

On Monday, the honor council at Texas State sent out a letter about ChatGPT to every faculty member. The subject line is: “Artificial Intelligence (ChatGPT) and the University Honor Code Policy.”

Here’s how it starts:

“As we begin the second week of the spring 2023 semester, we would like to briefly mention the developing topic of artificial intelligence (AI) and potential Honor Code implications that may arise if used by students in preparation of course deliverables submitted for academic credit. Our institution, teaching and evaluation methods, and follow-on industry rely on the use of computers to assist with common work tasks every day. However, when used in lieu of individual thought, creation, and synthesis of knowledge by falsely submitting a paper written (all or in part) as one’s own original work, an academic integrity violation results.”

It goes on to remind faculty of the rules and some basics of the honor code, and it points to some resources professors can check out to learn more about ChatGPT.

It turns out there are deeper questions to consider about ChatGPT and the ability of AI to generate human-sounding language, because we may be at a big turning point in our broader use of technology, one where people routinely work with AI to get things done.

That came up the other day when I was talking to Simon McCallum, a professor who teaches video game design at Victoria University of Wellington in New Zealand.

He told me he is starting to use an AI tool with his students that can turn code written in one programming language into code in another. It’s called GitHub Copilot, and it’s kind of like ChatGPT but for writing code.

“I’ve been talking to industry programmers who are using the AI coders and advertisers who have been using AI to do copy for a long time,” said McCallum. “And if industry is using these tools, if we try and move back to pen and paper … and we try to force people not to use those tools, our assessments become less and less valid.”
