ChatGPT and Education: What are the Risks?

ChatGPT is a game-changer, and, as with anything that upends the status quo, it’s polarizing. This artificial intelligence chatbot uses natural language processing to generate text in ways no earlier technology could match. ChatGPT is a fascinating example of how far AI has come, and a sobering reminder of how quickly machines are advancing into territory once considered uniquely human. People are learning new ways to harness AI for amusement, curiosity, and — naturally — personal gain.

This is the context that academic institutions now find themselves struggling to navigate: In a January 2023 survey, 89% of students admitted to using ChatGPT for schoolwork, whether to write essays, build outlines for written assignments, or assist with take-home exams.

ChatGPT has the potential to be a powerful tool, but its existence poses serious questions and considerable consequences. Is it cheating to use ChatGPT? How can instructors monitor AI usage in the classroom? Perhaps most importantly, what are the broader implications for the future of learning?

The ethics of ChatGPT: Is it plagiarism?

To understand whether ChatGPT should be considered plagiarism, it’s necessary to examine both the intent of the platform and the intent of the user. 

To grossly oversimplify the technology, AI works by feeding massive amounts of existing information into a computer, which then searches for patterns and uses them to make educated guesses. If this sounds exactly like the learning process students undergo in schools, it’s because it is — again, grossly oversimplified. Supporters and critics have reached a general consensus that ChatGPT doesn’t knowingly plagiarize the information it’s been fed. A ChatGPT-generated response, therefore, isn’t plagiarism.
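To make that oversimplification a little more concrete, here is a minimal, purely illustrative Python sketch of the underlying idea: feed in text, count which words tend to follow which, then "guess" the next word from those counts. Real large language models learn far richer patterns from vastly more data, but the spirit is the same — patterns in, educated guesses out. The corpus and function names below are invented for illustration.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the "massive amounts of existing information"
# a real model is trained on. Everything here is illustrative only.
corpus = "students write essays and students write essays and teachers grade exams"
words = corpus.split()

# Step 1: feed the text in and count which word tends to follow which.
follow_counts = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

# Step 2: make an "educated guess" by picking the most frequent follower.
def guess_next(word):
    candidates = follow_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(guess_next("students"))  # -> "write", the pattern seen most often
print(guess_next("write"))     # -> "essays", the only continuation observed
```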

However, when a user prompts ChatGPT to generate original content, it is plagiarism if that user fails to attribute the AI bot or attempts to pass its output off as their own work. If a student has ChatGPT write an academic essay for them, or asks the chatbot to answer exam questions, they’re outsourcing their work to an unattributed third party. And, with ChatGPT now able to pass the bar exam with flying colors, educators — especially those in high-stakes fields such as medicine and law — are desperately looking for ways to protect academic integrity.

Ensuring academic integrity in the ChatGPT era

Academic dishonesty is nothing new. For decades, the International Center for Academic Integrity has studied the prevalence of plagiarism in education. Perhaps unsurprisingly, its statistics show a clear increase in academic dishonesty since it began its research. In its latest survey, conducted before the rollout of ChatGPT, the majority of college students and nearly all high school students already admitted to cheating. When it came to plagiarism specifically, 15% of undergraduates and 58% of high schoolers admitted to it.

OpenAI, ChatGPT’s developer, recognizes the potential for plagiarism. With ChatGPT, it’s easier for students to plagiarize and much harder for educators to catch it. OpenAI has committed to designing future iterations that make it easier to spot whether ChatGPT has been used to generate content. Until then, AI tools like ChatGPT are making it more difficult for educators to monitor and ensure academic integrity.

In an interview with the Observer, Thomas Lancaster, a computer scientist at Imperial College London who researches academic integrity and plagiarism, said that universities are “panicking”: 

“If all we have in front of us is a written document, it is incredibly tough to prove it has been written by a machine, because the standard of writing is often good. The use of English and quality of grammar is often better than from a student.”

It’s a growing challenge. How do educators detect AI usage if ChatGPT has mastered the art of natural language? If the solution is to enlist the help of computer software to uncover ChatGPT content, will all of academia use the same tool, and if not, how can educators ensure fairness?

Educational institutions are learning how to approach, navigate, and discipline AI usage. It’s a matter of both upholding existing academic standards and creating a sustainable, scalable approach to the use of technology in classrooms that can, for all intents and purposes, think for students.

The falsification of learning

Ethics and surveillance aside, educators have another problem: the falsification of learning.

Outsourced thinking

In an ideal world, students would use ChatGPT as a resource to help guide them through the learning process. Instead, the fear is that users will replace critical elements of the learning process with AI and miss out on important skill development as a result. Search engines have given users immediate access to human-written answers, but ChatGPT both generates the answer and provides reasoning, too. If users aren’t careful, their critical thinking skills could atrophy.

Misinformation

ChatGPT is a misinformation generator as much as it is a novel content creator. On OpenAI’s website, the ChatGPT creator admits that the technology is “still not fully reliable (it ‘hallucinates’ facts and makes reasoning errors).” Its experts warn users to exercise “great care [...] when using language model outputs, particularly in high-stakes contexts,” and OpenAI recommends “human review, grounding with additional context, or avoiding high-stakes uses altogether” to prevent factual errors from slipping through.
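What might that "human review" safeguard look like in practice? Below is a minimal, hypothetical Python sketch (not OpenAI’s API or any real product): generated text is treated only as a draft that a person must explicitly approve before it is used. The generate() function is a placeholder standing in for whatever chatbot or model call is actually in play.

```python
from typing import Optional

def generate(prompt: str) -> str:
    """Placeholder for a real chatbot or model call (hypothetical)."""
    return f"[model-generated draft answering: {prompt}]"

def generate_with_review(prompt: str) -> Optional[str]:
    """Only return model output after a human has read and approved it."""
    draft = generate(prompt)
    print("Model draft:\n" + draft)
    verdict = input("Approve this text for use? (y/n): ").strip().lower()
    if verdict == "y":
        return draft
    print("Draft rejected; nothing is published.")
    return None

if __name__ == "__main__":
    generate_with_review("Summarize the school's plagiarism policy.")
```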

One news outlet conducted an experiment to discover how susceptible ChatGPT is to generating misinformation. In January 2023, NewsGuard had ChatGPT respond to 100 leading prompts relating to false narratives, and ChatGPT advanced 80% of them. When GPT-4 was released in March, NewsGuard ran the experiment a second time and found that the newer model responded with false and misleading claims 100% of the time. The researchers concluded: “The results show that the chatbot — or a tool like it using the same underlying technology — could be used to spread misinformation at scale,” and that “the new ChatGPT has become more proficient not just in explaining complex information, but also in explaining false information — and in convincing others that it might be true.”

Preventing AI-assisted plagiarism

ChatGPT’s astonishing abilities (and its numerous limitations) pose dangers to academic institutions at a scale not seen before. Left unchecked, students could outsource their own learning, and their plagiarism would be much more difficult to detect. 

Educators face a significant challenge: Teach students how to properly coexist with ChatGPT and institute solutions to catch plagiarism. There are temporary fixes, too. For example, schools can invest in AI detection tools and block ChatGPT on their networks. In fact, 72% of college students believe that the chatbot should be banned from their school’s network.
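For the blocking piece, the core idea behind DNS-level filtering is simple, even though real products (DNSFilter included) do much more. The Python sketch below is a hypothetical illustration, not DNSFilter’s implementation: a school’s resolver checks each queried domain against a blocklist and refuses to resolve matches. The domain entries are illustrative.

```python
# Hypothetical illustration of DNS-level blocking, not DNSFilter's product:
# refuse to resolve domains (or their subdomains) that appear on a blocklist.
BLOCKED_DOMAINS = {"chat.openai.com", "chatgpt.com"}  # illustrative entries

def should_block(query_name: str) -> bool:
    """Return True if the queried name is a blocked domain or a subdomain of one."""
    name = query_name.rstrip(".").lower()
    return any(name == d or name.endswith("." + d) for d in BLOCKED_DOMAINS)

for query in ["chat.openai.com", "chatgpt.com", "openai.com", "example.edu"]:
    print(f"{query} -> {'blocked' if should_block(query) else 'allowed'}")
```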

Arguably, the greatest impact will come from candid, human-to-human dialogues. For that, it’s best not to ask ChatGPT for input.

Ready to start blocking AI like ChatGPT? Start a free trial of DNSFilter today.
