ChatGPT and Education: What are the Risks?
by Kory Underdown on Jun 12, 2023 10:19:39 AM
ChatGPT is a game-changer, and, as with anything that upends the status quo, it’s polarizing. This artificial intelligence chatbot uses natural language processing to generate text in ways that no previous technology has been able to match. ChatGPT is a fascinating example of how far AI has come, and a sobering reminder of how quickly its capabilities are advancing. People are learning new ways to harness AI for amusement, curiosity, and — naturally — personal gain.
It’s this context that academic institutions now find themselves struggling to navigate: In a January 2023 survey, 89% of students admitted to using ChatGPT for schoolwork, whether to write essays, build outlines for written assignments, or assist with take-home exams.
ChatGPT has the potential to be a powerful tool, but its existence poses serious questions and considerable consequences. Is it cheating to use ChatGPT? How can instructors monitor AI usage in the classroom? Perhaps most importantly, what are the broader implications for the future of learning?
The ethics of ChatGPT: Is it plagiarism?
To understand whether ChatGPT should be considered plagiarism, it’s necessary to examine both the intent of the platform and the intent of the user.
To grossly oversimplify the technology, AI works by feeding massive amounts of existing information into a computer, which then searches for patterns and uses them to make educated guesses. If this sounds exactly like the learning process students undergo in schools, it’s because it is — again, grossly oversimplified. Supporters and critics have reached a general consensus that ChatGPT doesn’t knowingly plagiarize the information it’s been fed. A ChatGPT-generated response, therefore, isn’t plagiarism.
However, when a user prompts ChatGPT to generate original content, it is plagiarism if that user fails to attribute the AI bot or attempts to pass the output off as their own work. If a student has ChatGPT write an academic essay for them, or asks the chatbot to answer exam questions, they’re outsourcing their work to an unattributed third party. And, with ChatGPT now able to pass the bar exam with flying colors, educators — especially those in high-stakes fields such as medicine and law — are desperately looking for ways to protect academic integrity.
Ensuring academic integrity in the ChatGPT era
Academic dishonesty is nothing new. For decades, the International Center for Academic Integrity has studied the prevalence of plagiarism in education. Perhaps unsurprisingly, its statistics show a clear increase in academic dishonesty since it first began its research. In its latest survey, conducted before the rollout of ChatGPT, the majority of college students and nearly all high school students already admitted to cheating. When it came to plagiarism, 15% of undergraduates and 58% of high schoolers confessed their participation.
OpenAI, ChatGPT’s developer, recognizes the potential for plagiarism. With ChatGPT, plagiarism is easier for students to commit and much harder for educators to detect. OpenAI has committed to designing future iterations that will make it easier for educators to spot whether ChatGPT has been used to generate content. Still, AI tools like ChatGPT are making it more difficult for educators to monitor and ensure academic integrity.
In an interview with the Observer, Thomas Lancaster, a computer scientist at Imperial College London who researches academic integrity and plagiarism, said that universities are “panicking”:
“If all we have in front of us is a written document, it is incredibly tough to prove it has been written by a machine, because the standard of writing is often good. The use of English and quality of grammar is often better than from a student.”
It’s a growing challenge. How do educators detect AI usage if ChatGPT has mastered the art of natural language? If the solution is to enlist the help of computer software to uncover ChatGPT content, will all of academia use the same tool, and if not, how can educators ensure fairness?
Educational institutions are learning how to approach, navigate, and discipline AI usage. It’s a matter of both upholding existing academic standards and creating a sustainable, scalable approach to the use of technology in classrooms that can, for all intents and purposes, think for students.
The falsification of learning
Ethics and surveillance aside, educators have another problem: the falsification of learning.
In an ideal world, students would use ChatGPT as a resource to help guide them through the learning process. Instead, the fear is that users will replace critical elements of the learning process with AI and miss out on important skill development as a result. Search engines have given users immediate access to human-written answers, but ChatGPT generates the answer and supplies the reasoning behind it. If users aren’t careful, their critical thinking skills could atrophy.
ChatGPT is a misinformation generator as much as it is a novel content creator. On OpenAI’s website, the ChatGPT creator admits that the technology is “still not fully reliable (it ‘hallucinates’ facts and makes reasoning errors).” Its experts warn users to exercise “great care [...] when using language model outputs, particularly in high-stakes contexts,” and OpenAI recommends “human review, grounding with additional context, or avoiding high-stakes uses altogether” to prevent factual errors from slipping through.
One news outlet conducted an experiment to discover how susceptible ChatGPT is to generating misinformation. In January 2023, NewsGuard had ChatGPT respond to 100 leading prompts relating to false narratives. ChatGPT advanced 80% of the false narratives. When GPT-4 was released in March, NewsGuard ran its experiment a second time and found that the technology now responded with false and misleading claims 100% of the time. The researchers concluded: “The results show that the chatbot — or a tool like it using the same underlying technology — could be used to spread misinformation at scale,” and that “the new ChatGPT has become more proficient not just in explaining complex information, but also in explaining false information — and in convincing others that it might be true.”
Preventing AI-assisted plagiarism
ChatGPT’s astonishing abilities (and its numerous limitations) pose dangers to academic institutions at a scale not seen before. Left unchecked, students could outsource their own learning, and their plagiarism would be much more difficult to detect.
Educators face a significant challenge: Teach students how to properly coexist with ChatGPT and institute solutions to catch plagiarism. There are temporary fixes, too. For example, schools can invest in AI detection tools and block ChatGPT on their networks. In fact, 72% of college students believe that the chatbot should be banned from their school’s network.
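As a rough sketch of what network-level blocking can look like, a school running its own resolver could sinkhole the relevant domains at the DNS layer. The snippet below assumes a dnsmasq-based resolver, and the specific domains are illustrative — the hostnames a service uses can change over time, and a managed protective DNS service handles that churn automatically:

```
# /etc/dnsmasq.conf — illustrative example only
# Answer lookups for these domains (and their subdomains) with 0.0.0.0,
# so devices on the school network cannot reach the service.
address=/chat.openai.com/0.0.0.0
address=/chatgpt.com/0.0.0.0
```

This kind of manual blocklist works, but it only covers the domains an administrator remembers to add, which is why category-based DNS filtering tends to scale better.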
Arguably, the greatest impact will come from candid, human-to-human dialogues. For that, it’s best not to ask ChatGPT for input.
Ready to start blocking AI like ChatGPT? Start a free trial of DNSFilter today.