Photo via ChatGPT Website Homepage
ChatGPT is the greatest threat to English departments since the widespread availability of SparkNotes. Even then, however, there was Turnitin, a plagiarism-detection software, so assurances could be made that students were, at the very least, learning the intricacies of paraphrasing what they were writing. Now, with ChatGPT, some are weeping farewells and singing songs in memoriam of high school English and any form of out-of-class writing assignment.
ChatGPT is a chatbot program from OpenAI that can produce a variety of content, from jokes to poetry to essays, from a prompt inputted by the user in a matter of seconds. It was introduced to the public in late November and by January had become the fastest-growing consumer application on record, garnering 100 million users in its first two months, a milestone that took TikTok nine months and Instagram two and a half years. However, the application has raised concerns about plagiarism and misinformation. Students, and writers who hate their jobs, have begun using the application to supplement material for assignments. It is also further propagating visions of a post-apocalyptic, AI-dominated future wherein even the most human aspects of being human, such as creating art, can be accomplished by AI with little to no difference in quality. Will AI, rather than TikTok micro-influencers, be writing the postmodern, post-ironic Substack reflections and dissections of niche social issues? It is one of the many questions at the forefront of discussions regarding AI.
However, some teachers, professors and educational experts are optimistic about how ChatGPT can be integrated into an educational environment. Ongoing conversations in the educational sphere have centered on an overemphasis on grades, final results and fact acquisition. Some educators hold that where ChatGPT can successfully complete an assignment for a student, the answer is to leave behind the assignment and, rather than ban the software, engage with it to enrich the learning process. As detailed in the MIT Technology Review, teachers like Emily Donahoe, a writing tutor and educational developer at the University of Mississippi, have utilized ChatGPT to focus not solely on a student's knowledge but on their ability to perform higher levels of cognition, such as evaluation. Where Donahoe would typically assign students a writing project constructing an argument on a given topic, the assignment was altered to have students generate an argument via ChatGPT, evaluate the effectiveness of that argument, and then rewrite it according to the outlined criticisms. There is hope in education that ChatGPT can be used to enrich students' learning and assist teachers in their planning process. This is not to say such a transition would be smooth: teachers are often overworked and underpaid, and a significant portion of schools lack the infrastructure to undertake such a drastic transformation in assessment style. With the appropriate resources, support and restrictions, ChatGPT and AI as a whole could be exceedingly beneficial, but poor infrastructure will give leeway to issues of academic dishonesty and cheating.
Another issue, and to some a more pressing one than the authenticity of Substack authors, is how ChatGPT can assist in disseminating misinformation. In the most immediate sense, the greatest concern at the intersection of misinformation and AI is individuals with poor intentions utilizing the service to aid in the creation and promotion of false narratives. Such a concern arises because the AI has no concrete understanding of the bounds that define fact and fiction. Its goal is to satisfy the user's requests, and in that way, AI has become something of the final boss of people pleasers. AI models are not limited to producing misinformation; they could be learning from it as well. It is not implausible that certain generative AI systems will be susceptible to bad actors “teaching” the models misinformation, a process known as “injection attacks.”
NewsGuard, an organization that tracks misinformation, found that ChatGPT does have safeguards in place to prevent such misinformation responses, but that it possesses only a limited capability to detect and correct falsehoods and to keep itself from creating stories rooted in fiction. For 80% of the prompts NewsGuard inputted, the chatbot produced disinformation related to COVID-19, vaccines and the insurrection at the U.S. Capitol on Jan. 6, 2021, among other topics. OpenAI has acknowledged ChatGPT's capacity to play an influential role in the dissemination of misinformation and disinformation. It has similarly recognized its software's propensity for fallacious and incorrect information as a result of its learning program, and it advises users to double-check the information ChatGPT provides them, the verbal equivalent of shrugging and mildly flicking one's wrist off to the side. The point is: items produced by AI should be scrutinized just as closely as publications with human authors and creators.
Alexis Stakem is a first-year Social Work major. AS996397@wcupa.edu.