As people use AI to plagiarise and cheat, here are tools to tell AI-generated text and human text apart

Published Feb 4, 2023

Share

ChatGPT is, without a doubt, taking the world by storm. Whether it is being asked to write poems or used for a bit of research, the chatbot is fast becoming everyone’s go-to for entertainment.

But this artificial intelligence also has a less benign side. While it can be used for entertainment, reports have emerged that students and learners are using the tool to cheat and plagiarise.

According to an article in The Guardian, a judge has also used the tool to decide whether an autistic child’s insurance should cover all of the costs of his medical treatment. While the ruling itself was not the issue, the use of AI in a court ruling did not sit well with many people.

To fight the abuse of ChatGPT, new apps and tools have come to the rescue of teachers, lecturers and anyone else who might fall victim to the misuse of AI technology.

ChatGPT creator OpenAI has introduced a classifier to distinguish between text written by a human and text written by AIs from a variety of providers.

On its website, OpenAI says that while it is impossible to reliably detect all AI-written text, good classifiers can inform mitigations for false claims that AI-generated text was written by a human.

It listed the following as examples: running automated misinformation campaigns, using AI tools for academic dishonesty, and positioning an AI chatbot as a human.

GPTZero was built by Edward Tian specifically for educators. In a tweet he said: “There's so much chatgpt hype going around. is this and that written by AI? we as humans deserve to know!”

Tian added that what drove him to build the programme GPTZero was the alleged increase in AI plagiarism.
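Tian has said publicly that GPTZero scores text on “perplexity” and “burstiness” (how much sentence structure varies). As a rough illustration of the burstiness idea only, the toy sketch below measures variation in sentence length; the split pattern and the example texts are illustrative assumptions, not GPTZero’s actual method.

```python
# A toy "burstiness" heuristic in the spirit of detectors such as GPTZero.
# It measures only variation in sentence length -- a crude stand-in for the
# real statistical signals such tools compute, not GPTZero's actual method.
import re
from statistics import pstdev

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, in words.

    Human writing tends to mix short and long sentences; very uniform
    sentence lengths can be one weak signal of machine-generated text.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return pstdev(lengths)

# Illustrative example texts (assumptions for the demo):
uniform = "The cat sat down. The dog ran off. The bird flew away. The fish swam on."
varied = ("No. The committee, after weeks of hearings and two contested votes, "
          "finally approved the measure. Everyone went home.")

print(burstiness(uniform) < burstiness(varied))  # varied prose scores higher
```

A real detector would combine many such signals and compare them against statistics from a language model, rather than rely on any single crude measure.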

There are also a number of paid and free plagiarism checkers that can verify whether a text is original.

If the text turns out to be duplicate content, it was most likely not written by the person submitting it.
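One common way such checkers flag duplicate content is by comparing overlapping word n-grams between a submission and known sources. The minimal sketch below uses Jaccard similarity of word trigrams between two strings; real checkers query large document indexes, and the example texts here are illustrative assumptions.

```python
# Minimal sketch of duplicate-content detection: Jaccard similarity of
# overlapping word trigrams. Real plagiarism checkers compare a submission
# against large indexed corpora; this toy version compares two strings.
def trigrams(text: str) -> set:
    """Set of consecutive three-word sequences, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity of the two texts' word trigrams (0.0 to 1.0)."""
    ta, tb = trigrams(a), trigrams(b)
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

# Illustrative example texts (assumptions for the demo):
source = "Artificial intelligence is transforming how students research and write essays."
copied = "Artificial intelligence is transforming how students research and write essays."
fresh = "Learners increasingly turn to chatbots when drafting homework assignments."

print(similarity(source, copied))  # identical text scores 1.0
print(similarity(source, fresh))   # unrelated text scores 0.0
```

Lightly paraphrased copies would score between these extremes, which is why production checkers also use fuzzier matching than exact n-gram overlap.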

It is worth noting that none of these tools is perfect, and they may be ineffective against newer language models and short passages of text.

IOL