Is AI technology making us a dumber species? Experts weigh in

ChatGPT is a natural language-processing tool driven by artificial intelligence (AI) technology. File image.

Published Jul 1, 2023

Johannesburg - It has taken the world by storm for its ability to seamlessly assist any individual with a number of tasks and to simulate human conversation.

But could OpenAI’s new artificial intelligence tool, ChatGPT, do more harm than good to a person’s critical thinking skills?

Technology experts in the country think it’s possible.

While ChatGPT is slowly revolutionising the way we go about our tasks, making life vastly easier for us, they believe this latest AI tool could be making our species dumber by the day if used the wrong way.

ChatGPT is a natural language-processing tool driven by artificial intelligence (AI) technology that allows an individual to have human-like conversations, and much more, with the chatbot.

The language model can answer questions and assist with tasks such as composing emails, essays and code.

It can also describe art in great detail, create AI art prompts, and even have philosophical conversations with you.
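A task such as composing an email can be made concrete with a short script. What follows is a minimal sketch of asking ChatGPT for a draft through OpenAI’s API, assuming the openai Python package (the v0.x interface current at the time of publication) and an API key in the OPENAI_API_KEY environment variable; the model name and prompt are illustrative.

import openai  # pip install openai; reads the OPENAI_API_KEY environment variable

# Ask the model to draft an email in a single request.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{
        "role": "user",
        "content": "Draft a short, polite email rescheduling a meeting to Friday.",
    }],
)

print(response["choices"][0]["message"]["content"])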

It has become so popular that within two months of its launch, ChatGPT had over 100 million active users.

Anna Collard, SVP content strategy and evangelist at KnowBe4 Africa, says tools such as ChatGPT could very well have an impact on children developing their critical thinking skills.

“AI tools are here to stay, so instead of seeing them as a threat, we should see them as an asset, but one that supports cognitive growth and memory function rather than replacing it,” says Collard.

“Research by George Miller in 1956 found the average person can only keep around seven items in their working memory. Miller’s law of seven plus or minus two means that as many as nine or as few as five items are the limit of a human’s processing abilities.

“Today, that number is thought to have gone down to four. This research, along with other academic papers and analyses, points to reduced memory due to an over-reliance on technology, also called the Google effect.

“This is a concern, one that has grown louder over the past year as educators and researchers have pondered the impact of technology, and now AI, on cognitive behaviour and memory retention. However, it is balanced by research pointing out that human beings have actually been outsourcing their memory to various materials and solutions for centuries.

“Paper, parchment, papyrus and wood are some prime examples. Modern technology is no different. It can be a tool to bolster memory and make it far easier for humans to manage lives that are deluged by information, noise and digital clutter.

“The research goes in both directions, suggesting that technology is both an enabler and an inhibitor of human memory,” says Collard.

“This points to the fact that the impact lies not in using it, but in how it is used or, in the case of ChatGPT, abused. ChatGPT can be an immensely useful tool that supports students in their research and studies, but if it becomes their sole source of information and does all the writing for them, that is where the problems start.”

ChatGPT has been banned in schools across the English-speaking world because of its ability to produce essays and other schoolwork in a flash, with no effort from pupils.

Collard says she hopes South African schools don’t take the same step, but rather teach students how to use it in a practical way.

“Don’t ban it, rather teach students how to use it within practical guidelines and policies that help them to enhance their understanding of AI and this type of tool. This will enhance their own critical thinking skills by asking them to question the sources, content, truthfulness and accuracy of the content that the platform serves up to them, and it will turn the threat into an opportunity.”

She added that, while often reliable, ChatGPT uses machine learning to infer information, which can introduce inaccuracies.

A response by ChatGPT, an AI chatbot developed by OpenAI, is seen on its website in this illustration picture taken February 9, 2023. Picture: REUTERS/Florence Lo/Illustration

“If you ask ChatGPT what happens if you break a mirror, it replies ‘You will have seven years of bad luck’.

“This is not a fact; it is based on superstition. If users do not constantly check the factual accuracy of ChatGPT, they run the risk of sharing fake news, inaccurate information, and even conspiracy theories.”

Jelle Wieringa, security awareness advocate for EMEA at KnowBe4, said ChatGPT could prove to be very helpful if used in the right way.

“ChatGPT is an application developed by OpenAI. It is designed to answer text-based questions with human-like answers and engage in chat-like conversations with the user. The primary function of ChatGPT is to understand and generate natural language text. In other words, it can comprehend the questions put to it, respond with explanations and suggestions, and come up with ideas. It can also engage in conversation with the user, guiding them through a topic.

“ChatGPT uses deep learning, a form of machine learning based on artificial neural networks; these are all aspects of artificial intelligence. ChatGPT understands language. The model has been trained on vast amounts of data from the internet, which allows it to comprehend and interpret a wide range of language inputs, including the meaning and context behind questions.

“This context awareness allows it to keep track of the history of a conversation between ChatGPT and the user. This makes possible a chat-like exchange in which a user can expand upon their prior requests and questions. It allows a conversation to evolve and extend, much as a conversation between two humans would.

“Because of its understanding of language and the context behind it, ChatGPT is able to adapt to the user’s input style, essentially learning how best to answer a question in a style that suits the user. Because of the vast data set used to train its underlying model, ChatGPT has access to extensive knowledge and insights, which allow it to guide the user by giving tips, providing recommendations, and assisting with certain tasks.”
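The context awareness Wieringa describes comes down to resending the conversation so far with every request. Below is a minimal sketch of that pattern, again assuming the openai Python package (v0.x interface) and an OPENAI_API_KEY environment variable; the model name and system prompt are illustrative.

import openai  # reads the OPENAI_API_KEY environment variable

# The full message history is sent with every request; that is all the
# "memory" the chat has between turns.
history = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_input = input("You: ").strip()
    if not user_input:  # an empty line ends the chat
        break
    history.append({"role": "user", "content": user_input})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=history,       # the whole conversation so far
    )
    reply = response["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    print("ChatGPT:", reply)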

Wieringa says it’s important that individuals use the technology responsibly, as it comes with risks.

“I think ChatGPT indeed has value to anyone, including younger generations. But as with any technology, its use comes with responsibility. Following an AI tool like ChatGPT blindly is certainly not without risk. The data used to train the underlying model of ChatGPT was fed into it without being screened for bias or accuracy. This means that the replies that come out of it are not necessarily true.

“Using ChatGPT carries the responsibility of applying critical thinking to evaluate what an answer entails. That is a skill that can be very difficult for young people to apply, given that their own knowledge level often does not equip them to judge or weigh the answers ChatGPT gives.”

People could also become over-reliant on it, says Wieringa.

“People may think that it will do all of the thinking for them. But when looked at through the eyes of a cybercriminal, ChatGPT is an excellent tool to use in social engineering schemes. Because ChatGPT mimics human-like conversation, it could easily be used to manipulate targets into doing things they might not want to do.

“A criminal leveraging the engine behind ChatGPT to strike up an email, chat or social media conversation can make it look as though you are communicating with a real person, even though the truth might be very different. This opens up a variety of new attack vectors that cybercriminals can profit from.”

He says school pupils could become lazy and over-reliant on the AI tool.

“In a way, this works like a double-edged sword: students rely on ChatGPT-like tools to do all the hard work, forgoing their responsibility to think for themselves and never really considering the truth of the information provided to them.

“On the other hand, it forces the education and research institutes to innovate to combat this trend – something that is long overdue. As with any (disruptive) advancement, it is never a one-sided ordeal.”

ChatGPT functions like a virtual assistant. Supplied image.

Asked if human beings were setting themselves up to be ruled by robots eventually, Wieringa says: “The challenge with answering this question is the lack of clear and concrete guidelines to govern both the development and use of AI for now.

“Without rules and regulations, AI development and use will run rampant. It is in the very nature of humans to keep on evolving. Without boundaries in place, we will look for innovation in whatever way we can.

“Both for profit and, hopefully, the betterment of humankind. But if we are able to put those boundaries in place, in the form of ethical and moral frameworks, laws and legislation, then we will be able to benefit from AI without the worry of it overtaking and eventually ruling us. It is important to note that we need to create these boundaries at an organisational, nation-state and global level for them to actually succeed.”

While it could be considered a dangerous tool, Wieringa believes that ChatGPT shouldn’t be banned in South African schools.

“Banning will only force it into the shadows. The movement that this disruptive technology created cannot be stopped, only guided in the right direction. It is not about stopping it. That would kill advancement, and this would only hurt us in the end. It’s all about taking charge of it and making sure we develop and use it responsibly. This is where the real challenge surrounding AI lies.”