Comment Writer Aquila investigates the dangerous path that AI could take, citing a man’s conversation with a misinformed and accusatory ChatGPT as cause for concern

MA Shakespeare and Creative Writing student with an interest in history and world news.

 

What would you do if your reputation were threatened because an AI assistant spread misinformation about you?

 

When lies are spread about someone, another person is typically responsible – such as a bully, an ex-friend, or even a stranger online. However, recent revelations have shown that AI could also be the culprit.

 

ChatGPT is one of several AI platforms to have gained prominence recently, making headlines for everything from its surging popularity to its outages and the ethical debates over AI-generated artwork. It has also been criticised for cases of ‘AI hallucination’ – when an AI generates fabricated or factually incorrect information.

 

He was concerned that people would believe the AI response and feared this would lead to defamation of character

 

The case of Arve Hjalmar Holmen, however, was more severe. When he searched for his own name on ChatGPT, the bot falsely claimed that he had murdered two of his sons. Holmen filed a complaint and suggested that ChatGPT’s maker, OpenAI, be fined. He was concerned that people would believe the AI response and feared this would lead to defamation of character, ruining his reputation.

 

Had the error remained unchecked, the misinformation could have damaged Holmen’s standing with potential employers and harmed his future career prospects. Holmen’s case emphasises the personal impact of ‘AI hallucinations,’ but the phenomenon can also have an academic one. If students use AI to research an assignment, any ‘hallucinations’ it produces could compromise their academic performance and grades.

 

Two of my own examples of AI ‘hallucinations’.

 

The problem is that Copilot and similar generative AI tools are quick and convenient – a welcome addition for students who may be strapped for time. Admittedly, despite knowing the risks ‘hallucinations’ present, I often use Copilot instead of Wikipedia for initial research on an assignment, or as something to bounce ideas off. I have encountered my fair share of ‘hallucinations,’ some of them amusing, though in those instances I already knew Copilot had made the information up. In my opinion, generative AI is great if you need a quick answer, an idea of where to start your research, or something to run ideas by. With Britain shifting towards a Digital Age, students should be encouraged to use AI within reason and without violating academic guidelines. They should exercise caution, however.

 

In theory, students are expected to follow up with another source to ensure accuracy, but this may not always happen. With AI being marketed as cutting-edge technology, students unfamiliar with it may be led to believe that it is always correct. While most AI chatbots include disclaimers, these are typically displayed in small text and can be easily missed.

 

The disclaimer at the bottom of Copilot’s page – which could be considered small and difficult to read.

 

I approached two friends studying at the Open University and asked them about their experiences with AI. One shared their belief that ‘AI is a harbinger of the death of truth…and is adept at creating false realities’ (Patrick Duffy, The Open University, MA in Creative Writing). The other explained that they found it helpful for research, but only on subjects they were already well-informed about, since it often got facts wrong. They also revealed that it is ‘notorious for getting code wrong, so you always have to check’ (Anonymous, The Open University). Such errors and ‘false realities’ pose a significant risk when using AI, particularly for research.

 

Students unfamiliar with it may be led to believe that it is always correct

 

The rise of the Digital Age compounds the risks associated with AI ‘hallucinations.’ With modern technology now a significant presence in our everyday lives, it seems likely that AI – already integrated into many companies and platforms – will follow a similar path, and students’ exposure to it will only increase. To me, this is a potential benefit: AI can offer a personalised learning experience and, if prompted correctly, help students create things like study plans and visual aids. It can also provide immediate advice, which some may find helpful if they cannot reach a teacher at a given time. I do not wish to understate the risks of AI ‘hallucinations,’ however. AI can be helpful, but it must be used appropriately and cautiously. I would encourage students – and anyone who uses generative AI – not to take its output at face value, even when it sounds plausible.

 

Though AI gets many things right, it is not always correct. If in doubt, fact-check the response against another source. This especially applies when AI is used for academic purposes, in which case it is equally important to ensure that your teacher or institution permits AI for your intended use. If you are using it for initial research or to help with an assignment (provided this is allowed), always double-check the validity of its output unless you are certain it is correct. Vigilance is key to ensuring that AI informs rather than misleads. With the proper safeguards in place and caution exercised, I believe AI can enhance a student’s educational experience and serve as a tool to aid their learning.

 


 
