The Hallucinations and Delusions of ChatGPT

Masrawy, Egypt, March 11

Whenever humanity takes a step toward something new, we are often confronted with exaggerated claims about its capabilities. This is certainly true of ChatGPT, the chatbot powered by artificial intelligence, developed by OpenAI and heavily backed by Microsoft. Launched in November 2022, ChatGPT can apparently pass as a real person, answering questions and even writing articles, content and jokes. However, we must be cautious in our judgments of ChatGPT and not simply rely on the enthusiastic claims of influencers. Instead, our assessments must be based on real-world experience.

My preliminary opinion on the use of ChatGPT in the workforce is that it could lead to the loss of millions of jobs, particularly those related to writing and creativity. Yet through two examples, I have found that ChatGPT can produce dramatically different results: it passed the theoretical exams for medical students in the US, while failing the sixth-grade mathematics and science exam in Singapore.

To further test its capabilities, I asked it to write a story about altruism, and what it produced was funny and sad at the same time. This raises serious questions about ChatGPT's ability to understand words with multiple meanings.

In a recent experiment, I asked ChatGPT about my own identity and was told that I am an Egyptian writer and poet, though I have never written poetry apart from a failed exercise in my teenage years. The bot's responses varied; some were correct, but many were misleading. Its final answer this morning was that I am one of the most renowned Arab comedians.

I then asked the bot to name the rulers of Egypt from Muhammad Ali to the present day, and it provided inaccurate information, even including Moammar Gadhafi on the list. In a second experiment, the same query elicited the answer that Egypt was ruled from 1894 to 1907 by a woman named Khadija Al-Adl. When I asked the bot who Khadija Al-Adl was, it apologized for its incorrect answer and instead suggested that she was the editor-in-chief of the Al-Masry Al-Youm newspaper, though this was also untrue. I searched for more information about Khadija Al-Adl, but all I could find was that she was an Egyptian writer and journalist.

This made me realize that when people ask how to do something, the answer they get is often simply the first result on a search engine, regardless of its accuracy. Google Vice President Prabhakar Raghavan has noted that artificial intelligence can sometimes produce "hallucinations," emphasizing the importance of not misleading the public. This warning applies equally to Google's competitors, including Microsoft, the principal investor behind ChatGPT.

When assessing the accuracy of ChatGPT's answers, we can observe significant shortcomings. Its answer concerning Khadija Al-Adl, for example, was unreliable and lacked any valid documentation. It can therefore be concluded that ChatGPT cannot be trusted. We cannot accept answers that are not properly inferred, that come from unreliable sources, or that are distorted by falsehoods and misleading information. Machines, no matter how sophisticated or advanced, cannot recognize or comprehend these faulty answers. Thus, the results generated by such algorithms will be neither objective nor fair, and suspicions of bias will persist.

Journalism is an art. It is not something that can be achieved through ChatGPT or any other tool.
High-level journalism cannot be accomplished with scattered and sometimes catastrophic answers that lack accuracy, verification or documentation. ChatGPT’s canned responses cannot replace the human touch. Journalists should not rely solely on the information provided by ChatGPT; instead, they should always verify the information themselves. —Alaa Al-Ghatrifi (translated by Asaf Zilberfarb)
