PARIS — Elon Musk’s AI chatbot Grok is facing widespread criticism after it falsely identified a haunting photo of a starving Palestinian girl in Gaza as an image taken in Yemen nearly seven years earlier, a mistake that prompted accusations of spreading misinformation and exposed the risks of relying on AI tools for image verification.
The image in question, captured on August 2, 2025, by AFP photojournalist Omar Al-Qattaa, depicts nine-year-old Mariam Dawwas — severely emaciated and cradled by her mother Modallala in Gaza City. The photo quickly spread across social media, drawing attention to the deepening humanitarian crisis caused by Israel’s blockade and a looming famine in the Palestinian enclave.
But Grok, developed by Musk’s xAI, incorrectly claimed the girl was Amal Hussain, a Yemeni child who died in 2018 from malnutrition — an image that was published by The New York Times at the time. Despite corrections and challenges from users, Grok repeated the error, saying: "I do not spread fake news; I base my answers on verified sources."
AI Missteps Amplify Misinformation
The chatbot’s misidentification sparked backlash, particularly after French MP Aymeric Caron, a left-wing pro-Palestinian lawmaker, shared the Gaza photo — and was then accused of spreading disinformation based on Grok’s erroneous identification.
This incident highlights the growing concern over the limitations and biases of generative AI models, especially as more users turn to them for fact-checking and media analysis.
According to Louis de Diesbach, a technological ethics researcher and author of Hello ChatGPT, Grok and similar chatbots operate as "black boxes" — making it nearly impossible to trace how or why they generate certain answers.
“They are not designed to tell the truth,” said Diesbach. “They are made to generate content — whether true or false.”
He added that each AI model reflects the biases of its training data and creators, with Grok showing "highly pronounced biases" aligned with the ideological stance of Elon Musk, a known critic of mainstream narratives and a supporter of U.S. right-wing politics.
Grok Not Alone in Getting It Wrong
Grok isn’t the only AI tool to misidentify Mariam Dawwas.
AFP also tested the image on Le Chat, a chatbot by French AI startup Mistral AI, which is partially trained on AFP content. Le Chat also falsely labeled the Gaza photo as an image from Yemen taken in 2016.
This mislabeling added fuel to allegations that French newspaper Libération had manipulated images, when in fact the fault lay with the AI-generated response, not with human journalists.
The Ethics of Image Verification and AI Limitations
Experts now warn that chatbots should never be relied upon as tools for fact-checking or verifying media. Diesbach explained that even if users correct a model’s answer, its underlying training data and internal alignment logic do not change, which means the same incorrect answers can persist.
“Just because you explain that the answer is wrong doesn’t mean it will give a different one,” he said.
According to Diesbach, AI should be regarded more like a "friendly pathological liar" — one that may tell the truth, but never guarantees it.
Gaza Humanitarian Crisis Continues
The photo of Mariam Dawwas is not just a symbol of AI missteps — it is a chilling representation of the ongoing humanitarian catastrophe in Gaza.
Before Israel’s military campaign began in October 2023, Mariam weighed 25 kilograms, her mother told AFP. By August 2025, she weighed just nine, surviving on milk that is “not always available.”
Israel’s siege has killed more than 61,100 Palestinians and injured over 151,400 since October 2023. A January 2025 study in The Lancet estimated that deaths are underreported by at least 41%, as many victims die from starvation, untreated injuries, or lack of healthcare access due to infrastructure collapse.