(The opinions expressed in this article are those of the author and do not necessarily reflect those of Al-Fanar Media).
The possibilities of artificial intelligence seem limitless. AI applications are countless, both in the sense of programs using machine learning and in the potential uses to which intelligent machines can be put. Even the frontier of death seems to be within the reach of computing. In an article titled “Should AI Speak for the Dying?”, data scientist Muhammad Aurangzeb Ahmad asks whether AI should be allowed to act as surrogates for incapacitated individuals, making medical decisions that align with their preferences when they cannot communicate.
Whether the matter is one of life and death or not, it is hard to assess the almost infinite promises made by AI companies to customers and investors. Even in the area of generative AI alone, I find it virtually impossible to keep up with the various kinds of software and their different (paid and unpaid) versions. Fortunately, I have whole classrooms full of students with whom to try out new releases of ChatGPT, Midjourney, Copilot, and the AI features built into Canva, Adobe Creative Cloud, et cetera. My institution, the Virginia Commonwealth University School of the Arts in Qatar (VCUarts Qatar), subscribes to some of these products. Studying in a wealthy and well-connected city like Doha, most of my students are also able to access and pay for at least one premium version out of pocket.
Because the latest tools are so alluring, it is all the more important for students to separate hype from reality and actual capability from science fiction. I thus constantly challenge my learners to test the limits of what artificial intelligence is able to generate. As they are acquiring new skills and thinking about possible careers during their time at university, it is crucial for them to know which activities are likely going to be taken over by machines and which ones are not. Is it still worth spending four years to earn a Bachelor of Fine Arts degree at an institution like mine?
Most of my students quickly learn that AI is excellent at creating drafts, but less adept at producing precise finished products. Image generators can create fantastic concept art, especially of the kind that is not meant to be photorealistic. Machines might therefore, indeed, take over some of the work previously done by concept artists, especially those specializing in speculative genres like science fiction and fantasy. However, in order to produce polished high-resolution pictures, especially those showing faces in a realistic manner, human eyes and hands are still required. There is thus a lot of hope on the job market for my graduates in graphic design or in painting and printmaking.
As English is a second language for many of my students in Qatar, they have also found AI to be very useful in generating or correcting texts that are free of grammar and spelling errors. However, it is less able to capture the unique voices of my students, who speak in a mixture of languages on a daily basis. I have not come across an AI tool that has impressed me with its ability to generate authentic dialogue in Qatari Arabic dialect, which includes words like mawater “cars” (from English “motor”) or chabra “market hall” (from Turkish köprü, “bridge”).
When it comes to generating images with writing in them, AI has been even less capable than at creating texts alone, requiring many manual edits by my students. So even a simple bilingual map, or a scene with shop signs in Arabic and English, can hardly be produced by AI alone.
In my history courses, my students also quickly discovered the political limits built into even the most powerful apps. When they prompted ChatGPT about the conflict in Israel and Palestine, they tended to receive answers describing Hamas as a “terrorist” organization without any indication that this label is disputed. Other students asked ChatGPT to create visuals for an essay based on Brian E. Crim’s book, “Planet Auschwitz: Holocaust Representation in Science Fiction and Horror Film and Television” (Rutgers University Press, 2020). ChatGPT refused to do so, arguing that the prompt violated its community standards.
Students would not share their experiences in trying to generate politically sensitive content unless they felt safe to do so. In all my courses, I would therefore try to avoid making accusations of plagiarism or passing other negative judgements on their experiments with artificial intelligence software. I always aim to have my learners be as honest as possible in reporting how they used the tools on their devices. The old virtue of honesty thus remains prized in an age of AI. Most proprietary software will not reveal its inner workings, including core algorithms, to its customers. It might not even admit when it has made up, or “hallucinated,” entire bibliographies. However, the non-robots in my classrooms need to be as open as possible in order to learn the most from one another.
Jörg Matthias Determann teaches history at the Virginia Commonwealth University School of the Arts in Qatar. He benefited from conversations with Jeanne Vaz, Khalid AlHashmi and Chris Alario in writing this article.