Generative Artificial Intelligence (AI) is rapidly making its way into every facet of our lives, including creative spaces such as music composition and creative writing. But do you think you could tell the difference between human- and AI-generated works?
Two researchers from Villanova University in the US, Sydney Sears and Dr Deena Weisberg, demonstrate in their 2025 preprint that, unfortunately, humans may not be as good at distinguishing human- and AI-generated works as we think we are. They conducted two studies to examine whether people could separate human from AI work. In one study, participants were asked to read a human-written and a ChatGPT-generated short story and to guess the stories’ origins. Unsurprisingly, most people failed at the task.
In the second study, the authors found that participants who were assigned to read ChatGPT-generated work rated the story higher in quality and absorption than those assigned to read human-authored work. More interestingly, regardless of what they actually read, those who believed they were reading human-written work rated the story higher in quality and absorption. This further highlights that we humans struggle to distinguish human and AI-generated work, and our judgment is heavily influenced by our own preconceptions.
This difficulty extends to other creative fields. An Ipsos survey of 9,000 participants across eight countries revealed that 97% of participants could not reliably tell when music had been composed by AI, further illustrating our inability to distinguish AI-generated works from human ones.
It is easier to recognise the difference between human- and AI-generated works when a piece relies on facts and arguments, because generative AI tends to misquote or hallucinate sources of information. With creative works, on the other hand, the differences are harder to spot. Human writing is often considered deeper, more emotional, and more coherent, and most people are confident that they would have a 'sense' for the difference, and are genuinely surprised when that sense fails them.
One reason why this sense might fail is that generative AI works by predicting the most likely next item in a sequence: the next word in a sentence, the next note in a melody, or the next frame in a video. These predictions are based on every piece of work the model has access to, including human-authored works, so the new works it produces resemble human creations. The more human-created content is used to train a model, the better it becomes at generating works that could be mistaken for human ones.
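The prediction loop described above can be illustrated with a deliberately tiny sketch: a bigram model that counts which word follows which in a toy corpus, then repeatedly picks a likely next word. The corpus and function names here are hypothetical stand-ins; real generative models learn far richer statistics from vastly more data, but the generate-one-item-at-a-time principle is the same.

```python
import random
from collections import defaultdict

# Hypothetical toy corpus standing in for the vast body of
# human-authored text a real model is trained on.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows each word (a bigram model: a drastically
# simplified stand-in for a learned next-token distribution).
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def generate(start, length=6):
    """Repeatedly predict a likely next word, the same way a
    generative model predicts the next token in a sequence."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:  # no observed continuation; stop early
            break
        words.append(random.choice(candidates))
    return " ".join(words)

# Each run produces a plausible-looking phrase assembled purely
# from statistics of the training text.
print(generate("the"))
```

Because every continuation is sampled from patterns observed in the training text, the output inevitably echoes the style of that text, which is precisely why larger models trained on human writing produce works that read as human.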
Notably, some scholars and critics argue that artificially generated creative works blur the line between inspiration and plagiarism. AI works may be technically impressive, deploying the right techniques and phrasing found in good work, but some critics say that they lack the originality, imagination, and rule-breaking that define human art. In other words, AI 'creativity' lacks intention.
Despite this, people consistently fall for AI-generated works and fail to tell them apart from human creations. Catchy AI-generated songs on Spotify are now mistaken for compositions by human artists, while AI-generated videos and images on Instagram, TikTok, and YouTube attract comments from users, many of whom fail to recognise that they are not real.
AI-generated books are also being sold and bought on Amazon, some without any acknowledgement of AI involvement. The problems with these works go beyond the fact that they are generated from human creations (the copyright concern). They can also distort the truth, misleading uninformed audiences and further amplifying the spread of misinformation. As generative AI models improve at creating works that resemble human creations, identifying their origins becomes increasingly challenging.
So what’s next? As AI embeds itself deeper into the creative industries, questions remain about how to respond to its generated works and how to determine whether a work is made by AI or by a human. One suggestion is to have clear labelling of the source; however, until this practice is enforced by law everywhere, unfortunately, we can still expect to see masses of AI-generated works that fool even discerning audiences.