GAINESVILLE, Fla. — Can you spot an article written by artificial intelligence? It’s not as easy as you might think. But whether or not you can detect robot-generated content, a new study finds the mere suggestion that something was written by AI is enough to turn readers against it.

Specifically, a team from the University of Florida and the University of Central Florida suggests our prejudices about artificial intelligence might be clouding our judgment. Their research shows that people automatically downgrade stories they believe were written by AI – even when they were actually penned by humans!

The team discovered that the latest version of ChatGPT can produce stories that nearly match the quality of human writing. However, there’s a catch: simply suggesting that AI wrote a story makes people less likely to enjoy reading it.

“People don’t like when they think a story is written by AI, whether it was or not,” explains Dr. Haoran “Chris” Chu, a public relations professor at the University of Florida who co-authored the study, in a media release.

The research, published in the Journal of Communication, involved showing participants different versions of the same stories – some written by humans, others by ChatGPT. To test people’s biases, the researchers cleverly switched up the labels, sometimes correctly identifying the author and other times deliberately mislabeling them.

The study focused on two key aspects of storytelling. The first, called “transportation,” is that familiar feeling of being so absorbed in a story that you forget your surroundings – like when you’re so engrossed in a movie that you don’t notice your uncomfortable theater seat. The second aspect, “counterarguing,” happens when readers mentally pick apart a story’s logic or message.

While AI-written stories proved just as persuasive as human-written ones, they weren’t quite as successful at achieving that coveted “transportation” effect.

“AI is good at writing something that is consistent, logical and coherent. But it is still weaker at writing engaging stories than people are,” Chu notes.

The findings could have important implications for fields like public health communication, where engaging narratives are crucial for encouraging healthy behaviors such as vaccination. However, this new research suggests that being upfront about AI authorship might actually undermine these efforts due to reader bias.

There’s some good news for creative professionals, though.

“AI does not write like a master writer. That’s probably good news for people like Hollywood screenwriters – for now,” Chu concludes.

Paper Summary

Methodology

This study used a series of pre-registered experiments to compare human- and AI-generated narratives. Participants were divided into groups to read either AI-created or human-created stories. Each narrative was designed with similar characters and plots to isolate the effect of the narrative source on participants’ reactions.

Methods included scales for measuring how transported the readers felt into the story, their level of counterarguing (or disagreement with the narrative), and their belief in the story’s messages. Using a controlled sample, researchers analyzed whether AI-generated stories achieved similar levels of reader engagement and persuasion as human-written narratives.

Key Results

The results showed mixed responses to AI- and human-created narratives. AI narratives often led to lower levels of counterarguing, meaning participants were less likely to mentally challenge the story. However, narratives attributed to humans often produced higher levels of transportation, or the feeling of being drawn into the story. This suggests that while AI stories are less scrutinized, human-authored ones are more emotionally engaging, potentially because of perceived authenticity.

Study Limitations

One limitation of the study is its reliance on specific narrative topics, which may not generalize across all storytelling contexts. Additionally, labeling effects (where a story’s source is disclosed as either AI or human) may influence participants’ openness to the story, adding a psychological factor unrelated to the narrative content itself. The design would also benefit from testing in diverse cultural contexts to assess how broadly the findings apply.

Discussion & Takeaways

This study suggests that AI, while capable of constructing coherent and persuasive stories, still lacks the creative depth that human-authored narratives often provide. Human creativity and lived experience appear to enhance narrative engagement in ways AI cannot fully replicate. However, AI’s logical cohesion and ability to deliver structured stories point to its potential as a complementary storytelling tool, especially for cost-effective, large-scale content production.

