Do Labels Matter? Leadership Perception of AI- vs. Human-Labeled Content

Illustration of the experimental design: leaders compare labeled texts (AI-generated vs. human-written) across dimensions of trust, credibility, quality, and creativity (Image generated with ChatGPT)

Topic

This thesis investigates how organizational leaders perceive content labeled as generated by Generative Artificial Intelligence (GAI) compared to content labeled as written by humans. Drawing on sensemaking theory, the study explores how labels act as interpretive cues that influence leaders’ evaluations of trust, credibility, content quality, and creativity. By focusing on leaders rather than general users or students, the research aims to uncover how managerial perceptions shape organizational readiness for AI adoption and collaboration.

Relevance

Understanding leaders’ perceptions of AI-generated content is crucial, as leadership plays a decisive role in organizational adoption of new technologies. While AI is increasingly integrated into workflows, little is known about how leaders interpret its outputs when presented as either AI- or human-authored. Insights from this study contribute to both academic and practical debates on trust, credibility, and creativity in human-AI collaboration, highlighting the importance of labeling, transparency, and leadership training in fostering responsible AI use.

Results

The study, based on a quantitative online experiment with 150 leaders, shows that texts labeled as human-written were rated significantly higher in trust and credibility than texts labeled as AI-generated. However, no significant differences emerged in content quality or creativity. Importantly, leaders' subjective beliefs about authorship influenced all evaluations: those who believed a text was human-written consistently gave higher ratings across every dimension, regardless of the actual label.

Implications for practitioners

  • Labeling strongly influences leaders’ trust and credibility assessments, even when content is identical.
  • Transparency in AI communication strategies is essential to reduce bias and skepticism.
  • Leadership training should address how interpretive cues affect decision-making with AI.
  • Organizations can benefit from fostering hybrid human–AI collaboration models.
  • Belief management (how AI outputs are framed and introduced) may be as important as content quality itself.

Methods

The study employed a quantitative between-subjects experimental design. Leaders recruited via Prolific (N = 150) were randomly assigned to evaluate a business memo labeled either as "AI-generated" or "human-written". Their assessments of trust, credibility, quality, and creativity were measured with validated Likert-scale items. Data were processed in RStudio, including reliability checks and statistical comparisons across conditions. The analysis confirmed that while labels influenced perceptions, leaders' own beliefs about authorship played an even stronger role in shaping evaluations, highlighting the interpretive nature of leadership sensemaking in AI-related contexts.
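
For readers who want to picture the analysis steps, the sketch below shows what such a pipeline could look like in R. It is a minimal illustration, not the original analysis script: the file leader_ratings.csv and the column names (trust_1, trust_2, trust_3, label_condition) are hypothetical placeholders for the actual items and dataset.

```r
# Minimal sketch of a comparable analysis pipeline (hypothetical data layout,
# not the original thesis dataset or item wording).
library(psych)  # alpha() for scale reliability

ratings <- read.csv("leader_ratings.csv")  # hypothetical file, one row per participant

# Reliability check for a multi-item construct (here: trust)
trust_items <- ratings[, c("trust_1", "trust_2", "trust_3")]
print(alpha(trust_items))  # Cronbach's alpha

# Composite score and between-condition comparison (AI-labeled vs. human-labeled memo)
ratings$trust <- rowMeans(trust_items)
t.test(trust ~ label_condition, data = ratings)
```

The same pattern (reliability check, composite score, between-condition test) would be repeated for credibility, quality, and creativity.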