The Psychology of AI Credibility

By neub9
Enrique Leon, AI and Cloud Enterprise Architect at American Sugar Refining

Artificial intelligence (AI) is increasingly used to generate content, such as text, images, music, and videos, that can influence human beliefs, attitudes, and behaviors. However, not all AI-generated content is accurate, reliable, or ethical. Some AI systems may produce misleading, biased, or harmful content, intentionally or unintentionally, that can negatively affect individuals and society. Therefore, it is important to understand how people evaluate the credibility of AI-generated content and how it compares to human-generated content.

This article explores the psychological factors that affect people’s trust in AI-generated content and why they may accept it as true more readily than human-generated content. It reviews the existing literature on the topic and proposes a conceptual framework that explains the main cognitive and affective processes involved. It also discusses the implications of the findings for the design and regulation of AI systems and for the education and empowerment of users.

‘Users should be empowered and engaged in the co-creation and governance of AI systems and have the opportunity to express their opinions and concerns about the systems and their outputs.’

Literature Review

A growing body of research examines how people perceive and respond to AI-generated content, especially in the domains of text and image generation. Some of the main themes that emerge from this literature are:

  • People generally trust AI-generated content, especially when they are unaware of its source or hold a positive attitude toward AI.
  • People are influenced by the quality, coherence, and consistency of AI-generated content and by the cues and context accompanying it.
  • People are more likely to accept AI-generated content as true when it confirms their prior beliefs, preferences, or expectations or when it appeals to their emotions or motivations.
  • People are less likely to question or verify AI-generated content than human-generated content because they perceive AI sources as less accountable, responsible, or intentional.
  • People are more susceptible to the effects of AI-generated content when they have low levels of media literacy, critical thinking, or digital skills, or when they are in situations of high uncertainty, complexity, or information overload.

Conceptual Framework

Based on the literature review, a conceptual framework is proposed that illustrates the main psychological factors affecting people’s trust in AI-generated content relative to human-generated content. The framework consists of four components: source, message, receiver, and situation. Each component has several subcomponents representing the specific variables that influence trust, and the framework also captures the interactions and feedback loops among these components and subcomponents. The subcomponents that help explain why people trust AI-generated content include:

  • Perceived Objectivity – People tend to perceive AI as objective and free of personal agendas.
  • Consistency and Reliability – Trust builds when AI reliably delivers consistent, high-quality content.
  • Authority Attribution – People attribute authority to AI because it relies on advanced technologies, often without realizing that the field goes back decades.
  • Lack of Emotional Bias – Because AI lacks emotions, users worry less about emotionally driven distortions.
  • Transparency – Trust grows when users perceive the system’s explanations of its outputs as transparent.
  • Accuracy and Precision – Users believe AI outputs are accurate and precise.
  • Social Proof – Widespread adoption of AI and positive user experiences reinforce trust.
  • Confirmation Bias Mitigation – AI-generated content may mitigate confirmation bias by presenting information objectively.

Discussion

The proposed conceptual framework can help explain the psychological mechanisms that underlie people’s trust in AI-generated content and why people may accept it as true more readily than human-generated content. The framework can also inform the design and regulation of AI systems and the education and empowerment of users. Some of the possible implications are:

  • AI systems should be transparent and accountable about their sources, methods, and goals, and provide clear and accurate information about the quality, reliability, and limitations of their outputs.
  • AI systems should be ethical and responsible in generating content that respects human values, rights, and dignity, and should avoid producing content that is misleading, biased, or harmful.
  • AI systems should be adaptable and responsive to users’ feedback and preferences, allowing users to control and customize their interactions with the systems.
  • Users should be aware and informed about the existence and potential effects of AI-generated content and develop the skills and competencies to critically evaluate and verify the content they encounter.
  • Users should be empowered and engaged in the co-creation and governance of AI systems and have the opportunity to express their opinions and concerns about the systems and their outputs.

This article has explored the psychology of AI credibility and why people may trust AI-generated content more than human-generated content. It reviewed the existing literature on the topic and proposed a conceptual framework that explains the main cognitive and affective processes involved. It also discussed the implications of the findings for the design and regulation of AI systems and for the education and empowerment of users. The article aims to contribute to the advancement of research and practice in this important and emerging field.
