
Founder's Pitch

"Exploring the impact of perceived political bias in LLMs on their persuasive abilities in conversational settings."

NLP and Society (Score: 2)

Commercial Viability Breakdown (0-10 scale)

High Potential: 0/4 signals, score 0
Quick Build: 0/4 signals, score 0
Series A Potential: 1/4 signals, score 2.5

