New Insights Reveal Growing Political Manipulation Risks in AI Systems Shaping Public Discourse

Understanding the Risks of AI in Shaping Public Discourse

Recent studies have shed light on the pressing issues surrounding the use of Artificial Intelligence (AI), particularly Large Language Models (LLMs), in shaping public conversations. A comprehensive report by PSG Consulting and Innovating for the Public Good highlights alarming trends that could endanger the very fabric of democratic discourse in the U.S.

The Foundation of the Research

This groundbreaking investigation, conducted by the Dewey Square Group, scrutinizes the sources of data used to train these AI models. The findings reveal a significant imbalance in the accessibility of information that feeds into AI systems. It appears that the vast majority of high-factuality, center-left media outlets impose strict barriers against AI web crawlers, effectively limiting the data available for model training.

Conversely, sites with lower factual accuracy, often aligned politically to the far-right, offer much greater accessibility to AI crawlers. The implications of these findings are substantial, as they suggest that the quality of information processed by AI could become heavily skewed towards less reliable sources, consequently shaping the narratives and facts encountered by the public.

Key Findings of the Study



The research presents several alarming statistics about the current state of data accessibility:
  • Center-left media outlets make less than 40% of their data available to AI crawlers, severely limiting the quality of training data.
  • In stark contrast, nearly 80% of far-right sites are accessible to these crawlers, raising concerns about the resulting bias in AI outputs.
  • Websites rated with

