Every new communication technology arrives with a familiar promise: more access, more participation, more democracy. The printing press promised liberation from clerical authority; radio promised a national conversation; the early internet promised a decentralized public sphere. Artificial intelligence arrives wrapped in similar rhetoric. But beneath the optimism lies a quieter shift: political speech is becoming automated, and the power to shape that automation is drifting into the hands of institutions—public and private—that are not always accountable to the people they claim to serve.
AI systems—especially the large language models that now mediate how millions of people search, write, and understand the world—are no longer just tools for expression. They are emerging as producers of political discourse. They summarize legislation, explain court rulings, and generate arguments with a fluency that once required education, time, and institutional support. Courts, legislatures, and administrative agencies are already experimenting with AI‑assisted research and summarization, integrating these systems into the everyday work of governance.
But the same systems can generate distortions at a speed no human propagandist could match. During the 2024 New Hampshire primary, voters received robocalls using an AI‑generated imitation of President Joe Biden’s voice, urging them not to vote. Analysts also noted that synthetic political images and videos circulated online throughout the election cycle, often in small, fast‑moving bursts that were difficult for platforms or journalists to track in real time. These incidents revealed how cheaply political speech can now be manufactured and how precisely it can be targeted.
The risks extend beyond elections. AI systems have been documented fabricating legal citations—complete with invented case names and plausible‑sounding quotations—when used by attorneys who trusted them too readily. In 2023, a federal judge sanctioned lawyers for submitting a brief filled with AI‑generated cases that did not exist. Similar episodes have surfaced in immigration courts, bankruptcy filings, and academic research. These “hallucinations” expose how fragile the infrastructure of truth verification becomes when machines can produce convincing falsehoods faster than institutions can check them.
The political implications are contradictory. AI can clarify public debate, but it can just as easily flood it. It can restate arguments neutrally, but it can also tailor persuasion to individual psychology. It can empower citizens, but it can also empower the actors—campaigns, corporations, governments—who already possess disproportionate influence. These contradictions reflect the incentives of the institutions deploying the technology, not the technology itself.

Every medium has reshaped politics by shifting the cost of producing and distributing ideas. The printing press made text abundant; newspapers industrialized narrative; radio nationalized persuasion; television professionalized performance; social media fragmented the public square. AI introduces a new shift: it makes arguments abundant. When arguments become cheap, the advantage goes to those who can generate them in the greatest volume, distribute them most widely, and target them most precisely.
AI systems are also being shaped by geopolitical boundaries. The European Union has taken the most comprehensive regulatory approach. The EU AI Act classifies systems by risk, treating AI intended to influence elections or voting behavior as high‑risk. Starting in 2026, deepfakes and AI‑generated text published to “inform the public on matters of public interest” must be clearly labeled. This reflects the EU’s rights‑based governance tradition, though it raises questions about enforcement and bureaucratic overreach.
The United States has taken a more fragmented path. Federal agencies have begun to intervene at the edges of political communication. In 2024, the Federal Communications Commission proposed rules requiring broadcasters to disclose when political advertisements contain AI‑generated content. Several states have enacted their own disclosure requirements for AI‑generated political advertising. These measures reflect the U.S. preference for light‑touch regulation and constitutional sensitivities around political speech, but they leave large gaps—especially online, where most persuasion now occurs.
China represents a third model: AI as an extension of state authority. Chinese regulations require that AI outputs align with “core socialist values,” a phrase that functions less as guidance than as a flexible instrument of political control. Providers must ensure that their systems do not generate content that undermines social stability or challenges official narratives. The rules are explicit about the state’s prerogative to intervene, but opaque about how decisions are made or enforced. In this context, AI becomes not just a tool of governance but a mechanism for shaping the boundaries of permissible thought.
These approaches—rights‑based transparency, market‑mediated disclosure, and state‑directed stability—show how geopolitical priorities become technical constraints. They also show why the tension is not simply private versus public. State control can be as opaque and unaccountable as corporate control, and sometimes more so. The real question is whether any governance regime—European, American, Chinese, or otherwise—can shape AI in ways that strengthen democratic life rather than narrowing it.
For all the transformative rhetoric, there are reasons to temper expectations. Political identity is stubborn. People rarely abandon their beliefs because a machine produces a more articulate counterargument. And for every AI‑powered persuasion tool, there will be an AI‑powered countermeasure. The likely outcome is not a revolution in political consciousness but an escalation in political tactics—a more articulate, more automated version of the conflicts we already have.
Still, the stakes are real. AI will shape who gets heard, who gets drowned out, and who gets to define the boundaries of legitimate debate. It will influence how institutions function and how citizens understand the world. And because these systems are being built and governed by a mix of private firms, regulatory bodies, and state agencies—none of them fully transparent—the public has limited insight into how they evolve.
If AI is becoming part of the political infrastructure, the question is not whether we can halt its spread, but whether democratic societies can shape the conditions under which it operates. That work will not be accomplished by a single law or a single regulator. It will require a patchwork of norms and institutions—some old, some new, some public, some professional, some civic—that together create the friction, transparency, and accountability democratic life depends on.
Some guardrails will take the form of transparency norms. Campaigns, newsrooms, and advocacy groups may come to treat disclosure of AI‑generated content the way they treat conflict‑of‑interest statements: not as a legal requirement, but as a basic expectation of public life. Platforms could adopt voluntary labeling standards for synthetic media. Government agencies might publish logs of when AI systems are used to draft letters or summarize case files. These norms are imperfect, but they create a baseline of visibility.
Other safeguards will look like institutional slow lanes—intentional friction in systems that would otherwise accelerate beyond human oversight. Courts and administrative agencies may require human review of AI‑generated filings. Legislatures might impose waiting periods before AI‑assisted regulations take effect. Citizen assemblies or ethics boards could evaluate high‑impact uses of AI in public institutions. These measures do not ban automation; they ensure that speed does not substitute for judgment.
A third category involves public‑interest infrastructure. Just as societies built public libraries, public broadcasters, and open‑data portals, they may need publicly funded or open‑source AI models designed for civic use. Independent auditing organizations—universities, nonprofits, standards bodies—could evaluate AI systems the way Underwriters Laboratories evaluates electrical devices. Shared datasets for legislative summaries or public‑health communication could reduce reliance on proprietary systems. These institutions would not replace private models, but they would counterbalance them.
There is also a role for platform governance norms that shape how AI‑generated content circulates. Rate limits on synthetic political messages during election periods, traceability requirements for mass‑generated communications, and public archives of political ads—AI‑generated or otherwise—would not dictate what people can say. They would simply make the mechanics of persuasion visible.
Professions, too, will need their own ethical codes. Lawyers are already being sanctioned for submitting AI‑fabricated citations; bar associations may soon require verification of machine‑generated research. Journalists will need guidelines for when and how AI can be used in reporting. Academics will need norms around disclosure of AI assistance. These are cultural guardrails, not regulatory ones.
And finally, democratic societies will need civic literacy—not the tech‑industry version that treats AI as a marvel to be mastered, but a democratic version that teaches people how persuasion works, how information circulates, and how to evaluate sources. Public‑school curricula, library workshops, and election‑season public‑service campaigns can help citizens recognize synthetic media and understand its limits. This is the slowest solution, but historically the most durable.
None of these measures, taken alone, will resolve the contradictions of AI in political life. But together they sketch the outline of democratic stewardship: a system in which no single actor—corporate, governmental, or algorithmic—sets the terms of public debate. AI is not destiny. It is infrastructure. And like all infrastructure, it will serve democracy only if democracy insists on shaping it.