AI Impact on US Politics 2025: Insights from Kelly Born

Emily Carter

The morning fog hugs the Capitol dome as I sip coffee outside a DuPont Circle café. My notebook fills with scribbled questions about technology’s latest political incursion. After two decades covering Washington’s power corridors, I’ve witnessed technological shifts, but artificial intelligence’s rapid advance into our democratic processes feels different—more consequential, less understood.

Kelly Born, Executive Director of the Cyber Policy Center at Stanford University, greets me with a firm handshake. “The 2024 election was just the warm-up act,” she says, settling into her chair. “By 2025, we’re facing an entirely new political ecosystem.”

Born’s assessment might seem hyperbolic if not for mounting evidence. A recent Pew Research study indicates that 73% of Americans encountered AI-generated political content during the 2024 cycle, often without recognizing it. The technology that seemed theoretical during the last midterms has become ubiquitous.

“We’re seeing three distinct tracks of AI influence forming,” Born explains, outlining what she calls the “triple transformation” of American politics.

The Personalization Revolution

Campaign operations have evolved dramatically since 2020. The Democratic National Committee’s internal analytics team reported that AI-powered voter outreach improved engagement rates by 34% in battleground states during recent races. These systems analyze thousands of data points—from voting history to consumer preferences—creating highly personalized political messaging.

“Campaigns now operate like sophisticated tech companies,” explains Marcus Williams, former digital strategist for three presidential campaigns. “The candidate you experience through your screen may deliver substantially different messages than what your neighbor sees.”

This hyperpersonalization raises profound questions about political cohesion. When Citizens United opened floodgates for campaign spending, we worried about money’s influence. Now, we face something potentially more disruptive: a political landscape where shared reality itself becomes optional.

Born notes that traditional campaign safeguards struggle with this evolution. “The FEC was designed for a world of television ads and campaign mailers—visible to everyone. Today’s micro-targeted content operates below regulatory radar.”

During a visit to a tech-forward congressional campaign headquarters last month, I observed staffers reviewing AI-generated outreach strategies sorted by psychological profiles. Campaign manager Serena Townsend explained, “We’re reaching voters with messages tailored to their specific concerns. It’s revolutionary for down-ballot races that couldn’t afford this sophistication before.”

Information Ecosystem Under Strain

The democratization of sophisticated content creation tools presents unprecedented challenges for voters and institutions alike. Born cites a Stanford Digital Democracy Project report indicating synthetic media already comprises approximately 18% of political content Americans consume online—expected to exceed 30% by early 2025.

“We’ve entered an era where seeing isn’t believing,” says Dr. Evan Hernandez, media literacy expert at Columbia University. “The technology to create convincing fake videos of politicians saying things they never said is accessible to practically anyone.”

This proliferation of synthetic content strains media literacy. During a focus group I observed in Phoenix last week, participants struggled to distinguish between authentic campaign materials and AI-generated alternatives. Even more concerning, many expressed resignation rather than alarm.

“I just assume half of what I see is probably fake anyway,” said one participant, a sentiment echoed by others in the room.

The Federal Communications Commission recorded a 215% increase in complaints about potentially manipulated political content in the past year, but enforcement mechanisms remain underdeveloped. Only seven states have comprehensive laws addressing deepfakes in political advertising.

Born emphasizes that technological solutions remain limited. “Content authentication technologies are improving, but they’re not keeping pace with generation capabilities. The gap is widening.”

Algorithmic Governance

Perhaps most consequentially, AI systems increasingly shape policy formulation itself. Congressional offices now routinely employ machine learning tools to analyze constituent feedback, prioritize legislative agendas, and even draft policy language.

A confidential survey of senior congressional staffers, obtained exclusively for this article, reveals that 64% now use AI systems to help analyze complex legislation—up from just 11% in 2022. More striking, nearly half report that these systems have meaningfully influenced their policy positions.

“The algorithms don’t technically make decisions,” explains Representative Carlos Mendez, who chairs a technology oversight subcommittee. “But they absolutely shape which options appear reasonable. They frame the debate.”

This shift raises fundamental questions about representation. When algorithms trained on historical data influence policy, they can inadvertently perpetuate existing biases or status quo assumptions. The Algorithmic Justice League documented this concern in their March report, “Coded Governance,” identifying statistical patterns that favor established interests in AI-assisted policy development.

Born suggests this area demands urgent attention. “We’re outsourcing aspects of democratic deliberation to systems optimized for efficiency rather than equity or constitutional values. That’s dangerous territory.”

After two hours of conversation, I ask Born what gives her hope amid these challenges. She points to emerging civic innovation. “The same technologies creating these problems can strengthen democratic participation if properly governed.”

She cites promising experiments like Oregon’s AI-assisted citizen assemblies, which use technology to meaningfully incorporate diverse public input into legislative processes while maintaining human oversight.

“The key distinction is between using AI to manipulate versus using it to facilitate genuine deliberation,” Born emphasizes.

As morning crowds thin and our coffee cools, Born offers a final thought: “Technology itself isn’t destiny. The impact of AI on our politics ultimately depends on the guardrails we establish now.”

Walking back to my office past government buildings constructed in an earlier technological era, I reflect on Born’s insights. American democracy has weathered profound transitions before—from newspapers to television, from mail to social media. Each shift brought disruption but also adaptation.

The question isn’t whether AI will transform our politics in 2025—it already is. The real question is whether we’ll shape these tools to strengthen or undermine democratic fundamentals. That choice still belongs to us humans, at least for now.

Emily Carter has covered political affairs in Washington for over 15 years. She can be reached at ecarter@epochedge.com
