Trump details AI plan designed to restrict regulations and minimize ‘bias’

Former President Donald Trump has announced a new artificial intelligence plan that focuses heavily on reducing federal oversight and combating what he calls political bias within AI systems. As artificial intelligence expands rapidly across fields such as healthcare, national defense, and consumer technology, Trump's approach marks a break from broader bipartisan and international efforts to impose stricter scrutiny on the advancing technology.

Trump's latest proposal, a central plank of his 2024 campaign platform, portrays AI as a double-edged force: a catalyst for American innovation and a potential threat to free expression. Central to the plan is the notion that government involvement in AI development should be limited, with an emphasis on cutting regulations that, in his view, could obstruct innovation or enable ideological control by federal agencies or powerful technology firms.

While other political leaders and regulatory bodies worldwide are advancing frameworks aimed at ensuring safety, transparency, and ethical use of AI, Trump is positioning his plan as a corrective to what he perceives as growing political interference in the development and deployment of these technologies.

At the core of Trump’s AI strategy is a sweeping call to reduce what he considers bureaucratic overreach. He proposes that federal agencies be restricted from using AI in ways that could influence public opinion, political discourse, or policy enforcement in partisan directions. He argues that AI systems, particularly those used in areas like content moderation and surveillance, can be manipulated to suppress viewpoints, especially those associated with conservative voices.

Trump’s proposal suggests that any use of AI by the federal government should undergo scrutiny to ensure neutrality and that no system is permitted to make decisions with potential political implications without direct human oversight. This perspective aligns with his long-standing criticisms of federal agencies and large tech firms, which he has frequently accused of favoring left-leaning ideologies.

His plan also includes the formation of a task force that would monitor the use of AI within the government and propose guardrails against what he terms “algorithmic censorship.” The initiative implies that algorithms used to flag misinformation, hate speech, or inappropriate content could be weaponized against individuals or groups, and should therefore be tightly regulated, not in how they are applied but in whether they remain neutral.

Trump's artificial intelligence platform also takes aim at the alleged biases embedded in algorithms. He argues that many AI systems, especially those built by large technology companies, carry inherent political leanings shaped by the data they are trained on and the objectives of the organizations that develop them.

While researchers in the AI community do acknowledge the risks of bias in large language models and recommendation systems, Trump's approach emphasizes the possibility that such biases are introduced deliberately rather than arising inadvertently. He proposes mechanisms to audit and expose such systems, pushing for transparency around how they are trained, what data they rely on, and how their outputs may differ depending on political or ideological context.

His plan does not detail specific technical processes for detecting or mitigating bias, but it does call for an independent body to review AI tools used in areas like law enforcement, immigration, and digital communication. The goal, he states, is to ensure these tools are “free from political contamination.”

Beyond concerns over bias and regulation, Trump’s plan seeks to secure American dominance in the AI race. He criticizes current strategies that, in his view, burden developers with “excessive red tape” while foreign rivals—particularly China—accelerate their advancements in AI technologies with state support.

In response, he proposes tax incentives and lighter regulation for companies developing AI in the United States, along with increased funding for public-private partnerships. These measures are intended to strengthen innovation at home and reduce dependence on foreign technology supply chains.

On national security, Trump’s plan is less detailed, but he does acknowledge the dual-use nature of AI technologies. He advocates for tighter controls on the export of critical AI tools and intellectual property, particularly to nations deemed strategic competitors. However, he stops short of outlining how such restrictions would be implemented without stifling global research collaborations or trade.

Notably, Trump's AI strategy says little about data privacy, a subject that has become central to many other plans both inside and outside the U.S. Although he acknowledges the need to safeguard Americans' personal data, the focus remains on countering what he considers ideological manipulation rather than on the broader effects of AI-driven surveillance or the mishandling of data.

This absence has drawn criticism from privacy advocates, who argue that AI systems—particularly those used in advertising, law enforcement, and public services—can pose serious risks if deployed without adequate data protections in place. Trump’s critics say his plan prioritizes political grievances over holistic governance of a transformative technology.

Trump’s AI agenda stands in sharp contrast to emerging legislation in Europe, where the EU AI Act aims to classify systems based on risk and enforce strict compliance for high-impact applications. In the U.S., bipartisan efforts are also underway to introduce laws that ensure transparency, limit discriminatory impacts, and prevent harmful autonomous decision-making, particularly in sectors like employment and criminal justice.

By backing a hands-off approach, Trump is betting on a deregulatory message that appeals to developers, entrepreneurs, and those skeptical of government involvement. Experts caution, however, that the absence of safeguards could allow AI systems to deepen inequality, spread misinformation, and undermine democratic institutions.

The timing of Trump's AI announcement appears strategically tied to his 2024 campaign. His narrative, centered on free expression, fairness in technology, and protection against ideological control, resonates with his political base. By casting AI as a battleground for American values, Trump aims to set his agenda apart from candidates who advocate stricter regulation or a more cautious embrace of emerging technologies.

The proposal also reinforces Trump’s broader narrative of fighting against what he describes as an entrenched political and technological establishment. AI, in this context, becomes not just a technological issue, but a cultural and ideological one.

The success of Trump's AI proposal largely hinges on the outcome of the 2024 election and the composition of Congress. Even if some elements are enacted, the plan will likely face resistance from civil liberties organizations, privacy advocates, and technology professionals who warn against an environment in which AI goes unchecked.

As artificial intelligence continues to evolve and reshape industries, governments around the world are grappling with how best to balance innovation with accountability. Trump’s proposal represents a clear, if controversial, vision—one rooted in deregulation, distrust of institutional oversight, and a deep concern over perceived political manipulation through digital systems.

What remains to be seen is whether this approach can deliver both the freedom and the safeguards needed to steer AI development on a path that benefits society as a whole.

By Oliver Blackwood