
Trump introduces AI policy to fight ‘bias’ and cut regulations



Former President Donald Trump has introduced a new artificial intelligence initiative that places a strong emphasis on limiting federal regulations and addressing what he describes as political bias within AI systems. As the use of artificial intelligence rapidly expands across sectors including healthcare, national security, and consumer technology, Trump’s approach signals a departure from broader bipartisan and international efforts to impose tighter oversight on the evolving technology.

Trump’s latest proposal, part of his broader 2025 campaign strategy, presents AI as both an opportunity for American innovation and a potential threat to free speech. Central to his plan is the idea that government involvement in AI development should be minimal, focusing instead on reducing regulations that, in his view, may hinder innovation or enable ideological control by federal agencies or powerful tech companies.

While other political leaders and regulatory bodies worldwide are advancing frameworks aimed at ensuring safety, transparency, and ethical use of AI, Trump is positioning his plan as a corrective to what he perceives as growing political interference in the development and deployment of these technologies.

At the heart of Trump’s plan for AI is a broad initiative aimed at cutting what he perceives as excessive bureaucracy. He proposes limiting federal agencies’ ability to use AI in ways that could sway public opinion, political discussion, or policy enforcement toward partisan ends. He contends that AI technologies, notably those employed in areas such as content moderation and surveillance, can be exploited to suppress opinions, particularly those linked to conservative viewpoints.

Trump’s plan indicates that any use of AI by federal authorities must be reviewed to ensure impartiality, and that no system should be allowed to make decisions with political consequences without direct human oversight. This stance is consistent with his persistent criticism of government agencies and major tech companies, which he has often accused of leaning toward left-wing beliefs.

His strategy also involves establishing a team to oversee the deployment of AI in government operations and recommend measures to prevent what he describes as “algorithmic censorship.” The plan argues that systems used to identify false information, hate speech, or unsuitable material could be misused against individuals or groups, and should therefore be held to strict neutrality requirements rather than restrictions on their use.

Trump’s artificial intelligence platform also focuses on the supposed biases integrated into algorithms. He argues that numerous AI systems, especially those created by large technology companies, possess built-in political tendencies influenced by the data they are trained with and the objectives of the organizations that develop them.

Although experts within the AI sector recognize the dangers of bias in large language models and recommendation algorithms, Trump’s perspective emphasizes the possibility that these biases are introduced deliberately rather than accidentally. He proposes auditing and disclosure measures for these systems, advocating transparency about their training processes, the data they use, and how their outputs may vary depending on political or ideological context.

His plan does not detail specific technical processes for detecting or mitigating bias, but it does call for an independent body to review AI tools used in areas like law enforcement, immigration, and digital communication. The goal, he states, is to ensure these tools are “free from political contamination.”

Beyond worries about fairness and oversight, Trump’s strategy aims to ensure that America leads in the AI competition. He expresses disapproval of current approaches that, in his opinion, impose “too much bureaucracy” on developers, while international competitors—especially China—progress in AI technologies with government backing.

To address this, he proposes tax incentives and deregulation for companies developing AI within the United States, along with expanded funding for public-private partnerships. These measures are intended to bolster domestic innovation and reduce reliance on foreign tech ecosystems.

On national security, Trump’s proposal is short on detail, though it acknowledges the dual-use nature of AI technologies. It calls for tighter controls on the export of critical AI tools and intellectual property, especially to nations regarded as strategic competitors. However, it does not explain how such restrictions would be enforced without hindering global research collaborations or trade.

Notably, Trump’s AI framework makes limited mention of data privacy, a concern that has become central to many other proposals in the U.S. and abroad. While he acknowledges the importance of protecting Americans’ personal information, the emphasis remains primarily on curbing what he views as ideological exploitation rather than the broader implications of AI-enabled surveillance or data misuse.

This omission has drawn criticism from privacy advocates, who argue that AI technologies—especially when deployed in advertising, law enforcement, and the public sector—could pose significant dangers if implemented without sufficient data protections. Trump’s opponents contend that his strategy prioritizes political grievances over comprehensive governance of a transformative technology.

Trump’s AI agenda stands in sharp contrast to emerging legislation in Europe, where the EU AI Act aims to classify systems based on risk and enforce strict compliance for high-impact applications. In the U.S., bipartisan efforts are also underway to introduce laws that ensure transparency, limit discriminatory impacts, and prevent harmful autonomous decision-making, particularly in sectors like employment and criminal justice.

By advocating a hands-off approach, Trump is betting on a deregulatory strategy that appeals to developers, entrepreneurs, and those skeptical of government intervention. However, experts warn that without safeguards, AI systems could exacerbate inequalities, propagate misinformation, and undermine democratic institutions.

The timing of Trump’s AI proposal appears closely tied to his 2024 election campaign. His message—framed around freedom of speech, fairness in technology, and protection against ideological control—resonates with his political base. By positioning AI as a battleground for American values, Trump seeks to differentiate his platform from other candidates who support tighter oversight or more cautious adoption of emerging tech.

The proposal also reinforces Trump’s wider narrative of battling what he characterizes as an entrenched political and tech establishment. In this framing, AI becomes not only a technological matter but also a cultural and ideological one.

Whether Trump’s AI plan gains traction will depend largely on the outcome of the 2024 election and the makeup of Congress. Even if passed in part, the initiative would likely face challenges from civil rights groups, privacy advocates, and technology experts who caution against an unregulated AI landscape.

As artificial intelligence continues to evolve and reshape industries, governments around the world are grappling with how best to balance innovation with accountability. Trump’s proposal represents a clear, if controversial, vision—one rooted in deregulation, distrust of institutional oversight, and a deep concern over perceived political manipulation through digital systems.

What remains unclear is whether this approach can deliver both the freedom and the safeguards needed to steer AI development in a direction that benefits society as a whole.

By Sofía Carvajal