
AI at a Crossroads: The Impact of the U.S. Presidential Election on Artificial Intelligence Regulation and Innovation

A Victory for Trump Might Accelerate Risky AI Advances

Should Donald Trump secure the presidency in the upcoming November election, the safeguards surrounding the advancement of artificial intelligence could be weakened, amidst escalating concerns over the risks posed by flawed AI systems.

A second Trump term could significantly alter, and potentially undermine, efforts to protect Americans from the many risks of flawed artificial intelligence, including misinformation, discriminatory bias, and faulty algorithms in technologies like self-driving cars.

Under an executive order issued by President Joe Biden in October 2023, the federal government has begun overseeing and advising AI companies. Trump has vowed to rescind that order, and the Republican Party platform asserts that it restricts AI innovation and imposes radical left-wing ideas on the technology's development.

Trump's pledge has thrilled critics of the executive order, who view it as unlawful, dangerous, and an obstacle in America's technological race with China. Those critics include several of Trump's key backers, from X owner Elon Musk and venture capitalist Marc Andreessen to Republican members of Congress and nearly twenty GOP state attorneys general. Trump's running mate, Ohio Senator JD Vance, is staunchly opposed to AI regulation.

"Jacob Helberg, a technology executive and advocate for artificial intelligence, known as the 'Silicon Valley’s liaison to Trump,' mentions that Republicans are cautious about hastily imposing too many regulations on this sector."

However, technology and cybersecurity experts warn that removing the executive order's safeguards could undermine the trustworthiness of AI systems that are creeping into every facet of American life, from transportation and health care to employment and surveillance.

The forthcoming presidential race could play a pivotal role in deciding if AI transforms into an unmatched instrument of efficiency or an unmanageable force of disorder.

Oversight and Guidance

The directive issued by Biden encompasses a range of applications for AI, including enhancing health care for veterans and establishing protective measures for its use in pharmaceutical research. However, the majority of the political debate surrounding the executive order revolves around two specific parts related to managing cybersecurity threats and the effects of AI on physical safety.

One provision requires developers of powerful AI models to disclose to the government how they train the models and how they protect them from tampering and theft, including the results of "red-team tests" designed to expose vulnerabilities by simulating attacks. Another provision directs the Commerce Department's National Institute of Standards and Technology (NIST) to produce guidance that helps companies build AI models that are resistant to cyberattacks and free of discriminatory bias.

Work on these initiatives is well under way. The government has proposed that AI developers submit reports every three months, and NIST has published several AI guidance documents covering topics such as risk management, secure software development, watermarking of synthetic content, and preventing the misuse of models. It has also launched multiple programs to encourage model testing.

Proponents argue that these measures are crucial for ensuring fundamental regulatory supervision over the swiftly growing artificial intelligence sector and encouraging developers to enhance security measures. However, opponents from the conservative spectrum view the mandate for reporting as an unlawful intrusion by the government, which they fear will stifle innovation in AI and risk the confidentiality of developers' proprietary techniques. Additionally, they criticize the guidelines provided by NIST as a strategy by the left to introduce extreme progressive ideas concerning misinformation and prejudice into AI, which they believe equates to the suppression of conservative voices.

At a rally in Cedar Rapids, Iowa, in December, Trump attacked Biden's executive order, claiming without evidence that the Biden administration had already used artificial intelligence for nefarious purposes.

"In my next term," he declared, "I intend to revoke Biden's executive order on artificial intelligence and immediately prohibit AI from suppressing the expression of Americans from the very first day."

Investigative Overreach or Necessary Precaution?

Biden's initiative to collect information about how companies develop, test, and secure their AI models sparked controversy among lawmakers almost as soon as it was announced.

Republican members of Congress noted that Biden based the new requirement on the Defense Production Act of 1950, a wartime law that lets the government direct private-sector activities to ensure a reliable supply of goods and services. They condemned Biden's move as inappropriate, illegal, and unnecessary.

Conservative critics have also attacked the reporting mandate as an unnecessary burden on businesses. At a March hearing she chaired on "White House overreach on AI," Representative Nancy Mace warned that the requirement "might deter potential innovators and hinder further advancements similar to ChatGPT."

Helberg argues that an onerous reporting obligation would benefit established corporations and hurt startups. He adds that critics in Silicon Valley worry the requirements could pave the way for a licensing regime in which developers must obtain government approval before testing models.

Steve DelBianco, the chief executive officer of the right-leaning technology organization NetChoice, expresses concern that the mandate to disclose outcomes from red-team assessments essentially acts as indirect censorship. This is because the government will be searching for issues such as bias and misinformation. "I'm deeply troubled by the prospect of a liberal administration…whose red-team evaluations could lead AI to limit its output to avoid setting off these alarms," he states.

Conservative voices contend that regulatory measures which hinder the advancement of artificial intelligence could significantly disadvantage the United States in its tech rivalry against China.

"Helberg notes that their approach is highly assertive, with the pursuit of AI superiority being a fundamental goal in their military strategy. He also mentions that the difference in capabilities between us and the Chinese is narrowing each year."

"Socially Conscious" Safety Protocols

NIST's decision to address societal harms in its AI safety guidance has angered conservatives, opening a new front in the culture war over content moderation and free speech.

Republicans denounce the NIST guidance as a form of backdoor government censorship. Senator Ted Cruz has slammed NIST's AI "safety" standards as an attempt by the Biden administration to control speech under the guise of preventing vaguely defined societal harms. NetChoice has warned NIST against overstepping its authority with quasi-regulatory measures that upset the proper balance between transparency and free speech.

Many conservatives flatly reject the idea that AI perpetuates societal harms at all, and they object to requiring that models be deliberately engineered to avoid them.

"Helberg argues that the proposed solution is addressing a non-issue, stating, 'There hasn't been significant proof of widespread problems concerning AI bias.'"

Yet studies and investigations consistently find that AI systems harbor biases that perpetuate discrimination in areas such as hiring, policing, and health care, and research suggests that people exposed to these biases may unconsciously adopt them.

Conservatives worry more about AI companies overcorrecting for bias than about the problem itself. "There's a clear negative relationship between how 'woke' an AI is and how useful it is," Helberg says, pointing to an early problem with Google's generative AI tool.

Republicans want NIST to focus instead on AI's physical safety risks, including its potential to help terrorists build biological weapons, a threat Biden's executive order also addresses. If Trump wins, his appointees are expected to deprioritize research into AI's societal impacts; Helberg complains that the extensive research devoted to AI bias overshadows the more serious risks of terrorism and biological warfare.

Defending a Light-Touch Approach

Democratic lawmakers and many AI experts strongly defend Biden's AI safety agenda.

Representative Ted Lieu, the Democratic co-leader of the House's AI task force, states that these initiatives ensure that the United States continues to lead in AI advancement and safeguards its citizens against possible dangers.

A US government official who works on AI issues says the reporting requirements are essential for alerting the government to potentially dangerous capabilities emerging in increasingly powerful models. The official, who requested anonymity to speak freely, points to OpenAI's acknowledgment that its newest model refused requests to help synthesize nerve agents only inconsistently.

The official also says the reporting mandate is not overly burdensome, arguing that, unlike AI regulations in the European Union and China, Biden's executive order reflects "a comprehensive, minimal-interference strategy that still encourages creative advancement."

Nick Reese, the inaugural director of emerging technology at the Department of Homeland Security from 2019 to 2023, disputes right-leaning arguments suggesting that the reporting mandate will endanger firms' proprietary technology. He believes it may, in fact, advantage new companies by motivating them to create AI models that are "more computationally efficient" and use less data, thus staying below the reporting limit.

Ami Fields-Meyer, a former White House technology official involved in crafting Biden's executive order, emphasizes the necessity of governmental regulation due to the significant influence of artificial intelligence.

Fields-Meyer points out, “These are firms claiming to develop the strongest technologies ever known. The primary duty of the government is to safeguard its citizens. Simply saying 'Trust us, we're handling it' doesn't really hold much weight.”

Industry specialists commend the National Institute of Standards and Technology's (NIST) security recommendations as crucial for incorporating safety measures into emerging technologies. They highlight that defective artificial intelligence (AI) models can lead to significant societal issues, such as discrimination in housing and loan services and the wrongful denial of government assistance.

Trump's own AI executive order from his first term required federal AI systems to respect civil liberties, an obligation that entails research into societal harms.

The artificial intelligence sector has generally embraced President Biden's focus on safety measures. According to a US official, there's a consensus that clearly defining these guidelines is beneficial. For emerging companies with limited personnel, "it enhances their team's ability to tackle these issues."

Repealing Biden's executive order would send an alarming signal that "the US government plans to adopt a laissez-faire stance towards AI security," says Michael Daniel, a former presidential cybersecurity adviser who now leads the Cyber Threat Alliance, a nonprofit that shares intelligence on cyber threats.

Regarding the rivalry with China, supporters of the Executive Order argue that implementing security regulations will enable the United States to gain an upper hand. They believe these measures will enhance the performance of American AI technologies over their Chinese counterparts and safeguard them against China's attempts at economic espionage.

Diverging Futures Ahead

If Trump wins the White House, expect a sweeping change in how the government approaches AI safety.

Republicans want to mitigate AI's risks by applying "current tort and statutory laws" rather than enacting broad new regulations on the technology, Helberg says, and they would rather focus on capitalizing on the opportunities AI creates than on minimizing its risks. That approach would likely doom the reporting requirement and could also affect some of the NIST guidelines.

The reporting requirement could also face legal challenges, especially now that the Supreme Court has curtailed the deference that courts long gave to agencies in evaluating their regulations.

Resistance from the GOP might also put at risk the voluntary partnerships that NIST has formed with top companies for testing AI. "What will become of those agreements under a new government?" questions the US official.

The division regarding AI has caused concern among tech experts who fear that Trump's actions could hinder efforts to develop more secure algorithms.

"Nicol Turner Lee, who leads the Center for Technology Innovation at the Brookings Institution, points out that while AI offers great potential, it also comes with significant risks. She emphasizes the importance of the forthcoming president maintaining the safety and security of these technologies."
