The AI executive order and the backlash
On Oct. 30, President Joe Biden unveiled a sweeping Executive Order to steer the burgeoning field of Artificial Intelligence (AI) toward what he hopes will be a path of safety, innovation, and broader societal good.
The order, which organizes its nearly 20,000 words around eight distinct pillars, aims to foster a harmonious AI landscape by bridging federal directives with private sector aspirations. However, the path toward a balanced AI ecosystem invites myriad debates, with critics voicing concerns over potential bureaucratic hurdles. As America steps into this new AI era, the dialogue among policy, innovation, and societal impact is just beginning, promising a narrative rich with challenges and opportunities.
NetChoice is one such organization that’s already found issues with President Biden’s approach. The association, devoted to “light-touch regulation” of the Internet – and counting major companies like eBay and Airbnb among its members – is advocating for the same approach to AI.
Carl Szabo, vice president and general counsel for the association, called the order an “AI Red Tape Wishlist.”
“Biden’s new executive order is a back-door regulatory scheme for the wider economy which uses AI concerns as an excuse to expand the President’s power over the economy. There are many regulations that already govern AI. Instead of examining how these existing rules can be applied to address modern challenges, Biden has chosen to further increase the complexity and burden of the federal code. … (The order) will result in stifling new companies and competitors from entering the marketplace and significantly expanding the power of the federal government over American innovation.”
U.S. Senate Majority Leader Charles Schumer, D-New York, disagreed, calling the order a “crucial step,” while acknowledging that it will be up to Congress to enact meaningful legislation. To Sen. Schumer’s point, the Executive Order primarily sets the policy direction for federal agencies and signals the administration’s broader approach to AI. While it could affect private firms through subsequent legislation or regulations, it doesn’t obligate them to take specific actions. Here is what President Biden’s order seeks to do and where the pushback could come from.
Pillar 1: Safety and Security
The emphasis on safety and security reflects a proactive approach to mitigating risks associated with AI. By advocating for robust evaluations of AI systems, it hints at fostering a culture of diligence among developers and users, thereby potentially reducing the chances of misuse or unintended consequences. Critics might argue that the call for robust evaluations could lead to onerous regulatory hurdles, stifling innovation and delaying the deployment of potentially beneficial AI systems.
Pillar 2: Innovation and Competition
This pillar underscores the balance between fostering innovation and ensuring a fair competitive landscape. It’s a nod to the importance of nurturing a conducive environment for small and big players alike, which could lead to a diverse, vibrant AI ecosystem. Skeptics might see the policy as overly interventionist, possibly hindering free-market dynamics and creating a restrictive environment for AI developers.
Pillar 3: Supporting Workers
The order acknowledges the transformative impact of AI on the job market. By advocating for job training and education, it underscores a forward-thinking approach to workforce readiness, aiming to ensure that the benefits of AI extend across the societal spectrum. Critics may question the feasibility of retraining initiatives and worry about potential job displacements despite such measures.
Pillar 4: Equity and Civil Rights
This section resonates with a broader societal call to ensure that technological advancements do not exacerbate existing inequities. It’s an acknowledgment of the potential pitfalls of AI and a call to action to ensure its responsible deployment. Detractors might argue that the regulatory oversight could be overreaching, potentially stifling innovation in the guise of preventing discrimination.
Pillar 5: Consumer Interests
Protecting consumer interests is critical in maintaining trust as AI becomes ubiquitous. This pillar suggests a vigilant approach to consumer protection, aligning with a long-standing policy tradition of safeguarding consumers from potential harms of new technologies. Skeptics might view this pillar as a precursor to excessive regulation, which could hinder rapid technological advancements and market dynamics.
Pillar 6: Privacy and Civil Liberties
As AI technologies grow, so does the potential for privacy infringements. This pillar underscores the paramount importance of protecting individuals’ privacy, which is fundamental in ensuring public trust in AI systems. Critics might argue that stringent privacy regulations could impede the development of beneficial AI applications, especially in data-driven domains.
Pillar 7: Federal Government’s Use of AI
This section portrays a vision of harnessing AI to enhance governmental operations. It signals an intent to modernize federal agencies, which could lead to improved public services and policy implementation. Detractors may question the efficiency and effectiveness of government-led AI initiatives, citing bureaucratic hurdles.
Pillar 8: Global Leadership
The global leadership pillar underscores the importance of international collaboration in the AI domain. It reflects an understanding that AI is a global phenomenon and that responsible AI governance is a shared global responsibility, aligning with a broader narrative of global cooperation in the digital age. Critics might see this as an overextension of U.S. influence, potentially leading to geopolitical tensions in the global digital arena.
Editor’s note: Aric Mitchell is an AI consultant and communications strategist, and produces the AI-focused newsletter, Innovation Dispatch. The opinions expressed are those of the author.