Aligning Responsible AI With The White House Executive Order On Ethical AI


Introduction

With great fanfare and the issuance of an expansive Executive Order (EO), the White House published its recommended approach to ethical AI in government operations on October 30, 2023. The scope and scale of this initiative is stated ambitiously:

[E]xecutive departments and agencies (agencies) shall, as appropriate and consistent with applicable law, adhere to these principles, while, as feasible, taking into account the views of other agencies, industry, members of academia, civil society, labor unions, international allies and partners, and other relevant organizations . . . .

This “leave no stone unturned” approach to regulating the federal government’s position on and use of AI leaves many gaps unfilled and seeks to impose undefined responsibilities on every segment of society and business.

The EO's emphasis includes the following areas of focus and omission:

 

Ambitious Reach

The EO is extremely ambitious. It seeks to set out a whole universe of analysis, regulation, and monitoring across a large swath of the U.S. government. Various agencies are given tight timelines in which to hire the right people, consider the Order and its implementation, and then effect an entire regime to address something that is unmanageably big, complicated, and changing.

Despite its limited application to US federal government executive functions, the EO casts a global net of responsibility for creating and maintaining ethical AI. Its US-centric planning does not provide for international collaboration to align ethical AI standards. Forming multilateral frameworks could prevent fragmented governance. Additional analysis could discuss the risks of a unilateral approach and propose mechanisms for cooperative global oversight.

 

Workforce Impact

The EO assumes that the advanced skills required to achieve its goals exist in adequate supply and are accessible to covered employers inside and outside the federal government. In reality, there is limited expertise in these areas and even fewer visionary thinkers who can grasp the big picture and suggest a full regulatory regime. Many of these workers will also be in high demand from technology companies and can command compensation packages that are not possible in the public sector. The EO even requires the loosening of immigration standards for these types of workers. This is a double-edged sword: the EO tries to protect this technology from foreign actors and foreign governments while encouraging the recruitment of foreign nationals.

Given this scarcity of workers and the global nature of the problem, perhaps there should be an international effort to create model global treaties on these topics, rather than implementing a patchwork of local regulations. 

The order directs federal agencies to make use of mechanisms like the Intergovernmental Personnel Act (IPA). This federal program allows agencies to temporarily assign personnel between the federal government and state and local governments, colleges and universities, Indian tribal governments, federally funded research and development centers, and other eligible organizations. IPA assignments allow agencies to avoid complications with hiring and compensation by essentially "borrowing" talent from other organizations for a period of time.

IPAs may help bring in outside experts. However, these programs have limitations in scale and compensation competitiveness. The government faces an uphill battle in attracting leading AI experts when competing with private sector salaries and perks. More strategic public-private partnerships, academic collaborations, and expanded visa programs could help increase the talent pool. But the order lacks robust initiatives to make the government an attractive destination for top AI minds.

 

Intellectual Property Law

The EO requires the modernization of intellectual property law to address AI. This is a welcome development; otherwise, the law would evolve in a manner directed by state and federal judges, who are not AI experts. That multitiered and poorly informed approach typically tries to squeeze transformative technologies into existing laws, a fit unsuited to the complexities of AI management and regulation.

While calling for IP law modernization, the EO does not provide specifics about how current laws are inadequate or should change to accommodate AI innovations. IP laws may need to evolve to properly protect iterative AI systems, data sets used to train AI, and output generated by AI systems. There are many open questions around IP for AI like patentability standards, fair use doctrines, and the balance between proprietary and open-source models. Without greater specificity, the EO merely invites a limited set of IP law modifications.

 

Synthetic Biology

It is encouraging to see the term “synthetic biology” referenced in the EO, which would seem to recognize the inherent link between AI and genetic manipulation technologies. Unfortunately, this issue is not significantly addressed, and the term is used only once (in connection with a report on issues to be considered).

The order defines synthetic biology but does little to address the intersection of AI and engineered biology, which could significantly impact humanity. Establishing guardrails for synthetic organisms, setting standards for data sharing and transparency, and clarifying IP approaches in this domain would be prudent. This oversight is a missed opportunity to confront a pressing ethical and humanitarian dilemma.

 

Competition and Small Business Impact

The EO attempts to foster competition and assist small businesses, but these efforts will likely have minimal impact given the massive amount of private funding available to the leading technology firms, which enjoy insurmountable advantages. The US Department of Justice has already launched legal attacks on the monopolistic control of Big Tech, yet the EO makes no reference to this primary obstacle to competition in the AI industry.

While paying lip service to preventing consolidation, the order lacks concrete steps to democratize AI development. Resources like computing power, vast data sets, and AI talent have concentrated among a few prominent tech firms. More assertive measures around open data access, platform transparency, antitrust enforcement, and grants/resources for smaller entities could help redistribute AI capabilities.

 

AI Testing and Audits

The Order requires the establishment of “appropriate guidelines (except for AI used as a component of a national security system), including appropriate procedures and processes, to enable developers of AI, especially of dual-use foundation models, to conduct AI red-teaming tests to enable deployment of safe, secure, and trustworthy systems.” The order introduces the concept of "AI red teaming" but the requirements around such testing are vague. Questions remain about the scope, frequency, and auditability of such testing. Smaller organizations may lack the resources to retain comprehensive red teams. Clearer standards, support for test infrastructure accessibility, and public monitoring mechanisms could help ensure rigorous, fair testing.

The term “AI red-teaming” is defined as “a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI. Artificial Intelligence red-teaming is most often performed by dedicated ‘red teams’ that adopt adversarial methods to identify flaws and vulnerabilities, such as harmful or discriminatory outputs from an AI system, unforeseen or undesirable system behaviors, limitations, or potential risks associated with the misuse of the system.” We should expect greater testing requirements, conducted by expert teams both in-house and outsourced. Demand for these teams will likely explode, and their scarcity will command high compensation.

 

Appropriate Guardrails for AI

The high-profile nature of the EO helps bring needed attention to shaping the attributes of ethical (Responsible) AI. In our view, Responsible AI consists of identifiable features, only some of which are addressed by the expansive but inadequate EO.

In summary, we believe that Responsible AI must include the following attributes operating interdependently and in a holistic process.

  • Data Governance Clarity

  • Privacy and Confidentiality Assurances

  • Legal and Regulatory Compliance

  • Diversity and Bias Protections

  • Societal and Environmental Consciousness

  • Transparency and Accountability

  • Humans in the Loop

The EO published on October 30, 2023, is a great start, albeit short on details and long on applicability. Guardrail Technologies applauds the effort and encourages additional refinement of the principles and processes it proposes. 


Larry Bridgesmith J.D.

Executive Director Guardrail Technologies and Associate Professor Vanderbilt Law School

Larry provides consulting and training on emerging technologies such as blockchain, smart contracts, artificial intelligence, cryptocurrency, and interoperable functionality.

LinkedIn Profile

https://www.linkedin.com/in/larrybridgesmith/