AI Regulatory Developments: What You Need to Know, What to Do, and the Risks of Non-Compliance

Artificial Intelligence (AI) is ubiquitous today, and its rapid growth and uptake have raised many concerns for lawmakers, data regulators, and end-users alike. New regulatory frameworks, some voluntary and others legally mandated, are emerging in almost every country to make AI safe, equitable, and ethical for all.

 

AI is undeniably a boon to organizations of all kinds, enabling more efficient use of resources and faster time-to-value overall. Without adequate controls and guardrails in place, however, AI can put people and businesses at risk. Key risks include gaps in cybersecurity and inauthentic content, either of which may harm people, businesses, and organizations through dangerous or unreliable output.

 

Emerging regulations seek to establish a framework of trust and transparency so that people and companies can leverage AI to their greatest advantage without compromising their autonomy or right to privacy.[1]

The most pressing issue with AI law today is the lack of a global standard, which makes it challenging for organizations to understand their obligations and for users to understand their rights. Some types of AI carry more risk than others, and systems considered low-risk may be subject only to voluntary reporting, resulting in a landscape that is hard to assess.[2] Compounding the problem, current laws cross geographies and often conflict with one another. The penalties for non-compliance are significant, and failing to act out of ignorance can expose organizations to risk from many angles.

 

This paper summarizes emerging regulations worldwide, provides a broad view of considerations for mitigating risk while leveraging the benefits of artificial intelligence, and outlines the risks of non-compliance.

 

Emerging Regulations Around the Globe

Almost every country has laws and regulations governing technology and protecting data, and AI falls squarely into this category. Organizations using AI need to know how it is being used, how it was created, and how it operates so they can identify potential risks.

  •  Are you just trusting your tech team? Without understanding how your AI was created, how it is used, and whom or what it affects, it is impossible to assess which laws apply to it in terms of compliance and potential exposure. Further, it would be nearly impossible to respond efficiently, cost-effectively, and reliably to litigants and regulators if questioned.

  •  Do you have adequate documentation? Data used or to be used in AI must be documented to record its source, size, format, relevance, and validation. Documentation should also detail any privacy, ethical, or legal issues related to the collected data (a brief sketch follows this list).

  •  Do you know, or are you on a need-to-know basis? Understanding how you are using AI, the process surrounding its creation and maintenance, and its actual impact in operation is fundamental to running the business and driving reliable outcomes. Depending on where, what, and how you use AI, laws may also require this understanding.

  •  Do you understand your exposure if things go wrong? Understanding your risk exposure is the first step to mitigation. Risk must be assessed from all standpoints: to the company, to individuals, and to society.

  •  Is your board aware of AI use and its prevalence within your organization? Board members may be held personally responsible should the company be subject to litigation. As such, they should know how AI is used (benefits to the company and end-users), how it operates (how the AI obtains the data it needs to work), and the implications of non-compliance (which vary by AI use case).
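
To make the documentation question above concrete, here is a minimal sketch of how a team might record a single dataset used in an AI system. The field names and the record_dataset helper are hypothetical illustrations, not requirements drawn from any particular law or framework.

```python
# Illustrative sketch only: one possible shape for a dataset record supporting
# AI documentation. Field names and the helper below are hypothetical examples,
# not requirements taken from any specific regulation.
import json
from datetime import date

def record_dataset(**fields) -> str:
    """Serialize a dataset record so it can be versioned alongside the model."""
    return json.dumps(fields, default=str, indent=2)

example = record_dataset(
    name="customer_support_tickets_2023",
    source="internal CRM export",            # where the data came from
    size="1.2M records",                      # how much data
    format="CSV",                             # how it is stored
    relevance="trains the ticket-routing classifier",
    validation="sampled and reviewed by the data quality team",
    privacy_notes="PII removed prior to training; retention limited to 24 months",
    legal_review_date=date(2023, 11, 15),     # last privacy/ethics/legal review
)
print(example)
```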

 

Laws vs. Regulatory Law

When we talk about laws and regulatory frameworks, the distinction can get a bit murky. Laws can be passed at the federal, state, regional, and municipal levels. Regulators, like the FTC or the SEC, can also issue rules and enforce them with the force of law.

 

Regulators today are being very public about their concerns and expectations, specifically the need for transparency from technology creators and providers.

 

For example, in recent years a number of startups have misled the public regarding their use of AI, making users and investors think they are leveraging more sophisticated AI than actually exists.

 

A similar trend exists in the environmental marketing of products and services, called “greenwashing,” wherein a company falsely describes products and technologies as environmentally friendly. In the case of AI, the practice is known as AI-washing.

 

The SEC is starting to crack down on such claims and even company names that mislead people into thinking an organization is something other than what it is.

 

Why organizations misrepresent themselves is anybody’s guess. Their claims may be half-truths, they may want to cash in on the current interest in AI among people and organizations, or they may simply not know better. A more structured set of guidelines and definitions should reduce such incidents and may prevent companies from making costly errors in judgment.

 

By and large, laws do not yet provide specific technical guidance on how requirements are to be met, resulting in hundreds of voluntary frameworks, such as those from the National Institute of Standards and Technology (NIST) and the Organisation for Economic Co-operation and Development (OECD).

 

What are the laws, and what are the requirements?

Federal laws and regulations in the US include Biden’s AI Executive Order, released in October 2023.

 

Highlights of the executive order include:

  • Restrictions on how AI can be used.

  • Mandatory identification and detection of synthetic content.

  • Reporting guidelines for producing “dual-use” AI models that can be modified to perform tasks posing significant risk to public health, the economy, or national security.

  • Mitigating risks to individuals’ civil rights arising from bias and discrimination.

  • Protecting consumer, student, patient, and passenger rights.

  • Upholding data privacy standards per existing data privacy regulations.

 

Additionally, the order encourages AI development that strengthens the US, for example by streamlining immigration and by promoting innovation, commercialization, and competition among AI developers.

 

States and cities often have their own laws and regulations, such as New York City’s AI hiring law, which requires employers using an AI tool in recruitment to disclose that they are doing so. Candidates may also ask employers what data they collect, and fines for non-compliance run up to $1,500.

 

Regulatory Law

Regulators like the FTC, DoJ, and Securities and Exchange Commission (SEC) also enforce compliance.

 

The FTC’s Omnibus Resolution relates explicitly to Section 5 of the FTC Act, which prohibits unfair or deceptive practices, and allows for investigations into whether individuals or organizations engage in:

 

“…unfair, deceptive, anticompetitive, collusive, coercive, predatory, exploitative, or exclusionary acts or practices in or affecting commerce relating to products and services that use or are produced using artificial intelligence, including but not limited to machine-based systems that can, for a given set of defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments, or that purport to use or detect the use of artificial intelligence, in violation of [the act].”

 

The language also allows for determining whether the Commission should act to obtain “monetary relief,” a somewhat vague framework that could open the door to significant penalties should they be warranted.

 

In light of the release of the Omnibus Resolution, tech firms would do well to take a proactive approach to organizing their internal documentation concerning third-party associations, AI claims, and AI development in case they are required to disclose it.

 

Model training data records, case studies showing impact on actual use cases, peer reviews, and accurate documentation should be collected to substantiate marketing-related claims. If third-party data is used to train models or enhance certain aspects of AI, these sources must be accessible to regulators, underscoring the need for stringent vetting, contractual protections, and ongoing due diligence where external vendors are involved.

 

In one example, the FTC took decisive action against WeightWatchers (WW) for illegally collecting children’s sensitive health data through a children’s weight-loss app called Kurbo. The decision levied a $1.5 million judgment and mandated compliance and reporting obligations that remain in effect for ten years.

 

Federal regulations and guidelines

Organizations should also be aware of OMB M-22-18, a cybersecurity supply chain memorandum requiring a “software bill of materials” for software sold to the government and outlining suppliers’ accountability, notably for their third-party components. If the software includes AI, companies will be required to produce this for the AI component as well. It is also worth considering whether an AI bill of materials should be proactively created for each AI algorithm used in your organization.
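
As a sketch of what a proactively maintained AI bill of materials might look like, the example below records one hypothetical algorithm. The structure and field names are assumptions for illustration, not a format prescribed by OMB M-22-18.

```python
# Hypothetical sketch of an "AI bill of materials" entry for one algorithm.
# The fields are illustrative assumptions, not a prescribed government format.
ai_bom_entry = {
    "component": "fraud_scoring_model_v3",
    "supplier": "internal data science team",
    "model_type": "gradient-boosted classifier",
    "training_data_sources": [
        "internal transaction history (2019-2023)",
        "third-party fraud blacklist (licensed)",
    ],
    "third_party_components": ["scikit-learn 1.3", "feature store SDK"],
    "intended_use": "flag transactions for manual review",
    "known_limitations": "not validated for markets outside North America",
    "last_reviewed": "2023-12-01",
}

# A registry of such entries, one per algorithm in use, gives you something
# concrete to hand to a regulator, customer, or auditor on request.
ai_bom = [ai_bom_entry]
for entry in ai_bom:
    print(entry["component"], "->", entry["intended_use"])
```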

 

As an example of how fragile the AI ecosystem can be, the recent shakeup at OpenAI left many unanswered questions about the trustworthiness of the popular ChatGPT app. In response, US Senator Amy Klobuchar (D-MN) stressed the need for regulators to step up and establish standards for responsible and safe AI development, referencing antitrust efforts like the American Innovation and Choice Online Act and the Journalism Competition and Preservation Act as having great relevance in the age of AI.

 

EU AI Act

The fact that AI is advancing faster than regulators can respond is of great concern to the public and to companies that market and promote AI tools. Guidelines published by the UK’s National Cyber Security Centre support compliance with the EU AI Act, which is the most comprehensive and detailed regulatory blueprint issued to date. Given the global impact of the GDPR, it follows that the EU AI Act may serve as a standard on which other acts are modeled.

 

Highlights of the Act include the classification of AI into three categories:

Minimal Risk AI

This category encompasses a wide range of single-purpose and general-purpose AI, usually pre-trained. Providers are encouraged to apply the tenets of trustworthy AI and to establish and adhere to voluntary codes of conduct.

 

High-Risk AI

AI in this category could potentially infringe on public safety or on the fundamental rights defined in the EU Charter. Use cases include recruitment tools, medical devices, education and training, infrastructure management, biometric identification systems, and law enforcement.

 

Prohibited AI (unacceptable risk)

Prohibited AI includes any use case that can potentially violate an individual’s fundamental rights, including real-time biometric identification in public places, exploitation of a person’s vulnerabilities, use of subliminal tactics, predictive policing, and emotion recognition systems. There are exceptions, but the circumstances are very narrow and specific.

 

Penalties for non-compliance with the EU AI Act are tiered: up to €35M or 7% of annual turnover for prohibited-use violations, up to €15M or 3% for most other violations, and up to €7.5M or 1.5% for supplying incorrect information.
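
As a rough illustration of how that exposure scales with company size, the sketch below assumes the cap for each tier is the higher of the fixed amount and the percentage of worldwide annual turnover (the GDPR-style formula); the tier names are shorthand, not terms taken from the Act.

```python
# Illustrative sketch only: rough estimate of maximum EU AI Act exposure per tier,
# assuming the cap is the higher of the fixed amount and the percentage of
# worldwide annual turnover (figures and tiers as summarized above).

TIERS = {
    "prohibited_use": (35_000_000, 0.07),          # €35M or 7% of turnover
    "other_violation": (15_000_000, 0.03),         # €15M or 3% of turnover
    "incorrect_information": (7_500_000, 0.015),   # €7.5M or 1.5% of turnover
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Return the theoretical maximum fine for a tier, given annual turnover in euros."""
    fixed_cap, pct = TIERS[tier]
    return max(fixed_cap, pct * annual_turnover_eur)

# Example: a company with €2B in annual turnover facing a prohibited-use violation
# could in theory face up to €140M (7% of €2B), well above the €35M fixed figure.
print(f"€{max_fine('prohibited_use', 2_000_000_000):,.0f}")
```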

 

Additionally, any individual can register a complaint about non-compliance with the Act, triggering an investigation for which companies must be prepared. Fundamental rights impact assessments are also mandatory.

 

Other items of note in the EU Act and Biden’s Executive Order:

  • Compute power must be reported, which will be significant for large language models whose training requires substantial processing power.

  • All high-risk systems are subject to transparency mandates.

  • High-risk systems must be capable of effectively managing biases to prevent discrimination and protect human rights.


The Act lists several prohibited systems and allows organizations six months to comply before sanctions are imposed:

  • Biometric categorization systems based on sensitive characteristics such as political or religious beliefs, sexual orientation, or race.

  • Facial image scraping from the internet or CCTV to create facial recognition databases.

  • Social scoring based on behavior or individual characteristics.

  • AI models designed to manipulate human behavior.

  • AI models that exploit people’s vulnerabilities.

  • Emotion recognition in the workplace and educational institutions.

High-risk AI system providers must demonstrate compliance through record keeping and documentation, noting the data sets used and the programming and training methods applied, and must establish written policies for oversight. Human oversight is mandatory to ensure adequate discretion over AI behavior and deployment.

Considering the above points, many organizations will be forced to make significant shifts to their business models. Intellectual property protection must also be taken into account; since transparency is foundational to the regulations, a balance must be struck between required disclosure and revealing proprietary secrets.


Investment may also be required to support advanced data and bias management tools, which may have operational cost implications.


The human oversight, documentation, and record-keeping requirements for high-risk AI systems may demand significant organizational changes, policy restructuring, and staff retraining, undoubtedly increasing the administrative burden.


Considering the potential fines for non-compliance, failure to plan adequately may place an organization at significant financial risk.

Australian AI regulatory frameworks

Australia’s approach to AI strategy and regulation is, as of December 2023, still under debate and remains a voluntary framework with no legislative force. That said, Australia was one of 28 countries to sign the Bletchley Declaration, which acknowledges the potential for serious, even catastrophic, harm from frontier AI.

Canadian AI regulation

Canada also operates under a Voluntary Code of Conduct pending its proposed Artificial Intelligence and Data Act (AIDA). The code establishes standards that allow companies to demonstrate they are developing and using AI responsibly, but it is considered a temporary stopgap until formal legislation comes into effect.

Compliance must consider the end-customer

Though voluntary frameworks like those of Canada and Australia are a starting point for informing strategy and ethics, it should be noted that any company doing business with citizens of the EU, US, or UK, or of any other country or territory that has established AI legislation, is bound by that jurisdiction’s rules. Knowing this makes the EU Act all the more pressing.

 

Following EU and US guidelines and frameworks will serve developers and tech providers well, helping them establish baseline policies and ethical practices that will simplify compliance efforts once legislation officially takes effect.

 

Overlaps in requirements across regional frameworks

Though regional regulations differ in various ways, two common objectives they share are a) the desire to reduce harm to people and society and b) the need to facilitate development for the social and economic benefit of the people.

 

Most jurisdictions regulate based on risk, and there is no question that the EU currently has the most stringent and comprehensive approach, especially regarding cybersecurity, data privacy, and protecting intellectual property.

 

Regulators commonly use sandboxes to enable responsible testing of private-sector innovations and to identify areas in which further oversight might be warranted.

 

In many cases, sector-specific guidelines are in place to address systemic risk in specific industries, usually focusing on areas where AI may be at risk of impacting human rights or influencing decisions.

 

 

Litigation and enforcement actions

Since most legislation around AI is relatively new, you may wonder what consequences have been handed down thus far. The headlines read a bit like the Wild West in some instances.

 

In one widely reported example, a personal injury attorney used ChatGPT to prepare for his day in court, but the AI fabricated a string of fake cases, which he duly presented as precedent. Upon learning that the cases (which could not be found) were generated by AI, the judge considered sanctioning the lawyer. In response, a federal judge in the Northern District of Texas issued a standing order requiring attorneys to certify either that no portion of a filing was drafted by generative AI or that any AI-drafted portion had been verified by a human. This was one of the first documented instances of AI ‘hallucinations’ making it into a courtroom.

 

For its part, the FTC has forced companies to delete models trained on data used without permission, a remedy known as model destruction (or algorithmic disgorgement).

 

Examples include a company that built an AI model to assess loan applications for creditworthiness using data scraped from the internet.

 

Another company built a model to recognize fake news but did not have permission to use the copyrighted articles it was trained on.

 

In a third example, an HR department used the resumes of past and current employees to screen applicants but did not obtain permission from the former employees to use their data.

 

These examples underscore the need for developers to consider regulatory frameworks and how they relate to automated decision-making, algorithms, and core data.

 

But it’s not all about new laws. Wiretapping legislation dating back to the 1960s has provided the basis for multiple lawsuits against Old Navy, Home Depot, JCPenney, and General Motors over chatbots that encouraged customers to provide personally identifying information and then stored the conversations in their systems.

 

What are the regulators saying?

The SEC is cracking down on AI-washing in much the same way it is addressing the greenwashing phenomenon. Companies are prohibited from making false claims and must provide “full, fair, and truthful disclosure.”

 

The Financial Stability Oversight Council issued a warning about AI in this year’s annual financial stability report, a first for the organization. Its stance is that while AI can spur innovation and enable efficiencies, both firms and regulators should enhance their ability to identify emerging risks; increasingly large datasets and a growing reliance on third-party vendors amplify cybersecurity and data privacy risks.

 

What can you do to comply or prepare for the road ahead?


 

Do you know…

  • How prevalent is your usage of AI?

  • How was your AI created?

  • Does your AI comply with the law?

  • What is your AI’s impact on customers, employees, investors, and stakeholders?

What steps should you take:

  • Document and assess current AI usage

  • Perform regulatory mapping

  • Risk assessment

  • Targeted audits and reviews

  • Risk mitigation

      • One-time

      • Ongoing

  • Reporting

      • Regular audits

 

Compliance with applicable laws

Assess your use of data, AI, and technology relative to your use cases and the laws that govern them.
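
A simple first pass at that assessment can be a plain inventory mapping each AI use case to the regimes that plausibly apply. The sketch below is hypothetical and illustrative only; actual applicability requires legal review of your specific jurisdictions and facts.

```python
# Hypothetical sketch of a first-pass regulatory mapping. The regimes listed for
# each use case are illustrative placeholders drawn from examples in this paper,
# not legal determinations.
use_cases = [
    {
        "use_case": "AI-assisted resume screening",
        "operates_in": ["US-NY", "EU"],
        "possible_regimes": ["NYC AI hiring law", "EU AI Act (high-risk)"],
    },
    {
        "use_case": "customer service chatbot",
        "operates_in": ["US"],
        "possible_regimes": ["FTC Act Section 5", "state wiretapping statutes"],
    },
]

for uc in use_cases:
    print(f"{uc['use_case']}: review {', '.join(uc['possible_regimes'])}")
```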

 

Response to regulators and litigation

Undoubtedly, we can expect a rising tide of regulatory inquiries and litigation related to AI use. Fortunately, many of the steps you would take to manage AI appropriately in your organization will also position you to respond if needed.

 

Final Thoughts

As AI technology advances at breakneck speed, regulators must establish policies and guidelines to ensure the ethical development and use of AI models and the data that informs them. Though requirements and laws vary federally, regionally, and locally, several common factors inform legislation, namely, risk, data privacy, security, ethics, and human rights. In equal measure, lawmakers understand the value of AI to the economy and are strongly motivated to encourage development.

 

Start your generative AI journey on solid ground. Guardrail Technologies puts you in control of your technology—for the good of your business, the planet, and the people you serve.

Larry Bridgesmith J.D.

Executive Director Guardrail Technologies and Associate Professor Vanderbilt Law School

Larry brings consulting and training experience in emerging technologies such as blockchain, smart contracts, artificial intelligence, cryptocurrency, and interoperable functionality.

LinkedIn Profile

https://www.linkedin.com/in/larrybridgesmith/