How to prepare for evolving global AI legislation

The global AI regulatory landscape

Over the past year, there have been significant AI-related legislative and regulatory developments across the world.


Europe

Europe has been one of the first regions to push for AI-focused regulation, and its proposals have garnered much attention. The EU has taken an approach similar to its data protection regime, proposing comprehensive, cross-sector legislation in the form of the EU AI Act. While the final text is still under consideration by the EU’s legislative bodies following a political agreement on December 8, 2023, the AI Act takes a prescriptive, risk-based approach, establishing four risk categories and accompanying obligations for the deployment and use of AI systems. At the heart of the AI Act is the classification of high-risk systems, which include AI systems in critical areas such as infrastructure, biometrics, education, employment, access to essential services, and the administration of justice. Recent developments in the EU legislative process also include obligations for providers of foundation models to mitigate potential risks to safety, health, democracy, and fundamental rights. These obligations center on transparent governance and include several familiar requirements, such as robust data governance, accuracy, ongoing risk management, and sufficient privacy and cybersecurity policies.

For its part, the UK government has eschewed a comprehensive regulatory approach, opting instead for a context-specific, principles-based framework. The principles underpinning the UK framework for the responsible development and use of AI include:

  • Safety, security, and robustness;
  • Appropriate transparency and explainability;
  • Fairness;
  • Accountability and governance; and
  • Contestability and redress.

Asia

AI frameworks are emerging in China, Japan, and other countries in Asia.

China has already moved ahead to implement specific rules for generative AI technology. China’s Interim Administrative Measures for Generative Artificial Intelligence Services took effect on August 15, 2023. Among other obligations, these measures require certain generative AI services to complete a security assessment and algorithm record-filing, require providers to make efforts to improve the authenticity, accuracy, objectivity, reliability, and diversity of training data, and require oversight of content generated by their services.

Other Asian countries have sought to adopt recommendations addressing ethical, legal, and societal issues related to AI. For example, Japan published the Social Principles of Human-Centric AI, which call for the creation of human-centric AI systems, the promotion of education and literacy, the protection of privacy and security, and the promotion of fairness, accountability, and transparency. Singapore has also taken a proactive approach to promoting the responsible development and deployment of AI through a variety of efforts, including the Model AI Governance Framework and the Proposed Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems.

Latin America

Many Latin American countries have begun implementing frameworks and policies addressing AI-specific issues. For example, the Brazilian government has proposed a comprehensive AI bill, similar to the EU AI Act, that takes a risk-based approach, categorizing AI systems into varying levels of risk. The proposed law seeks to establish national norms for the ethical and responsible use of AI systems, including rules protecting human rights and prohibiting discrimination, and it emphasizes accountability, transparency, and individual rights. The Chilean parliament has proposed an AI bill aimed at establishing criteria for high-risk AI systems and safeguarding citizens’ fundamental rights; it would establish an authorization process and impose financial and custodial penalties for noncompliance. Authorities in Peru adopted an AI law outlining a set of principles, including the adoption of risk-based security standards and the protection of individuals’ privacy. Other Latin American countries, such as Argentina, Colombia, and Mexico, have taken a similar principles-based approach to AI regulation, focusing on the development of ethical and trustworthy AI systems for use in government, industry, and academia.

Trends in the U.S.

As in many other countries, the U.S. has not yet passed comprehensive AI legislation, despite broad consensus at both the federal and state level that legislative action on AI is needed. The lack of a federal AI law does not mean, however, that the regulation of AI is far off. To the contrary, there has been a flurry of policy activity at both the federal and state levels in the U.S.

Federal activity

On October 30, 2023, the Biden Administration issued an Executive Order (“EO”) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The EO builds on previous executive actions related to AI as well as the Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework. The EO directs a wide array of stakeholders to take actions to support the responsible development and deployment of AI in a variety of contexts. Guiding principles directing various federal agencies to set new AI standards include safeguarding Americans’ privacy, advancing equity and civil rights, and developing AI systems in a safe and secure manner.

Prior to this EO, several of the agencies it explicitly calls out had issued similar calls for potential regulation. For example, the U.S. Department of Health & Human Services Office of the National Coordinator for Health Information Technology proposed rules earlier this year outlining new requirements for certain AI technologies in the health sector. The Federal Trade Commission has released several guidance documents in recent years aimed at promoting fair competition in the AI marketplace and protecting consumers from deceptive practices arising from the use of new AI technologies. Similar calls for action came last year from other agencies, including the U.S. Department of Justice and the U.S. Equal Employment Opportunity Commission.

State activity

The proliferation of new and forthcoming federal agency guidance on AI has been met with a similar wave of activity at the U.S. state level. Indeed, in the absence of a comprehensive federal AI law, many states have already passed laws that affect the use of AI, while several more have proposed more sweeping legislation. While the state AI regulatory field is rapidly developing, several key themes have emerged.

A focus on consumer rights

Under several existing state consumer privacy acts (including in California, Colorado, Connecticut, Delaware, Indiana, Montana, Oregon, Tennessee, Texas, Utah, and Virginia) and several proposed bills (including in Massachusetts and Pennsylvania), consumers have the right to opt out of profiling in furtherance of automated decisions. In addition to these opt-out rights, most existing state consumer privacy laws require data protection assessments for processing activities that present a heightened risk of harm to consumers, including targeted advertising and profiling that presents a reasonably foreseeable risk of:

  • Unfair or deceptive treatment of, or unlawful disparate impact on, consumers;
  • Financial, physical, or reputational injury to consumers;
  • A physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers, where the intrusion would be offensive to a reasonable person; or
  • Other substantial injury to consumers.

Most recently, California released new draft regulations governing automated decisionmaking technology. The draft regulations contain expanded definitions of “automated decisionmaking technology” and “decisions that produce legal or similarly significant effects,” among other changes. Like the transparency requirements being proposed elsewhere, the draft regulations require a “pre-use notice” before a business processes personal information using automated decisionmaking technology. Such a notice would need to include:

  • An explanation of the purpose for using the automated decisionmaking technology;
  • A description of the consumer’s right to access, and to opt out of, the business’s use of the technology; and
  • Access to additional information on the logic and key parameters of the technology and whether it has been evaluated for validity, reliability, and fairness.

In addition, the revised draft regulations on risk assessments contain new triggers for processing activities involving AI and automated decisionmaking technology that necessitate a risk assessment.

A focus on information gathering, collaboration, and future AI guidelines

Supplementing state consumer privacy laws is a surge of state executive orders and resolutions calling for a review of the benefits and consequences of AI technologies, including the creation of appropriate policies and procedures for using, developing, and procuring generative AI. California Executive Order N-12-23 is one of the most prescriptive of these. Its key provisions include calling on state agencies to perform a joint risk analysis of potential threats to, and vulnerabilities of, California’s critical energy infrastructure, as well as instructing state agencies to develop guidelines for public sector procurement.

A focus on combating general and sector-specific harms

Discussions about AI have focused on the duality of benefit and harm inherent in these technologies. Many regulators and policymakers recognize AI’s extraordinary potential while simultaneously warning of its potential peril. Many states have crafted proposed legislation with that in mind, seeking to prevent and mitigate societal harms across the sectors perceived to be most vulnerable to disruption, including healthcare and employment.

General harms. Several states, including California and New York, have focused their proposed legislation on guarding against general harms of AI. These states generally express concern about the use of automated decisionmaking tools in critical sectors such as education, housing, healthcare, financial services, voting, insurance, and the criminal justice system, but rather than impose specific prohibitions on use or bestow further protections, they tend to require impact assessments for the use of automated decisionmaking technologies.

Healthcare. Given the increased attention, potential benefit, and heightened risk of the use of AI in healthcare, this is unsurprisingly an area of particular interest to legislators. To help advance responsible AI innovation, use, and decisionmaking in the healthcare sector, several states (including California, Illinois, and Massachusetts) have proposed legislation that would regulate the use of AI in health services. These bills focus on preventing healthcare providers from using automated decision systems to discriminate against patients, while also affording patients the right to know when an algorithm was used to diagnose them. Some would further require consent and permit the use of only pre-approved technologies that are monitored and shown to achieve accurate results.

Employment. Preventing employment harms related to hiring decisions is another common aim of state AI laws. For example, a proposed New Jersey law would require bias audits of automated employment decision tools. Like the federal EO discussed above, some states (including Illinois and Massachusetts) have focused on preventing the use of automated decisionmaking that produces discriminatory effects and on providing employees with notice whenever algorithmic decisions are made. Similar proposals exist at the city level. For example, the New York City Department of Consumer and Worker Protection adopted rules regulating automated employment decision tools, including prohibiting employers from using such tools unless a bias audit has been completed within one year of the tool’s use.
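
At the core of such bias-audit requirements is a comparison of how an automated tool’s outcomes are distributed across demographic groups. The sketch below is a minimal, illustrative Python example of that kind of disparate-impact calculation, using a hypothetical data shape and the informal “four-fifths” benchmark as a flagging threshold; it is not the methodology prescribed by any particular law or rule.

    from collections import defaultdict

    def impact_ratios(outcomes):
        """Compute each group's impact ratio from (group, selected) pairs.

        `outcomes` is a hypothetical data shape used for illustration: one
        tuple per candidate, with `selected` True if the automated tool
        advanced the candidate.
        """
        totals = defaultdict(int)
        selected = defaultdict(int)
        for group, was_selected in outcomes:
            totals[group] += 1
            if was_selected:
                selected[group] += 1

        # Selection rate per group: share of candidates the tool advanced.
        rates = {g: selected[g] / totals[g] for g in totals}

        # Impact ratio: each group's rate relative to the highest rate.
        best = max(rates.values())
        return {g: rate / best for g, rate in rates.items()}

    # Flag any group whose impact ratio falls below 0.8, the informal
    # "four-fifths" benchmark some auditors use as a screening threshold.
    data = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
    for group, ratio in impact_ratios(data).items():
        print(group, round(ratio, 2), "flag" if ratio < 0.8 else "ok")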

A focus on ensuring the responsible use of AI systems

State AI laws being developed (including in Connecticut, the District of Columbia, and Massachusetts) have focused on building and maintaining trust in AI, much like many AI laws proposed outside the U.S. These proposals build on the state consumer privacy laws and the harm-prevention laws discussed above, all of which focus on the responsible development of AI technologies. Common guiding principles within these state laws include:

  • Preventing algorithmic bias and promoting fairness;
  • Reducing the opacity of AI systems by increasing transparency;
  • Ensuring proper privacy and cybersecurity measures and practices;
  • Enhancing safety, reliability, and technical robustness;
  • Strengthening AI accountability;
  • Maintaining human-centricity;
  • Enhancing social and environmental well-being; and
  • Strengthening awareness and AI literacy.

These align with the general themes and principles highlighted by policymakers globally.

Looking ahead

There will continue to be activity around the world on the development of AI policy and regulation. As with the global evolution of data protection laws, AI regulation and enforcement may progress through a mix of omnibus, sectoral, and regional requirements. Legislation is likely to continue the trends noted above, focusing on promoting the accountability, accuracy, and transparency of AI systems for the protection of individuals. Mitigating or eliminating potential harms, including discrimination-based harms, is likely to underpin most bills.

Preparing for today and tomorrow

Despite the ongoing evolution of AI regulation, companies do not have to wait to take a proactive approach to future-proofing their compliance programs for sound AI governance. Effective AI governance would include core elements such as:

  • Adequate oversight of, and coordination among, teams within the organization;
  • Responsible AI by design, incorporating measures to help prevent bias, discrimination, and misuse;
  • Demonstrated accountability through proper documentation of AI systems (a minimal sketch of such a record follows this list);
  • Implementation of quality assurance through human monitoring across the entire AI system lifecycle;
  • Continued transparency to users, particularly when using automated decisionmaking technologies, and mechanisms for honoring rights requests;
  • Identification and maintenance of appropriate controls for safety, accuracy, and reliability that account for sector and service specific risks; and
  • Deployment of an AI compliance program with proper risk assessments, privacy measures, and cybersecurity procedures.
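
To make the documentation element concrete, the sketch below shows one minimal way an organization might record an AI system in a governance inventory, written in Python. The field names are illustrative assumptions rather than terms drawn from any particular law; a real program would align each record with the documentation obligations of the applicable frameworks.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class AISystemRecord:
        """One hypothetical inventory entry for an AI system."""
        name: str
        owner: str                      # accountable team or individual
        purpose: str                    # intended use of the system
        risk_level: str                 # internal tiering, e.g. "low"/"high"
        training_data_sources: list[str] = field(default_factory=list)
        last_risk_assessment: date | None = None
        human_oversight: str = ""       # how humans monitor or can override
        user_notice: str = ""           # how use is disclosed to users

    # Illustrative entry for a hypothetical hiring tool.
    record = AISystemRecord(
        name="resume-screener",
        owner="HR Analytics",
        purpose="Rank inbound job applications",
        risk_level="high",  # employment uses are high risk in many regimes
        training_data_sources=["historical hiring outcomes"],
        last_risk_assessment=date(2023, 11, 1),
        human_oversight="Recruiter reviews every automated rejection",
        user_notice="Application portal discloses automated screening",
    )

Keeping such records current supports several of the elements above at once: oversight, demonstrated accountability, and the risk assessments that many of the laws discussed in this article contemplate.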


Authored by Scott Loughlin, Eduardo Ustaran, Alyssa Golay, and Pat Bruny.

 
