
Texas Considers AI Governance Act


Texas is joining a growing number of states considering comprehensive laws regulating the use of artificial intelligence (AI). In particular, the Texas Legislature is scheduled to consider the “Texas Responsible AI Governance Act,” which seeks to regulate the development and deployment of artificial intelligence systems in Texas. The Act could serve as a model for other states and could prove tremendously impactful.


The bulk of the Act is focused on “high-risk artificial intelligence systems,” meaning artificial intelligence systems that, when deployed, make, or are otherwise a contributing factor in making, a consequential decision. The Act specifically excludes a number of systems, such as technology intended to detect decision-making patterns, anti-malware and anti-virus programs, and calculators. The Act also imposes specific obligations depending on the role of a party, including:


• A “deployer”, who is a party doing business in Texas that deploys a high-risk artificial intelligence system.

• A “developer”, who is a party doing business in Texas that develops a high-risk artificial intelligence system or who substantially or intentionally modifies such a system.


Duties of Developers


The Act requires that developers of a high-risk artificial intelligence system use reasonable care to protect consumers from known or reasonably foreseeable risks. In addition, the Act requires that developers, prior to providing a high-risk artificial intelligence system to a deployer, provide deployers with a written “High-Risk Report” which must include:


-- A description of how the high-risk artificial intelligence system should and should not be used, as well as how the system should be monitored when it is used to make (or is a substantial factor in making) a “consequential decision.”


-- A description of any known limitations of the system, the metrics used to measure performance of the system, as well as how the system performs under those metrics.


-- A description of any known or reasonably foreseeable risks of algorithmic discrimination, unlawful use or disclosure of personal data, or deceptive manipulation or coercion of human behavior that are likely to occur.


-- A description of the types of data to be used to program or train systems.


-- A summary of the data governance measures implemented for the training datasets and their collection, the measures used to examine the suitability of the data sources and possible discriminatory biases, and the measures to be taken to mitigate those risks.


Prior to the deployment of a high-risk artificial intelligence system, developers are required to adopt and implement a formal risk identification and management policy that must satisfy a number of prescribed standards. Further, developers are required to maintain detailed records of any generative artificial intelligence training datasets used to develop a generative artificial intelligence system or service.


Specific Prohibited Activities


-- Manipulating Human Behavior – The Act prohibits use of an artificial intelligence system that uses subliminal or deceptive techniques with the objective or effect of materially distorting the behavior of a person or a group of persons by appreciably impairing their ability to make an informed decision.


-- Social Scoring – The Act prohibits use of an artificial intelligence system developed or deployed for the evaluation or classification of natural persons or groups of natural persons based on their social behavior or predicted personal characteristics, with the intent to determine a social score or a similar estimation/valuation.


-- Biometric Identifiers – The Act prohibits use of an artificial intelligence system which is developed or deployed with the purpose or capability of gathering or otherwise collecting biometric identifiers of individuals. In addition, the Act prohibits use of a system which infers or interprets sensitive personal attributes of a person or group of persons using biometric identifiers, except for the labeling or filtering of lawfully acquired biometric identifier data.


-- Protected Characteristics – The Act prohibits use of an artificial intelligence system that utilizes characteristics of a person based on their race, color, disability, religion, sex, national origin, age, or a special social or economic situation with the objective (or effect) of materially distorting the behavior of that person in a manner that causes or is reasonably likely to cause that person or another person significant harm.


-- Emotional Inferences – The Act prohibits use of an artificial intelligence system that infers, or is capable of inferring, the emotions of a natural person without the express consent of such person.

Source: National Law Review


Should you have any questions or concerns about the legal issues applicable to artificial intelligence (AI), please reach out to Derek Saunders, Keith Strahan, or Richard Armstrong of our firm: https://lfbrown.law/our-team




