Texas Passes Unique AI Laws

By Carl Ford July 03, 2025

Our friend Glenn Richards, partner at Dickinson Wright, shared his firm’s write-up and analysis of the new Texas AI law. It’s worth reviewing because it includes some strong safeguards and looks fairly comprehensive overall. In my opinion, while it may seem limited in scope to medical AIoT devices, other solutions that handle sensitive information are likely to fall within its reach as well.

Here is the write-up from Dickinson Wright:

On June 22, 2025, Governor Abbott signed the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which will take effect January 1, 2026. Any business or government agency working with AI in Texas should take note that TRAIGA is not a copy-paste of other states’ laws; rather, it specifically targets intentional misuse of AI, not just “high-risk” AI.

Unlike broader “high-risk AI” frameworks emerging in other states, TRAIGA puts intent at the center of its rules, with an emphasis on preventing deliberate misuse. It also makes meaningful changes to Texas’s privacy statutes to address AI-specific issues, particularly around biometric data and transparency obligations.

Who Must Comply?

Prohibited Conduct: Intent Is Key

TRAIGA targets deliberate misuse of AI systems, prohibiting private entities from developing or deploying AI systems with the intent to cause the specific harms enumerated in the Act.

Of particular note, accidental or unintentional impacts alone are not sufficient to trigger a violation. The Attorney General must show a purposeful intent to discriminate or cause harm.

Notice and Disclosure Requirements: No More “Black Box” Interactions

One of TRAIGA’s central compliance demands is transparency in government use of AI. Government agencies must provide clear, plain-language notice whenever an individual is interacting with an AI system rather than a human, “regardless of whether it would be obvious to a reasonable person that the person is interacting with an [AI] system.” Any such notice must be clear and conspicuous.

The law signals a move toward radical transparency, and government agencies should begin reviewing all user-facing AI touchpoints to ensure compliance. Staff training will be essential, as will regular audits of disclosures and interface design.
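To make the notice requirement concrete, here is a minimal sketch of how a government-facing chatbot could surface a plain-language AI disclosure before any AI-generated reply. It is written in Python purely for illustration; the function names, message structure, and disclosure wording are assumptions of this column, not language prescribed by TRAIGA or by Dickinson Wright’s analysis.

```python
# Minimal sketch: surfacing a plain-language AI disclosure in a government
# chatbot. All names and wording are illustrative assumptions, not text
# prescribed by TRAIGA.

AI_DISCLOSURE = (
    "You are interacting with an automated artificial intelligence "
    "system, not a human representative."
)

def start_session(user_id: str) -> dict:
    """Open a session with the AI notice recorded before any AI output."""
    return {
        "user_id": user_id,
        "messages": [{"role": "notice", "text": AI_DISCLOSURE}],
    }

def add_ai_reply(session: dict, reply_text: str) -> None:
    """Append an AI-generated reply only after the notice has been shown."""
    if not any(m["role"] == "notice" for m in session["messages"]):
        raise RuntimeError("AI disclosure must be displayed before AI output")
    session["messages"].append({"role": "assistant", "text": reply_text})

if __name__ == "__main__":
    session = start_session("resident-001")
    add_ai_reply(session, "Here is the status of your permit application.")
    for message in session["messages"]:
        print(f"[{message['role']}] {message['text']}")
```

The point of the guard in add_ai_reply is that the disclosure is not an afterthought bolted onto the interface; it is enforced before any AI output reaches the user, which is exactly the kind of design choice an audit of user-facing touchpoints would look for.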

Biometric Privacy: More Stringent Rules and New Exceptions

TRAIGA updates Texas’s biometric privacy framework, tightening the rules around notice and consent for the collection and use of biometric identifiers (e.g., fingerprints, iris scans, voiceprints). The law clarifies that it does not apply to (i) general photographs or other biometric identifiers made publicly available by the individual, (ii) voice recordings required by financial institutions, (iii) information collected, used, or stored for health care treatment, payment, or operations, or (iv) biometric data used solely for training or security purposes, provided it is not used to identify individuals.

Biometric identifiers may be used to train AI systems, but if that information is subsequently used for commercial purposes, the entity that collected the data may be subject to the Act’s enforcement provisions unless it first obtains the individual’s consent.
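As a rough illustration of that distinction, the sketch below gates commercial use of a biometric identifier on recorded consent while allowing training and security uses. The data model, purpose labels, and helper names are hypothetical simplifications for this column, not an implementation of the statute.

```python
# Illustrative sketch: gating commercial use of biometric data on consent.
# The data model and purpose labels are assumptions for illustration only.

from dataclasses import dataclass, field

@dataclass
class BiometricRecord:
    subject_id: str
    identifier_type: str                  # e.g., "voiceprint" or "iris_scan"
    consented_uses: set = field(default_factory=set)

def may_use(record: BiometricRecord, purpose: str) -> bool:
    """Permit training and security uses; require recorded consent for commercial use."""
    if purpose in {"training", "security"}:
        return True                       # permitted provided individuals are not identified
    return purpose in record.consented_uses

if __name__ == "__main__":
    record = BiometricRecord("subject-42", "voiceprint")
    print(may_use(record, "training"))    # True
    print(may_use(record, "commercial"))  # False until consent is recorded
    record.consented_uses.add("commercial")
    print(may_use(record, "commercial"))  # True
```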

Government Use Restrictions

Under TRAIGA, government agencies face express prohibitions on certain uses of AI. These restrictions create meaningful guardrails against government overreach, particularly in biometric identification efforts.

Recap: Obligations for Government Use

In summary, government agencies using AI should take note of the Act’s key requirements, particularly the notice obligations and the restrictions on biometric identification described above.

Regulatory Infrastructure: The Texas AI Council and Sandbox

Texas will launch a seven-member AI Council under the Department of Information Resources. The council will serve an advisory role, providing guidance, issuing reports, and supporting agency training, but it will not have rulemaking power.

Organizations pursuing innovative or high-impact AI projects can take advantage of Texas’s new “regulatory sandbox,” which offers a controlled environment to test real-world AI systems. The program promotes faster adoption of AI by temporarily easing regulatory requirements while providing oversight and risk management, allowing organizations to test their AI solutions without the risk of enforcement penalties.

Enforcement and Remedies

Compliance Action Items for Legal and Risk Teams

Why This Matters Now

TRAIGA is not just another AI regulation; it’s a focused, intent-driven statute with teeth, robust transparency requirements, and new obligations for biometric privacy and data processing. It also offers space for innovation via the sandbox, but the threat of federal preemption means the landscape could shift quickly. Now is the time to get your documentation, testing, and notices in order, and to make sure your compliance program is nimble.

Edited by Erik Linask

