IoT FEATURE NEWS

Texas Passes Unique AI Laws

By Carl Ford, Partner, Crossfire Media | 7/3/2025

Our friend Glenn Richards, partner at Dickinson Wright, shared his firm's write-up and analysis of the new Texas AI law. It's worth reviewing because the law includes some strong safeguards and looks fairly comprehensive overall. In my opinion, while it may appear limited in scope to medical AIoT devices, other solutions handling sensitive information are also likely to fall within its reach.

Here is the write-up from Dickinson Wright:

On June 22, 2025, Governor Abbott signed the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which will take effect January 1, 2026. Any business or government agency working with AI in Texas should take note that TRAIGA is not a copy-paste of other states’ laws; rather, it specifically targets intentional misuse of AI, not just “high-risk” AI.

Unlike broader “high-risk AI” frameworks emerging in other states, TRAIGA puts intent at the center of its rules, with an emphasis on preventing deliberate misuse. It also makes meaningful changes to Texas’s privacy statutes to address AI-specific issues, particularly around biometric data and transparency obligations.

Who Must Comply?

  • Government agencies are explicitly within scope if they use AI to interact with the public.
  • Private sector companies that develop, market, sell, or otherwise provide AI-generated content or AI services to Texas residents. This includes companies based outside the state whose AI systems affect Texas residents.

Prohibited Conduct: Intent Is Key

TRAIGA targets deliberate misuse of AI systems, prohibiting private entities from developing or deploying AI systems that intentionally:

  • Encourage or incite self-harm, violence, or illegal activity
  • Discriminate against protected classes under law
  • Generate unlawful sexual content, including AI-generated deepfakes. The Act also explicitly bans child pornography and sexually explicit chat systems that impersonate children.

Of particular note, accidental or unintentional impacts alone are not sufficient to trigger a violation. The Attorney General must show a purposeful intent to discriminate or cause harm.

Notice and Disclosure Requirements: No More “Black Box” Interactions

One of TRAIGA’s central compliance demands is transparency in government use of AI. Government agencies must provide clear, plain-language notice whenever an individual is interacting with an AI system, rather than a human, “regardless of whether it would be obvious to a reasonable person that the person is interacting with an [AI] system.” Any notice must be:

  • Conspicuous and easily understood: No legalese, no fine print, no ambiguous chatbots masquerading as people.
  • Provided at the start of the interaction: Users must be informed upfront, not after the fact or buried in a privacy policy.
  • Free of “dark patterns”: Agencies are expressly prohibited from using manipulative UX/UI techniques to obscure or downplay AI involvement.

The law signals a move toward radical transparency, and government agencies should begin reviewing all user-facing AI touchpoints to ensure compliance. Staff training will be essential, as will regular audits of disclosures and interface design.
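To make the disclosure rules concrete, here is a minimal sketch of how a chat service might enforce them: the plain-language AI notice is the first thing delivered, before any interaction can occur. All names here (`ChatSession`, `AI_DISCLOSURE`) are illustrative assumptions, not anything defined by the Act.

```python
# Hypothetical sketch: a chat session that front-loads a plain-language
# AI disclosure, in the spirit of TRAIGA's notice requirements.
# Names and behavior are illustrative, not taken from the statute.

AI_DISCLOSURE = "You are chatting with an automated AI system, not a human."

class ChatSession:
    def __init__(self):
        self.transcript = []      # ordered log of (sender, text) pairs
        self._disclosed = False

    def start(self):
        # The notice is delivered upfront, as the very first message,
        # rather than buried in a privacy policy.
        self.transcript.append(("system", AI_DISCLOSURE))
        self._disclosed = True

    def respond(self, user_text):
        # Refuse any interaction until the disclosure has been shown,
        # so the notice cannot be skipped or presented after the fact.
        if not self._disclosed:
            raise RuntimeError("AI disclosure must precede any interaction")
        self.transcript.append(("user", user_text))
        self.transcript.append(("ai", "A generated response would go here."))
```

The design choice worth noting is that the disclosure is a precondition enforced in code, not a UI afterthought, which also makes it easy to audit from the transcript.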

Biometric Privacy: More Stringent Rules and New Exceptions

TRAIGA updates Texas’s biometric privacy framework, tightening the rules around notice and consent for the collection and use of biometric identifiers (e.g., fingerprints, iris scans, voiceprints). The law clarifies that it does not apply to (i) general photographs or other biometric identifiers made publicly available by the individual, (ii) voice recordings required by financial institutions, (iii) information collected, used, or stored for health care treatment, payment, or operations, or (iv) biometric data used solely for training or security purposes, provided it is not used to identify individuals.

Although biometric identifiers may be used to train AI systems, if that information is subsequently used for commercial purposes, the entity that collected the data may be subject to the Act’s enforcement provisions, unless it first obtains consent from the individual.

Government Use Restrictions

Under TRAIGA, government agencies are expressly prohibited from:

  • Creating or applying “social scoring” algorithms that result in discriminatory or otherwise adverse treatment of individuals; and
  • Creating or using AI systems to uniquely identify individuals using biometric data or by collecting images or media from the Internet or other public sources, whether targeted or not, unless (i) the individual has given consent, and (ii) the use does not violate any rights protected by the U.S. Constitution, the Texas Constitution, or applicable state or federal law.

These protections create meaningful guardrails that prevent government overreach in biometric identification efforts.

Recap: Obligations for Government Use

In summary, government agencies using AI should take note of the following key requirements:

  • Government agencies must clearly disclose when AI is interacting with consumers; no confusing UX or manipulative design is permitted.
  • Social scoring by government agencies is banned outright, and biometric identification (fingerprints, iris scans, voiceprints) generally requires individual consent; routine photographs and voice recordings remain exempt.
  • The state’s biometric privacy law tightens notice/consent and clarifies exemptions for training or security uses not tied to identification.
  • Processors handling AI-processed personal data must assist controllers in compliance.

Regulatory Infrastructure: The Texas AI Council and Sandbox

Texas will launch a seven-member AI Council under the Department of Information Resources. The council will serve an advisory role, providing guidance, issuing reports, and supporting agency training, but it will not have rulemaking power.

Organizations pursuing innovative or high-impact AI projects can take advantage of Texas's new "regulatory sandbox," a controlled environment for testing real-world AI systems. The program promotes faster adoption of AI by temporarily easing regulatory requirements while providing oversight and risk management, so organizations can pilot their AI solutions without the risk of enforcement penalties.

Enforcement and Remedies

  • Enforcement authority sits solely with the Texas Attorney General. There is no private right of action; clients can expect a formal complaint system and investigative process, with civil investigative demands as a tool.
  • Companies get a 60-day cure period following notice of an alleged violation before penalties accrue.
  • Affirmative defenses are available for organizations that document robust internal testing, conduct adversarial “red-teaming,” or follow industry standards like the NIST AI Risk Management Framework guidelines.
  • Penalties start at $10,000 per violation, scaling up depending on severity and remediation.

Compliance Action Items for Legal and Risk Teams

  • Document Intent Meticulously: Maintain comprehensive records for each AI system’s intended purpose, especially for use cases that could be construed as manipulative, discriminatory, or otherwise high-risk.
  • Audit and Train: Schedule regular adversarial and red-team tests, and keep detailed logs/audit trails (NIST AI RMF alignment is a best practice). Train staff on both technical testing and user-facing transparency.
  • Privacy Overhaul: Review and update data-use policies, privacy notices, and vendor agreements, with special attention to biometric exemptions and new processor obligations.
  • User Transparency: Map all AI touchpoints where users could interact with automated systems. Draft and test clear disclosures, and run usability audits to check for accidental “dark patterns.”
  • Prepare for the Sandbox: If you’re eyeing innovative or high-stakes AI deployments, consider the regulatory sandbox as a way to pilot systems with regulatory oversight but reduced risk.
  • Monitor the Federal Picture: Assign someone to track federal legislative developments, including potential budget riders that could preempt or override state laws. For example, Senator Ted Cruz recently proposed a federal budget measure that would prohibit states from enforcing their own AI regulations for ten years as a condition for accessing a proposed $500M federal AI deployment fund. If enacted, such a measure could effectively suspend enforcement of TRAIGA, so compliance teams should remain agile and closely monitor federal activity that may affect state-level obligations.
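The first two action items above, documenting each system's intended purpose and keeping red-team test logs, can be sketched as a simple internal register. This is a hypothetical illustration of NIST AI RMF-style record-keeping, assuming an append-only audit store; every field and name here (`AISystemRecord`, `log_test`) is invented for the example, not prescribed by TRAIGA.

```python
# Hypothetical sketch of an internal AI-system register entry that captures
# intended purpose and adversarial-test history, the kind of documentation
# that could support an affirmative defense. Field names are illustrative.

import json
from dataclasses import dataclass, field, asdict

@dataclass
class AISystemRecord:
    name: str
    intended_purpose: str                 # documented intent is central under TRAIGA
    deployer: str
    red_team_tests: list = field(default_factory=list)  # dated test summaries
    disclosures_reviewed: bool = False

    def log_test(self, date, summary):
        # Append a red-team test entry to the audit trail.
        self.red_team_tests.append({"date": date, "summary": summary})

    def to_json(self):
        # Serialize for durable, reviewable storage.
        return json.dumps(asdict(self), sort_keys=True)

record = AISystemRecord(
    name="benefits-eligibility-chatbot",
    intended_purpose="Answer FAQs about benefit applications; no scoring.",
    deployer="Example State Agency",
)
record.log_test("2025-11-01", "Adversarial prompts probing for discriminatory output")
```

Keeping this kind of record per system makes the "document intent meticulously" step routine rather than a scramble once an investigation begins.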

Why This Matters Now

TRAIGA is not just another AI regulation; it’s a focused, intent-driven statute with teeth, robust transparency requirements, and new obligations for biometric privacy and data processing. It also offers space for innovation via the sandbox, but the threat of federal preemption means the landscape could shift quickly. Now is the time to get your documentation, testing, and notices in order, and to make sure your compliance program is nimble.




Edited by Erik Linask

