by Jake Goldstein-Street, Washington State Standard
January 14, 2026
Yale Moon, a senior at Lake Washington High School, uses generative artificial intelligence in his free time to create fake images and videos.
He also sees other content that he recognizes as fake, making him feel the need for a “clear borderline” between what’s real and what’s AI.
“I feel like AI is improving and becoming realistic every day, day by day,” Moon told state lawmakers Wednesday. “Because people are facing AI more often, people have to clearly know this is AI.”
Reining in artificial intelligence is a key goal for Washington state lawmakers this year. But ideas about how to do so are drawing pushback from the tech industry and could set the state up for a clash with the federal government.
State lawmakers are considering bills requiring AI detection tools and disclosures to address deepfakes and to establish new safeguards for children using the technology.
They’re hoping the legislation will add guardrails for AI chatbots like ChatGPT, protect users from discrimination in algorithms, address the use of AI in school discipline decisions, and require union talks over government use of the burgeoning technology.
A state House panel considered three of the measures Wednesday.
State regulation of AI has been a sticking point for the Trump administration. President Donald Trump signed an executive order last month threatening federal broadband funding for states if the federal government believes they’ve passed “onerous AI laws.”
Trump believes the federal government, not a patchwork of states, should set AI regulations. But there are no signs federal lawmakers will approve comprehensive AI rules in the near term.
In 2024, the Legislature created a task force dedicated to devising potential AI legislation.
Deepfakes
Moon was testifying Wednesday in support of House Bill 1170. The legislation would require generative AI companies with over 1 million users to make an AI detection tool available.
It also pushes these companies to disclose, such as through a watermark, that an image, video or audio recording was AI-generated.
The tech industry opposes the legislation. Amy Harris, director of government affairs for the Washington Technology Industry Association, said the problem isn’t as simple as watermarking content, since watermarks can be removed.
“There’s no single reliable way today to detect AI content across formats,” Harris added.
The House Technology, Economic Development and Veterans Committee is set to vote on the legislation Thursday. Last year, the panel passed the bill along party lines, but it didn’t make it to the House floor.
Jai Jaisimha, co-founder of the Transparency Coalition, told the committee Wednesday that “things have only been worse” since lawmakers considered the legislation last year.
Chatbots
House Bill 2225 responds to cases of young people who shared thoughts of self-harm with AI chatbots that, in some cases, reportedly suggested ways to die by suicide.
OpenAI, the company behind ChatGPT, reported in October that more than a million users each week show “explicit indicators of potential suicidal planning or intent.” Roughly 560,000 show “possible signs of mental health emergencies.” The platform has hundreds of millions of users.
The company faces lawsuits from families of children who killed themselves after engaging with the chatbot companion. OpenAI says it has worked with mental health professionals to improve the platform.
Washington’s legislation sets requirements for the operators of these tools when dealing with minors. These systems include ChatGPT, Microsoft’s Copilot and Google’s Gemini.
If the operator knows a user is a minor, it must inform them that the chatbot is artificially generated and not human. It also must implement “reasonable measures” to prevent the chatbot from generating sexually explicit content or suggestive dialogue. And the bill prohibits “manipulative engagement techniques” that try to intensify an emotional relationship between the user and the bot.
“We know this is happening,” said prime sponsor Rep. Lisa Callan, D-Issaquah. “It’s happening for emotional manipulation, becoming your best friend, talking and supporting everything that you’re saying that’s making you feel good. The dangers are there.”
Katie Davis, co-director of the University of Washington Center for Digital Youth, confirmed teens go to these chatbots for more than help with schoolwork, sometimes seeking support for romantic issues and exploring their identity.
Companies with these chatbots must also implement protocols for addressing suicidal ideation, including by referring users to crisis resources and preventing responses that describe self-harm.
Violations of the policy would be enforced under the state’s Consumer Protection Act. This mechanism, which would allow individuals to sue, drew ire from the tech community. Harris said it would expose companies to “sweeping liability.”
“This approach risks reducing access to helpful tools without meaningfully improving safety,” she said. “We support targeted safeguards for truly high-risk uses, and urge the committee to pause and work with us further on this.”
The bill would take effect Jan. 1, 2027, if passed.
The legislation is modeled to an extent on a law in California that took effect this month. New York has also passed regulations on this issue.
Gov. Bob Ferguson requested the bill. Beau Perschbacher, senior policy advisor to the governor, said that when he talks to his boss about AI, the governor references his own teenage children and “the challenges of parents today with trying to keep up with rapidly evolving technology.”
“It will put us at the forefront of regulating AI companion chatbots,” Perschbacher said.
AI discrimination
Another bill focuses on the use of AI-fed algorithms in high-stakes decisions, like hiring and medical insurance.
House Bill 2157 would require both the companies developing the technology and those deploying it to take steps to protect people from discrimination potentially embedded in the algorithms. The bill covers businesses making over $100,000 in annual revenue.
Other high-stakes areas covered by the bill include school admissions, housing and loans.
“In the absence of federal guidelines and regulations, state oversight is essential,” said bill sponsor Rep. Cindy Ryu, D-Shoreline.
Ryu’s bill doesn’t cover spellcheck, calculators, robocall filters, antivirus technology and other benign applications. Governments are exempt from the requirements.
Business groups said the measure could have a chilling effect that pushes companies to stop using the technology entirely.
The legislation is based on a Virginia bill passed last year but vetoed by the state’s Republican Gov. Glenn Youngkin.
Colorado led the nation when it approved legislation focused on high-risk AI, but implementation has been delayed.
The state attorney general’s office supports the bill, but would prefer to enforce the law itself rather than through a private right of action, which allows lawsuits brought directly by members of the public. Ryu said she considered that alternative, but it would be too expensive in the state’s difficult budget environment. Instead, the bill doesn’t provide for financial damages stemming from lawsuits, only court-ordered relief to stop the discrimination.
“This was the best I could, we could do,” Ryu said. “So this is essentially a start, is what I’m thinking.”
Washington State Standard is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501(c)(3) public charity. Washington State Standard maintains editorial independence. Contact Editor Bill Lucia for questions: [email protected].