Katie Britt presses AI companies to improve transparency, commit to timely and consistent safety disclosure

(Mark Fisher/Flickr, Steve Johnson/Unsplash, YHN)

U.S. Sen. Katie Britt (R-Montgomery) continues to sound the alarm about how Big Tech companies aren’t doing enough to protect vulnerable children online.

Britt and several of her colleagues sent letters this week that called on leading artificial intelligence (AI) companies to improve transparency around the capabilities of their models and the risks they pose to users.

In letters to OpenAI, Microsoft, Google, Anthropic, Meta, Luka, Character.AI, and xAI, the senators highlighted reports of AI chatbots encouraging dangerous behavior among children, including suicidal ideation and self-harm, and requested commitments to timely, consistent disclosures around model releases as well as long-term research into chatbots’ impacts on the emotional and psychological wellbeing of users.

“If AI companies struggle to predict and mitigate relatively well-understood risks, it raises concerns about their ability to manage more complex risks,” the senators wrote. “While we have been encouraged by the arrival of overdue safety measures for certain chatbots, these must be accompanied by improved public self-reporting so that consumers, families, educators, and policymakers can make informed decisions around appropriate use.”

Britt has been at the forefront of this issue, recently introducing measures such as the Kids Off Social Media Act and the Stop the Scroll Act. Britt also recently questioned experts and parents of sextortion victims during a Senate Judiciary Committee hearing, reiterating her push to hold social media platforms accountable.

“Public disclosure reports, such as AI model and system cards, serve as the closest equivalent to nutrition labels for AI models,” the senators continued. “While they are essential public transparency tools, today’s changed landscape calls for assessing current best practices and how they can be better responsive to user risks … Companies must continue to monitor their model performance and publicly disclose new developments as they relate to security and user safety. This information enables third-party evaluators to assess a model’s risks and supports organizations, governments, and consumers in making more informed decisions.”

The letter was also signed by Sens. James Lankford (R-Okla.), Chris Coons (D-Del.), and Brian Schatz (D-Hawaii).

Britt also discussed the dangers of AI and social media on Fox News last week.

“And I think when you look at what’s happening right now with our kids, ages 13 to 17, they have said they actually feel more negative, feel more depressed—almost 50% of them admit to that after being on social media … [T]he previous Surgeon General said kids shouldn’t be on social media until they’re 16. Now is the time to act,” she said. “But the truth is, is Big Tech has a grip on Congress, and Congress’ inaction is feckless. I do not have to ask people what it is like to raise kids right now, I am living it. And we know the harms, and it is our job to put up the proper guardrails so that these kids can flourish.”

Yaffee is a contributing writer to Yellowhammer News and hosts “The Yaffee Program” weekdays 9-11 a.m. on WVNN. You can follow him on X @Yaffee