
What If AI Chatbots Are Saving Lives?

Adam Omary and Jennifer Huddleston


Part 8 of the blog series “AI in Health Care: A Policy Framework for Innovation, Liability, and Patient Autonomy”

The Senate Judiciary Committee advanced Senator Josh Hawley’s Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act. The bill would require every American to verify their age before using a generative AI chatbot and would bar anyone under eighteen from using a “companion” chatbot at all. In the room during the markup were the parents of children who died by suicide after conversations with AI products. Their grief is unimaginable, and their motives are beyond reproach. But such a policy might quietly cost lives rather than save them.

The strongest claim animating this bill is the belief that restricting minors’ access to AI chatbots will prevent suicide. On the available evidence, that claim is closer to a hypothesis than a finding—and a hypothesis that runs against several decades of data on how young people die. 

According to the Centers for Disease Control and Prevention, the American suicide rate began climbing around the year 2000—before ChatGPT, smartphones, or social media existed. It accelerated through the 2010s, then, contrary to the popular narrative, plateaued and modestly declined after 2018—even as generative AI moved from research labs into the pockets of nearly every teenager in the country. If chatbots were a meaningful driver of adolescent suicide, the two curves should have moved together. They have not, and, importantly, suicide rates among young Americans remain the lowest of any age group.

While any loss of a young life to suicide is a tragedy, whatever is killing young Americans predates the technology that lawmakers now propose to ban them from using. 

What the GUARD Act’s sponsors do not seriously consider is the other side of the ledger: the cases where AI helps Americans of all ages with their mental health. Roughly half of Americans with a diagnosable mental health condition never seek professional help; stigma, cost, and fear of involuntary intervention keep them silent. For some of them—especially adolescents in households where therapy is unaffordable, unavailable, or unsafe to disclose—a chatbot is their most reliable form of emotional support.

In a survey of over 1,000 adolescents and young adults, 13 percent had used a chatbot for mental health support, and more than 90 percent of those found it helpful. In another study of over 1,000 users of Replika, a popular AI chatbot, 30 reported, unprompted, that their artificial companion had saved them from suicide.

We do not know how many lives generative AI has saved by improving access to mental health care. But for every incident of AI psychosis or suicide, there may be dozens of unobserved positive outcomes. Policy that presumes only the worst outcomes also prevents the best.

The proposal could also dissuade investment and chill speech that would make better options available. Faced with $100,000 per-violation penalties, providers will not invest in better suicide-detection models; they will more likely strip out any content that could touch on the topic, leaving a crisis-hotline referral as the only resource offered to those who are struggling. The bill would also limit the availability of information for those seeking to understand a deeply traumatic event or to help a friend who may be struggling. Clinicians have known for decades that abrupt treatment referrals made before building rapport can deepen shame and shut down disclosure. The best-supported suicide-prevention frameworks place trust-building before resource provision precisely because the order matters. A regulatory regime that punishes providers for nuance will produce less of it.

Beyond being bad policy, such laws are almost certainly unconstitutional. The underlying mandate is neither grounded in a compelling government interest nor narrowly tailored. It burdens the speech rights and anonymity of all users of online tools, not just minors, on the basis of justifications that are far from settled. The compliance regime is broad enough to capture homework helpers, customer-service chatbots, and search engines that produce conversational responses, imposing a “papers, please” approach on a broad and growing swath of the internet. To enforce it, every American adult would have to upload a government ID or submit to biometric scanning just to ask a question, complete a customer-service interaction, or practice a foreign language.

Better, more measured policy responses are available if policymakers want to support parents and teens who encounter difficulties with AI chatbots or other generative AI tools. That includes training and resourcing law enforcement to go after the bad actors who abuse the technology to create sexual content involving minors or to solicit it from them. Investment in AI literacy, of the kind Idaho recently codified for its public schools, equips young people to use these tools the way they will inevitably need to use them as adults, and it can include guidance on what to do if they encounter problems.

Far from being a problem, liability shields modeled on Section 230, paired with safe-harbor incentives for providers that invest in better mental-health detection, would reward the kind of careful development the current bill punishes. None of these would deliver the cathartic clarity of a ban, but all of them are more likely to save lives. Importantly, they also empower parents and other trusted adults, not policymakers, to determine what makes sense for kids’ and teens’ AI use.

The bill’s sponsors are not acting in bad faith. The cases motivating them are real, and the impulse to protect the vulnerable is one of the more honorable features of our political instincts. But the pattern is familiar from earlier moral panics over comic books, rock music, and video games. Each was sincerely felt. Each rested on weak social science amplified by strong public emotion. Each produced a policy that aged poorly.

The GUARD Act asks us to trade a measurable loss of liberty and privacy for an unmeasured, and possibly negative, impact on safety. The forces that drive people toward suicide—isolation, family conflict, untreated illness, loss of meaning—operate on timescales and through mechanisms that no technology policy will address. To pretend otherwise is to offer grieving families a consolation that policy cannot honestly deliver while quietly closing a door through which other young people, less visible to us, are still walking toward help. 

To read other parts of this blog series, go here.
