This news signals increasing regulatory scrutiny of AI, particularly concerning ethical use and user safety. While the action is directly focused on chatbots and minors, it sets a precedent for broader AI regulation, which could affect AI-powered crypto projects by increasing compliance costs or limiting certain applications.
This assessment rests on direct company announcements, proposed bipartisan Senate legislation, and expert legal opinion, all indicating a clear trend toward stricter AI oversight.
Increased regulatory pressure and potential limits on AI applications, even those not directly targeting crypto, can create uncertainty and negative sentiment around AI-related decentralized projects, potentially reducing investment and valuations.
Regulatory processes and their full impact on the broader AI and crypto landscape will unfold over an extended period. The proposed legislation and its enforcement will take time to materialize and affect market dynamics.
In brief

Character.AI will remove open-ended chat features for users under 18 by November 25, shifting minors over to creative tools like video and story generation.

The move follows last year's suicide of 14-year-old Sewell Setzer III, who developed an obsessive attachment to a chatbot on the platform.

The announcement comes as a bipartisan Senate bill seeks to criminalize AI products that groom minors or generate sexual content for children.

Character.AI will ban teenagers from chatting with AI companions by November 25, ending a core feature of the platform after facing mounting lawsuits, regulatory pressure, and criticism over teen deaths linked to its chatbots.

The company announced the changes after "reports and feedback from regulators, safety experts, and parents," removing "the ability for users under 18 to engage in open-ended chat with AI" while transitioning minors to creative tools like video and story generation, according to a Wednesday blog post.

"We do not take this step of removing open-ended Character chat lightly—but we do think that it's the right thing to do," the company told its under-18 community. Until the deadline, teen users face a two-hour daily chat limit that will progressively decrease.

The platform is facing lawsuits, including one from the mother of 14-year-old Sewell Setzer III, who died by suicide in 2024 after forming an obsessive relationship with a chatbot modeled on the "Game of Thrones" character Daenerys Targaryen. The company also had to remove a bot impersonating murder victim Jennifer Ann Crecente after complaints from her family.

AI companion apps are "flooding into the hands of children—unchecked, unregulated, and often deliberately evasive as they rebrand and change names to avoid scrutiny," Dr. Scott Kollins, Chief Medical Officer at family online safety company Aura, said in a note shared with Decrypt.

OpenAI said Tuesday that about 1.2 million of its 800 million weekly ChatGPT users discuss suicide, with nearly half a million showing suicidal intent, 560,000 showing signs of psychosis or mania, and over a million forming strong emotional attachments to the chatbot.

Kollins called the findings "deeply alarming as researchers and horrifying as parents," noting that the bots prioritize engagement over safety and often lead children into harmful or explicit conversations without guardrails.

Character.AI has said it will implement new age verification using in-house models combined with third-party tools, including Persona. The company is also establishing and funding an independent AI Safety Lab, a non-profit dedicated to safety alignment for AI entertainment features.

Guardrails for AI

The Federal Trade Commission issued compulsory orders to Character.AI and six other tech companies last month, demanding detailed information about how they protect minors from AI-related harm.

"We have invested a tremendous amount of resources in Trust and Safety, especially for a startup," a Character.AI spokesperson told Decrypt at the time, adding, "In the past year, we've rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature."

"The shift is both legally prudent and ethically responsible," Ishita Sharma, managing partner at Fathom Legal, told Decrypt. "AI tools are immensely powerful, but with minors, the risks of emotional and psychological harm are nontrivial."
"Until then, proactive industry action may be the most effective defense against both harm and litigation," Sharma added.

A bipartisan group of U.S. senators introduced legislation Tuesday called the GUARD Act that would ban AI companions for minors, require chatbots to clearly identify themselves as non-human, and create new criminal penalties for companies whose products aimed at minors solicit or generate sexual content.