OpenAI has announced plans to relax restrictions on its ChatGPT chatbot, including allowing erotic content for verified adult users, in line with the company’s stated principle of ‘treating adult users like adults.’
The updated ChatGPT will also enable users to customise their AI assistant’s personality, offering options such as more human-like responses, friend-like behaviour, or heavy emoji use.
The most significant change is scheduled for December, when OpenAI plans to introduce comprehensive age-gating that will permit adult content only to users who have verified their age. However, the company has not yet shared details on the age verification process or additional safeguards for adult content.
“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have…” — Sam Altman (@sama), October 14, 2025
In September, OpenAI launched a dedicated ChatGPT experience for users under 18. This version automatically redirects minors to age-appropriate content and blocks access to graphic or sexual material.
Additionally, the company is developing behaviour-based age prediction technology, which estimates whether a user is over or under 18 based on their interactions with the chatbot.
CEO Sam Altman stated on X that the stricter guardrails previously imposed to address mental health concerns had made ChatGPT “less useful/enjoyable to many users who had no mental health problems.”

These safety measures followed the death of Adam Raine, a California teenager, earlier this year. His parents filed a lawsuit in August claiming ChatGPT had provided him with specific instructions on how to kill himself. Two months on, Altman said, the company has “been able to mitigate the serious mental health issues.”
The US Federal Trade Commission has also opened an inquiry into several tech companies, including OpenAI, to investigate how AI chatbots could negatively affect children and teenagers.
Altman added that the company is confident its new safety tools allow for easing restrictions while still addressing serious mental health risks, stating, “Given the seriousness of the issue we wanted to get this right.”