OpenAI has launched new parental controls for ChatGPT, aimed at providing a safer experience for teen users. The announcement came on Monday, following a lawsuit filed by a California family alleging the chatbot encouraged their son's suicide. The new controls enable parents to set specific usage hours, disable voice mode, and prevent chat history from being saved. Parents can also opt out of having their teen's data used to train OpenAI's models.
In a blog post, OpenAI said the controls allow parents to link their accounts with their teen's account and customize settings for a safe, age-appropriate experience. Parents will receive notifications if ChatGPT detects potential signs of self-harm in a teen's conversations; a specialist team will review these cases and contact parents by email, text, or mobile alert. OpenAI is also working on ways to contact law enforcement if a parent cannot be reached during an imminent threat.
These changes come amid increased scrutiny of chatbot safety for teens. In August, the parents of 16-year-old Adam Raine filed a lawsuit against OpenAI, alleging that ChatGPT gave their son information about suicide methods. ChatGPT's safeguards were in place, but Adam reportedly bypassed them by saying the details were for a story he was writing. OpenAI CEO Sam Altman has emphasized that the company prioritizes safety over privacy and freedom for teens.
California lawmakers have passed two AI chatbot safety bills, and Governor Gavin Newsom has until mid-October to sign or veto them. Advocacy groups have called for legal accountability, arguing that OpenAI's recent changes are a step forward but not enough. "We don’t need more empty promises; we need accountability that is enshrined into law," said Adam Billen, vice president of public policy at Encode AI.