Sydney: Multiple websites that used artificial intelligence to generate child sexual exploitation material have stopped serving internet users in Australia, the country’s internet regulator announced.
According to eSafety Commissioner Julie Inman Grant, three AI-powered ‘nudify’ websites, which digitally strip clothing from images of real people, have now withdrawn from Australia after receiving an official warning from her office.
The commissioner said these sites had been attracting around 100,000 visits a month from Australians and had featured in high-profile cases in which AI-generated child sexual abuse images of Australian school students circulated. Inman Grant described the impact of these platforms on school communities as ‘devastating.’
“We took enforcement action in September because this provider failed to put in safeguards to prevent its services from being used to create child sexual exploitation material and was even marketing features like undressing ‘any girl,’ with options for ‘schoolgirl’ image generation and features such as ‘sex mode,’” Inman Grant said in a statement.

The withdrawal follows a formal warning issued in September to the UK-based company operating the websites, which advised that it could face civil penalties of up to 49.5 million Australian dollars ($32.2m) if it did not implement appropriate safety measures to prevent image-based abuse.
Inman Grant also said that Hugging Face, a popular hosting platform for AI models, has taken independent steps to comply with Australian regulations. The platform updated its terms of service to require developers and account holders to put measures in place to minimize the misuse of their tools, particularly in relation to harmful or exploitative content.
Australia has positioned itself at the forefront of global efforts to protect children from online harm. The country has banned social media access for individuals under 16 and is aggressively targeting apps used to create deepfake images or facilitate stalking.
The rapid spread of advanced AI tools capable of producing photorealistic deepfake material has heightened global concern about non-consensual explicit imagery. A survey by US-based advocacy group Thorn reported that 10 percent of respondents aged 13 to 20 knew someone who had a deepfake nude image made of them, while 6 percent said they had personally been victims of such abuse.

