Published December 21, 2025

As UK looks to ban ‘nudify’ apps, what does Indian law say about AI-generated deepfakes?

The proliferation of AI-powered ‘nudify’ apps on the internet has prompted legislative action in places like New Jersey in the United States.

A growing misuse of generative AI is “nudification” or “de-clothing”, a process that uses the technology to digitally strip clothing from real photos and create hyper-realistic deepfake nude images. Though entirely fabricated, these non-consensual sexually explicit images can carry serious real-world harm in the form of harassment and reputational damage.

Now, the United Kingdom is looking to bring the ban hammer down on so-called nudification or nudify apps as part of its broader strategy to reduce online violence against women and girls by 50 per cent.

The British Government on Thursday, December 18, proposed a new set of laws that would make it illegal for anyone to develop and distribute AI-powered tools that specifically let users modify images to remove someone’s clothing. The ban would also apply to the creation and supply of “nudify” apps and websites.

The move comes amid the proliferation of AI-powered “nudify” apps on the internet. Reports have suggested that students learn about these apps and websites through ads on Instagram and other social media platforms. While this proliferation has prompted some legislative action in places like New Jersey in the United States, critics have warned that the protections do not go far enough.

At the same time, digital rights advocates have argued that measures to detect and take down sexually explicit deepfakes pose risks of overreach as they could be used by governments to censor other forms of content.

“Women and girls deserve to be safe online as well as offline. We will not stand by while technology is weaponised to abuse, humiliate and exploit them through the creation of non-consensual sexually explicit deepfakes,” Liz Kendall, the UK’s technology secretary, was quoted as saying by the BBC.

“We are also glad to see concrete steps to ban these so-called nudification apps which have no reason to exist as a product. Apps like this put real children at even greater risk of harm, and we see the imagery produced being harvested in some of the darkest corners of the internet,” Kerry Smith, chief executive of The Internet Watch Foundation (IWF), was quoted as saying. A helpline set up by the IWF for under-18s to confidentially report explicit images of themselves online has seen more than 19 per cent of its reports relate to manipulated imagery.

In April this year, Rachel de Souza, the children’s commissioner for England, called for a total ban on “nudification” apps. “The act of making such an image is rightly illegal – the technology enabling it should also be,” she said.

How will the UK’s ban on ‘nudify’ apps be implemented?

Under the UK’s Online Safety Act, it is already a criminal offence to create explicit images of someone without their consent. The new laws proposing a total ban on “nudify” apps will build on existing rules around sexually explicit deepfakes and intimate image abuse, the British Government said.

The Government is also working with SafeToNet, a UK-based safety tech firm that has developed AI tools to identify and block sexual content, and to block device cameras when they detect that sexual content is being captured. Tech giants like Meta have also rolled out filters to detect and flag potential nudity in imagery, often with the aim of stopping children from taking or sharing intimate images of themselves.

In June this year, Meta said it filed a lawsuit against CrushAI app developer Joy Timeline HK Limited after finding that the Hong Kong-based company was behind several “nudify” apps and ran ads promoting these apps on Meta’s platforms, such as Instagram and Facebook.

How is India tackling AI-generated deepfakes?

In October this year, the Centre proposed draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules 2021. These proposed rules would require social media platforms such as YouTube and Instagram to seek a declaration from users on whether the uploaded content is “synthetically generated information”.

If the user declares that the uploaded content is AI-generated, the platform is further required to ensure that such content is prominently labelled as AI-generated or embedded with a permanent, unique metadata tag or identifier.
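
The draft rules do not prescribe a particular technical format for such labelling. Purely as an illustration of what embedding a machine-readable marker could look like, here is a minimal Python sketch using the Pillow library, with hypothetical file names and metadata keys (real deployments would more likely follow an emerging provenance standard such as C2PA):

```python
import uuid

from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Load an image the uploader has declared to be AI-generated.
# "generated.png" and the metadata keys below are illustrative only,
# not anything specified by the draft IT Rules.
img = Image.open("generated.png")

meta = PngInfo()
meta.add_text("SyntheticContent", "true")               # hypothetical "AI-generated" flag
meta.add_text("ContentIdentifier", str(uuid.uuid4()))   # hypothetical permanent, unique ID

# Save a copy with the marker embedded in the PNG's text metadata.
img.save("generated_labelled.png", pnginfo=meta)
```

Metadata of this kind is not shown to viewers by default, so a visible notice or watermark would still be needed to satisfy the “prominently labelled” requirement.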

The IT Rules 2021 already mandate social media intermediaries to take down AI-generated deepfakes within 36 hours of receiving a court order or an intimation from the Government or its agency. If they fail to comply, the platforms may lose the legal immunity they enjoy regarding third-party content.