As UK looks to ban “nudify” apps, what does Indian law say about AI-generated deepfakes?
The proliferation of AI-powered “nudify” apps on the internet has prompted legislative action in places like New Jersey in the United States.

A growing misuse of generative AI, often referred to as “nudification” or “de-clothing,” allows users to digitally remove clothing from real photos and generate hyper-realistic deepfake nude images. Although entirely fabricated, these non-consensual images can cause serious real-world harm, including harassment, emotional distress, and long-term reputational damage.
In response, the United Kingdom is preparing to crack down on so-called nudify apps as part of its broader strategy to reduce online violence against women and girls by 50 per cent.

On Thursday, December 18, the British government proposed new laws that would make it illegal to develop, distribute, or supply AI-powered tools specifically designed to remove clothing from images. The ban would apply to both mobile applications and websites offering nudification features.
The move comes amid the rapid spread of AI-powered nudify tools online. Reports suggest that students often discover these apps through advertisements on platforms like Instagram and other social media networks.
While some jurisdictions, such as New Jersey in the United States, have introduced measures to address AI-generated explicit imagery, critics argue that existing protections often fail to go far enough.
As governments step up enforcement, digital rights advocates have raised concerns about potential overreach. They warn that systems designed to detect and remove sexually explicit deepfakes could also be misused to censor legitimate content or restrict freedom of expression.
Despite these concerns, UK officials maintain that the harms caused by nudification apps outweigh the risks of potential overreach.
“Women and girls deserve to be safe online as well as offline. We will not stand by while technology is weaponised to abuse, humiliate and exploit them through the creation of non-consensual sexually explicit deepfakes,” said Liz Kendall, the UK Technology Secretary.
Child protection groups have welcomed the proposed ban. Kerry Smith, Chief Executive of the Internet Watch Foundation (IWF), said nudification apps have no legitimate purpose and pose a serious threat to children.
The IWF reported that more than 19 per cent of cases received by its confidential helpline for under-18s involved manipulated or AI-generated explicit imagery. According to the organisation, such content often ends up circulating in some of the darkest corners of the internet.
Earlier this year, Rachel de Souza, the Children’s Commissioner for England, also called for a total ban on nudification apps, arguing that if creating such images is illegal, the technology enabling them should be illegal as well.

Under the UK’s Online Safety Act, it is already a criminal offence to create explicit images of someone without their consent. The proposed laws would expand on existing regulations by explicitly banning nudify tools themselves, rather than only punishing misuse.
The government is also collaborating with safety technology firms such as SafeToNet, which has developed AI systems to detect and block sexual content and even disable cameras when explicit material is being captured.
Major tech platforms are taking parallel steps. Companies like Meta have introduced filters to detect and flag nudity, particularly to prevent children from creating or sharing intimate images.
In June, Meta filed a lawsuit against Joy Timeline HK Limited, the developer behind the CrushAI app. The company alleged that the Hong Kong-based firm operated several nudify apps and promoted them through ads on Facebook and Instagram.
This legal action highlights growing pressure on both app developers and platforms that allow such tools to be advertised or distributed.
India has also begun tightening regulations around AI-generated content. In October, the government proposed draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
Under the proposed changes, platforms would be required to clearly label AI-generated content and act on takedown requests within stricter timelines. Failure to comply could result in platforms losing their legal immunity for third-party content.
The UK’s proposed ban on nudify apps represents a significant escalation in efforts to combat AI-enabled sexual abuse. As generative AI tools become more powerful and accessible, governments around the world are grappling with how to protect individuals, especially women and children, without undermining digital rights.
Whether these measures will be enough to curb the spread of non-consensual deepfake imagery remains to be seen, but the direction is clear: AI tools that exist solely to exploit and harm are increasingly facing zero tolerance.
What are “nudify” apps?
Nudify apps use AI to create fake nude or sexually explicit images of people without their consent, often using photos from social media.
Is the UK planning to ban nudify apps?
Yes. The UK is moving toward banning nudify and similar AI apps due to concerns over privacy violations, sexual abuse, and misuse of AI-generated content.
Does India have a specific law against AI-generated deepfakes?
No. India currently does not have a dedicated law that specifically defines or bans AI-generated deepfakes.
Are AI-generated deepfakes illegal in India?
Deepfakes are not illegal by default, but they become illegal if they violate existing laws related to privacy, obscenity, defamation, or impersonation.
Which Indian laws apply to deepfakes and nudify content?
Provisions under the IT Act, 2000 (Sections 66E, 67, 67A, 66D), along with criminal laws on defamation and harassment, can be used against harmful deepfake content.
Can creating AI-generated nude images be punished in India?
Yes. Creating or sharing AI-generated nude or sexually explicit images without consent can attract penalties for privacy invasion and publishing obscene content.
Are social media platforms responsible for hosting deepfakes in India?
Yes. Platforms must remove illegal content once notified or risk losing legal protection under India’s IT Rules.
Is the Indian government planning new rules for deepfakes?
Yes. The government is considering stricter regulations, including labeling of AI-generated content and faster takedown requirements.
How does India’s approach differ from the UK?
While the UK is moving toward a direct ban on nudify apps, India currently relies on existing cyber, privacy, and criminal laws, with dedicated AI rules still evolving.