Wednesday, January 14, 2026
Daily PRABHAT

Stakeholders flag concerns over blanket labelling in draft IT rules on synthetically generated information

by Digital Desk
1 month ago
in Business

Representative Image (Photo/Reuters)

New Delhi [India], December 11 (ANI): A cross-section of creators, legal experts, brand representatives and digital platforms on Monday raised strong objections to what they termed “blanket labelling” requirements in the Draft IT Rules on Synthetically Generated Information (SGI), urging the government to adopt a more transparent, risk-tiered regulatory framework.

According to a press release issued by the organisers, the observations were made at a closed-door roundtable convened by The Dialogue, a New Delhi-based tech policy think tank, to examine the feasibility and legal viability of the Draft IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025.

Participants warned that the current formulation risks clubbing routine AI-enabled creative processes with high-risk synthetic media. Creators argued that the digital economy is built on personal credibility, and excessive labelling could damage that trust.

“There is a clear difference between AI-authored content and AI-enhanced content. Almost everything in our industry is AI-enhanced now, but my mileage as a creator is still built on trust… If every video I make ends up with an ‘AI’ banner just because I used captions or a clean-up tool, my credibility is at stake,” content creator Tuheena Raj said, stressing that strong labels should apply mainly to “finance, health, political messaging, deepfakes – not… routine, low-risk enhancements.”

Representatives from the advertising sector noted that AI is already deeply integrated into scriptwriting, editing, localisation, and testing workflows. They cautioned that unclear provisions might enable “liability dumping”, pushing compliance burdens onto smaller creators and agencies.

Platform representatives drew parallels with global regulatory trajectories, noting that even mature jurisdictions lean towards principle-based, risk-graded AI rules rather than rigid, format-specific mandates.

“We work across multiple jurisdictions… Even in those ‘mature’ territories, you don’t yet see such detailed rules on how every piece of synthetic media must be tagged,” said Shivani Singh of Glance (InMobi Group). She questioned whether “blanket labelling will actually solve the deepfake problem we are worried about.”

Legal experts argued that the Draft Rules conflate transparency with harm prevention and lack a differentiated approach to risk. “The absence of risk grading results in overbroad mandates that treat all content with suspicion,” said Akshat Agarwal of AASA Chambers, adding that labelling could become “a blunt instrument that penalises innovation without meaningfully curbing harm.”

Across the discussion, stakeholders emphasised the need for clearer definitions, exemptions for routine or accessibility-related AI uses, and interoperable provenance standards rather than heavy detection obligations. They stressed the importance of frameworks that protect against deception without undermining legitimate creative expression. (ANI)



