On March 15, 2024, the Ministry of Electronics and Information Technology (MEITY) issued an advisory relating to the obligations of platforms and intermediaries while using Artificial Intelligence (AI) in their products and services (the "Advisory").


The Advisory requires all platforms and intermediaries to ensure compliance in the following respects:

Ensure lawful use of AI: Intermediaries and platforms should ensure that the use of AI models, software, or algorithms does not permit their users to host, display, upload, modify, publish, transmit, store, update, or share any unlawful content.

Prevent bias or discrimination: Intermediaries and platforms should ensure that their computer resources, by themselves or through the use of AI models, software, or algorithms, do not permit any bias or discrimination or threaten the integrity of the electoral process.

Labeling of under-tested AI: Under-tested or unreliable AI models, software, or algorithms should be made available to users in India only after appropriately labeling the possible inherent fallibility or unreliability of the output generated.

Inform users of consequences: Intermediaries and platforms should inform their users, through the terms of service and user agreements, about the consequences of dealing with unlawful information.

Labeling of synthetic information: Where any intermediary permits or facilitates the synthetic creation, generation, or modification of information, it is advised that such information be labeled or embedded with a permanent unique metadata or identifier.

Non-compliance consequences: Non-compliance with the provisions of the Information Technology Act, 2000 (IT Act 2000) and/or the Information Technology Rules, 2021 (IT Rules) could result in consequences including, but not limited to, prosecution under the IT Act 2000 and other criminal laws.

Immediate compliance: All intermediaries are required to ensure compliance with the above with immediate effect.


This Advisory supersedes MEITY's earlier advisory issued on March 1, 2024, which faced criticism chiefly on account of attempted over-regulation of the AI space. The revised Advisory makes significant changes from its predecessor. The key changes include:

Expanded scope of unlawful content: The Advisory now covers content deemed unlawful under applicable laws beyond the IT Rules.

Permission for AI models: The requirement of prior government permission has been removed and replaced with mere labeling requirements.

Informing users of unreliability: This can be done through pop-ups or other equivalent mechanisms.

Identification requirement for changes to metadata: Changes made by a user to metadata should make it possible to identify the user or computer resource used to make such changes.


The two advisories, issued within a span of two weeks, generated considerable debate. The backlash from several key stakeholders, particularly over the prior-government-permission requirement in the first advisory, forced the government first to issue clarifications carving out start-ups and, eventually, to issue a revised advisory removing some of the most contentious aspects.


Even in its updated form, the Advisory may not be legally enforceable in itself. However, the language used in it, and subsequent comments by government officials, suggest that the government expects compliance with the Advisory to be compulsory. The extent to which it applies to various companies and their products or services, and their capacity to enforce or comply with it, remains unclear: several of its terms are undefined, and its scope extends beyond what the IT Act 2000 and the rules framed under it intended, particularly with respect to the due diligence obligations.

Authors & Contributors

Partner(s):

Akshay Jain


Principal Associate(s):

Gangesh Varma


Associate(s):

Yaqoob Alam