Recently, the Ministry of Electronics and Information Technology (MeitY) published the final version of its “AI Governance Guidelines,” which aim to provide a framework that “balances AI innovation with accountability, and progress with safety.”
As India currently has no AI-specific laws in place, MeitY’s newly published AI Governance Guidelines can serve as a defining policy document: one that outlines the current government’s intent on how it wants to regulate AI, what legislative changes it is considering, and how it plans to approach AI policy strategically, at a time when India remains exposed to global pressures, from US tariffs to China’s swift AI advancements.
For some context, the Government of India established an advisory group in 2023 under the Principal Scientific Advisor to examine AI-related issues and regulation in India. A subcommittee on AI governance, led by IIT Madras professor Balaraman Ravindran, developed a draft report that was released in January 2025 and received over 2,500 public submissions.
Based on this feedback, MeitY formed a drafting committee in July 2025 to finalise India’s AI governance framework. Its mandate includes recommending a governance model that balances innovation and risk, building trust for sustainable AI growth, and providing practical guidelines and principles for industry, regulators, and sectoral agencies.
The following article briefly outlines the key recommendations and then expands on the Committee’s reasoning behind them.
Key AI Governance Recommendations of the Committee:
On GPUs
- The report seeks to empower the IndiaAI Mission, various ministries and state governments, as well as several sectoral regulators, to enable AI adoption by developing infrastructure and computing resources, such as allocating GPUs.
- Expanding AI infrastructure into tier-2 and tier-3 cities.
On Datasets
- Increasing access to data through platforms such as AIKosh, along with “robust data portability standards and data governance frameworks.”
- Encouraging the use of locally relevant datasets to promote culturally inclusive models and applications.
- Promoting access to reliable evaluation datasets and computing infrastructure for safety testing.
AI in DPI
- Integrating AI with existing digital public infrastructure (DPI), such as Aadhaar, UPI, and DigiLocker, to scale services and enhance inclusivity and interoperability.
AI Training for Officials and Public
- Developing the capacity of law enforcement agencies, such as ED, CBI, and police cybercrime departments, as well as prosecutors, to detect AI-related crimes.
- Similar training programs for government officials, civil servants and regulators to “encourage the responsible use of AI in the public sector.”
- Implementing training programs and publicity campaigns to enhance public trust and awareness, including awareness of the risks associated with AI.
Liabilities and Immunities
- Reviewing the current legal framework to evaluate risks and regulatory gaps.
- Making legislative amendments, if required, to copyright and data protection laws to encourage innovation.
- Redefining key terms like “intermediary” and “publisher” to clarify liability and immunity protections.
- Creating regulatory sandboxes with “reasonable legal immunities” to test AI systems and publish their risks, in a way that fosters innovation.
- Laying down principles for assigning liability and responsibility to entities based on their role and level of risk, including measures such as transparency reporting, audits, and grievance redressal mechanisms.
Safety and Risk Assessment Framework
- Creating a localised risk assessment and classification framework suited to the Indian context and its vulnerable groups.
- Establishing a voluntary AI incident reporting mechanism to document the risks posed by AI systems and alert analysts to them.
- Requiring human oversight to mitigate risks in sensitive sectors and critical infrastructure.
- Guiding the deployment of AI systems that are transparent, fair, open, non-discriminatory, explainable, and secure by design.
- Establishing a grievance redressal mechanism to report AI-related harms and ensure their resolution within a reasonable timeframe.
Groups and Committees to be Formed
- AI Governance Group (AIGG): to be chaired by the Principal Scientific Advisor and supported by the TPEC. The Guidelines suggest that the AIGG study the issues of content authentication and moderation, and develop and oversee India’s position and strategy on AI governance, drawing on international experts from government, industry, academia and standard-setting bodies.
- Technology & Policy Expert Committee (TPEC): to provide expertise to the AI Governance Group (AIGG) and enable it to perform its functions effectively.
- AI Safety Institute: the main body responsible for guidance, research, risk assessment, and capacity building, and for ensuring the safe and trusted development and use of AI in India.
On the Need to Revise the Existing Laws for AI Governance
While affirming that many existing Indian laws can cover the risks that could emerge from AI, such as malicious impersonation of individuals or personal data breaches, the Guidelines acknowledge the urgent need to conduct a comprehensive review of the relevant laws to identify regulatory gaps in the AI ecosystem.
Citing one such instance, the report noted the need to review India’s Pre-Conception and Pre-Natal Diagnostic Techniques (PC-PNDT) Act, which aims to prevent sex-selective abortion in India. “The Act should be reviewed from the perspective of AI models being used to analyse radiology images, which could be misused to determine the sex of a foetus and enable unlawful sex selection,” reads the Guidelines.
The Guidelines state that inter-ministerial consultations are underway to examine these regulatory issues.
Referring to the two-decade-old IT Act, the guidelines say the government must clearly define the roles of actors in the AI value chain, such as developers, deployers, and users, and specify how current definitions, including “intermediary,” “publisher,” and “computer system,” will govern them.
On Liability and Ambiguous Terms in AI Governance
The committee acknowledged that AI systems are inherently probabilistic and can produce unexpected outcomes that cause harm despite reasonable precautions. It therefore cited the RBI’s FREE-AI Committee, which recommended a “tolerant” approach in the financial sector toward first-time or one-off errors. The committee said sectoral regulators should choose enforcement strategies suited to their domains, but emphasised that the rule of law must remain paramount and that enforcement should focus on preventing harm while allowing responsible innovation.
The Guidelines question the potential immunity that could be granted to AI systems under a vague and broad interpretation of “intermediaries” in Section 79 of the IT Act, which provides intermediaries safe harbour protection against liability for harmful content hosted on their platforms and applications.
It noted that the law’s current broad definition, which covers entities that receive, host, or transmit content on behalf of the sender or user, already sweeps in telecom companies, search engines, and even cyber cafes as “intermediaries.”
Extending this argument, it stated that such legal immunity would not apply to so-called “intermediary” AI systems, and suggested that the existing legal framework be re-examined from the perspective of liability for AI developers and deployers who fail to comply with the required due diligence.
“Therefore, the Committee is of the view that the IT Act should be suitably amended to ensure that India’s legal framework is clear on how AI systems are classified, what their obligations are, and how liability may be imposed,” reads the policy recommendation guidelines.
On Data Protection
The much-awaited finalised DPDP Rules, which operationalise the DPDP Act, 2023, exclude publicly available data from the law’s purview, thereby enabling the large-scale scraping of publicly available information, including personal data that the DPDP Act otherwise aims to protect. Many AI companies stand to benefit from this carve-out.
The Guidelines flag the following key questions that need to be addressed, through a new legal framework or amendments if required:
- Are the DPDP Act’s principles of purpose-limited consent compatible with how modern AI systems work?
- What is the scope of the exemptions available to AI companies for training AI models?
- What is the scope of legitimate use of personal data in the name of AI development?
- What would be the role of consent managers in delivering contextual consent notices, especially in dynamic, multi-modal AI workflows?
On Content Moderation and Deepfakes
To prevent the generation and distribution of harmful content, such as child sexual abuse material (CSAM) and non-consensual intimate images (‘revenge porn’), the Guidelines recommend that the AI Governance Group look into existing regulatory frameworks as well as techniques such as content authentication and provenance for AI-generated content.
On traceability, the Guidelines suggest that the AIGG examine current watermarking and labelling tools that can identify the origins of AI-generated content, potentially tracing it back to the underlying databases or large language models (LLMs) that produced it. “Such attribution tools have potential utility for both content authentication and provenance,” affirms the Guidelines. The Guidelines also cautioned the committee to examine mechanisms that bypass content authentication and provenance, as these could put citizens’ privacy at risk.
For more context, the recent draft amendments to the IT Rules mandate such labelling, potentially via existing industry-standard labelling protocols such as the Coalition for Content Provenance and Authenticity (C2PA). Referring to these amendments, S. Krishnan, Secretary at MeitY, welcomed them and stated that users have the right to know whether the content being produced and distributed is a deepfake or not.
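For illustration, here is a minimal sketch of what checking a provenance label on a piece of content might look like. The manifest structure and field names below are hypothetical simplifications for this article, not the actual C2PA format, which defines a richer, cryptographically signed structure:

```python
import json

# Hypothetical, simplified provenance manifest; real C2PA manifests
# are binary, signed structures embedded in the media file itself.
SAMPLE_MANIFEST = json.dumps({
    "generator": "example-image-model",   # tool that produced the asset
    "ai_generated": True,                 # the label draft rules would require
    "issuer": "example-provenance-authority",
    "signature": "base64-signature-bytes",
})

def is_labelled_ai_content(manifest_json: str) -> bool:
    """Return True if the asset carries an AI-generated label.

    A real verifier would also validate the cryptographic signature
    against the issuer's certificate before trusting the label.
    """
    try:
        manifest = json.loads(manifest_json)
    except json.JSONDecodeError:
        return False  # no readable manifest: label absent or stripped
    return bool(manifest.get("ai_generated"))

print(is_labelled_ai_content(SAMPLE_MANIFEST))  # True
```

The Guidelines’ caution about bypass mechanisms maps to the obvious weakness here: metadata can simply be stripped from a file, which is why signature validation and watermark-based fallbacks matter in practice.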
The guidelines recommend that the proposed AI Governance Group, supported by the Technology and Policy Expert Committee, review India’s regulatory framework and the “issues of content authentication in detail and issue appropriate guidelines” to advise agencies such as MeitY on suitable techno-legal solutions and additional legal measures to address AI-generated deepfakes without compromising user privacy. They also recommend forming a committee of international experts from government, industry, academia, and standard-setting bodies to develop global standards for content authentication and certify information as genuine.
On Copyright Issues and Global Cooperation
The Guidelines avoid addressing copyright issues, as the committee is still awaiting recommendations from the Department for Promotion of Industry and Internal Trade (DPIIT), which set up a committee in April 2025 to study the use of copyrighted material for AI training. At its core, that committee is examining whether AI training would qualify under the fair dealing exceptions of the Indian Copyright Act.
Comparing the US’ AI Action Plan and China’s Global AI Governance Action Plan, the report asserts that India’s balanced approach benefits countries in the Global South. “India should continue its participation in multilateral AI governance forums, such as the G20, UN, OECD, and deliver tangible outcomes as host of the ‘AI Impact Summit’ in February 2026,” recommends the Guidelines, while emphasising the importance of research, policy planning, and simulation exercises that can address future issues.
On Risk Mitigation and Accountability in AI Governance
The guidelines identify several key risks that could affect end users. According to the committee, these include:
- Malicious uses: Misinformation through harmful AI-generated content such as deepfakes, along with trojan attacks, data poisoning, and adversarial inputs targeting critical infrastructure.
- Bias and discrimination: Inaccurate or incomplete data used in decision-making, especially in employment, which can result in loss of opportunity or livelihood.
- Transparency failures: Lack of adequate disclosure, including the use of personal data to develop AI systems without user consent.
- Systemic risks: Disruptions in the AI value chain due to market concentration, geopolitical instability, or regulatory changes.
- Loss of control: Diminished human oversight over AI systems, potentially disrupting public order and safety.
- National security risks: AI-driven disinformation campaigns, cyberattacks on critical infrastructure, and autonomous weapons that threaten public safety, national sovereignty, and border security.
To address such risks produced or amplified with the help of AI systems, the guidelines propose a national-level, centralised public incident-reporting database that records instances where AI systems have malfunctioned and caused harm, including harms related to “health, disruption of critical infrastructure, human rights violations, or damage to property, communities, or the environment.” The Indian Computer Emergency Response Team (CERT-In) already operates one such incident reporting mechanism.
“Over time, a structured feedback loop should be created: reports feed into threat analysis, which helps policymakers identify emerging risks, understand patterns of harm, and strengthen oversight. This process will also build a culture of accountability,” assures the report.
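To make the idea concrete, here is a minimal sketch of what a structured record in such a database might contain. The schema and field names are assumptions for illustration, not taken from the Guidelines; only the harm categories mirror those the report lists:

```python
from dataclasses import dataclass
from datetime import datetime

# Harm categories mirroring those named in the report's quoted passage.
HARM_CATEGORIES = {
    "health",
    "critical_infrastructure_disruption",
    "human_rights_violation",
    "property_or_community_damage",
    "environmental_damage",
}

@dataclass
class AIIncidentReport:
    """Hypothetical incident record; field names are illustrative."""
    system_name: str                   # AI system involved
    deployer: str                      # entity operating the system
    harm_category: str                 # one of HARM_CATEGORIES
    description: str                   # what went wrong, in plain terms
    occurred_at: datetime
    reported_voluntarily: bool = True  # the proposed mechanism is voluntary
    affected_sector: str = "unspecified"

    def __post_init__(self) -> None:
        if self.harm_category not in HARM_CATEGORIES:
            raise ValueError(f"unknown harm category: {self.harm_category}")

# Example: a voluntary report that would feed the threat-analysis loop
report = AIIncidentReport(
    system_name="loan-underwriting-model",
    deployer="example-nbfc",
    harm_category="human_rights_violation",
    description="Systematic denial of credit to a protected group",
    occurred_at=datetime(2025, 8, 1),
    affected_sector="finance",
)
```

Structured records of this kind are what would make the “feedback loop” the report describes possible: analysts can only find patterns of harm if reports share a common shape.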
To encourage large-scale adoption of voluntary measures, the committee recommends a set of financial, reputational, technical, and regulatory incentives, including:
- Access to regulatory sandboxes for firms implementing voluntary safeguards.
- Public recognition through government certifications, ratings, or endorsements.
- Technical assistance, toolkits, and playbooks to simplify voluntary compliance.
- Venture capital support for firms deploying responsible approaches to innovation.
On Algorithmic Auditing and Transparency
Referring to NITI Aayog’s indigenously developed Data Empowerment and Protection Architecture (DEPA), the Guidelines suggest that its “techno-legal system for permission-based data sharing through consent tokens” can be expanded for effective AI governance, covering the following aspects (a conceptual sketch of such a token follows the list):
- Algorithmic auditing to detect biases and unfairness in AI systems
- Transparency framework to enable explainability and accountability
- Sector-specific regulations for high-risk use cases.
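As flagged above, here is a minimal sketch of how a DEPA-style consent token might gate data sharing. The token fields and the HMAC-based signing are assumptions for illustration; DEPA’s actual architecture involves consent managers and richer, standardised schemas:

```python
import hashlib
import hmac
import json
import time

# Hypothetical consent token: a signed statement that a user permits a
# specific data use, for a specific purpose, until a specific time.
SECRET = b"consent-manager-signing-key"  # held by the consent manager

def issue_consent_token(user_id: str, purpose: str, ttl_seconds: int) -> str:
    payload = json.dumps({
        "user": user_id,
        "purpose": purpose,                      # e.g. "credit-scoring"
        "expires_at": time.time() + ttl_seconds,
    })
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig

def is_sharing_permitted(token: str, purpose: str) -> bool:
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False                             # tampered or forged token
    claims = json.loads(payload)
    # Purpose limitation and expiry are the two checks that DEPA-style
    # permission-based sharing hinges on.
    return claims["purpose"] == purpose and claims["expires_at"] > time.time()

token = issue_consent_token("user-42", "credit-scoring", ttl_seconds=3600)
print(is_sharing_permitted(token, "credit-scoring"))  # True
print(is_sharing_permitted(token, "ad-targeting"))    # False: wrong purpose
```

The design point is that permission travels with the data request: a deployer cannot reuse a token issued for one purpose to justify another, which is exactly the purpose-limitation question the Guidelines raise about AI workflows.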
On Financial Incentives and Funding
The Guidelines recommend that the government provide “targeted incentives and financing support, including tax rebates on certified solutions, AI-linked loans,” to support AI development and adoption in the MSME sector. The policy also suggests offering subsidised access to GPUs. “This will help lower the cost of adoption, if supported by sector-specific AI toolkits and pre-built starter packs tailored to industries like textiles, retail, logistics, and food processing,” reads the Guidelines.
The policy recommends that the Small Industries Development Bank of India (SIDBI) and the Micro Units Development & Refinance Agency Ltd (MUDRA) disburse these loans.
For more context on how the Indian government is financing AI development, consider the budget allocation of the IndiaAI Mission, which stands at Rs. 10,372 crore. Under this Mission, however, the Safety and Trust category is severely underfunded, accounting for only 0.2% of the total budget.
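For scale, a quick back-of-the-envelope computation of what that share implies, using only the figures cited above:

```python
# Figures cited in the article: total IndiaAI Mission outlay and the
# Safety and Trust category's share of it.
total_outlay_crore = 10_372   # Rs crore
safety_share = 0.002          # 0.2%

safety_outlay_crore = total_outlay_crore * safety_share
print(f"Safety and Trust allocation ≈ Rs {safety_outlay_crore:.0f} crore")
# Safety and Trust allocation ≈ Rs 21 crore
```

In other words, roughly Rs 21 crore out of Rs 10,372 crore goes to the pillar tasked with safety and trust.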
Current Status of IndiaAI Mission
While mentioning the infrastructure initiatives undertaken by the IndiaAI Mission, the AI Governance Guidelines describe its status as of August 2025:
- GPUs: “Over 38,231 GPUs are being made available to startups, researchers and developers at subsidised rates,” reads the report, without specifying those rates.
- AIKosh reportedly provides “permission-based access” to over 1,500 datasets and 217 AI models from 34 entities across 20 sectors.
- India’s Foundational Models: The report, without naming the companies, states that four startups are being supported under the IndiaAI Mission. According to media reports, however, Sarvam, Soket Labs, Gnani AI, and GanAI are among the first cohort, and the government plans to fund eight more. Under this policy, the startups receive 25% of their compute costs, along with a mix of grants (40%) and equity (60%).
- IndiaAI Application Development Initiative (IADI): The report, without providing specific figures, states that several applications, out of a set of 30 from various sectors, are currently in the prototype stage.
A Few AI-Related Policy Documents:
- NITI Aayog (2018): National Strategy for AI [Read]
- NITI Aayog (Feb, 2021): Responsible AI for All [Read]
- ICMR (May, 2023): Artificial Intelligence Guidelines [Read]
- NASSCOM (Jun, 2023): Responsible AI Guidelines for Generative AI [Read]
- NASSCOM (Nov, 2024): The Developer’s Playbook for Responsible AI in India [Read]
- MeitY (Dec, 2024): A Competency Framework for AI Integration in India [Read]
- RBI (2025): Framework for Responsible and Ethical Enablement of Artificial Intelligence (FREE-AI) [Read]
- CERT-In (2025): AI Bill of Materials (AIBOM) [Read]
NOTE: The article was updated on 07-11-2025 at 03:31 pm to fix a minor typo, and at 05:29 pm to clarify a minor error under “On Content Moderation and Deepfakes.”