The government’s decision to work through sectoral regulators rather than create a single, comprehensive AI regulator reflects a sound understanding of the domain’s challenges. AI applications in finance face different risks from those in healthcare or autonomous vehicles.
By allowing specialized regulators such as the Reserve Bank of India to manage AI risks within their own domains while MeitY provides the overarching direction, the framework achieves both coherence and sector-level specificity.
The experience of India’s digital public infrastructure offers valuable lessons for governing artificial intelligence. Platforms like UPI demonstrate how state-backed infrastructure can establish core protocols while leaving room for innovation at the application layer.
The strict-lax-strict model, which sets clear standards and accountability at the core while allowing flexibility in application development, offers a practical template. Learning from both the successes and shortcomings of earlier DPI initiatives, including gaps in access and privacy protection, will strengthen the implementation of AI governance.
Some implementation issues are worth noting. Operationalizing principles such as ‘Fairness and Equity’ and ‘Clarity of Design’ will require sector-specific standards and assessment methodologies. The framework would also benefit from clearer guidance on transparency requirements, including provisions for algorithm documentation, impact assessments, and disclosures calibrated to different risk levels and use cases.
The framework’s flexibility is both a strength and a potential limitation. Principles-based approaches allow contextual adaptation but require robust implementation mechanisms to ensure meaningful compliance. Developing clear procedures for interpreting the principles, establishing liability for violations, and creating accessible redress mechanisms will be critical to turning aspirational standards into effective safeguards.
As India prepares to host the AI Impact Summit in February 2026, it is presenting an alternative governance model to the global community. The framework’s success will be measured not only by economic outcomes but also by its ability to facilitate inclusive AI development, protect against algorithmic harms, and maintain public trust. Early implementation efforts should focus on operationalizing the core principles, building regulatory capacity, and establishing transparent accountability mechanisms.
The global AI governance discourse has been dominated by frameworks from advanced economies, which often reflect their specific capabilities and priorities. India’s approach, designed for an economy pursuing advanced innovation while still facing development challenges, can offer insights to many countries in similar circumstances. The framework deserves both fair evaluation and constructive criticism as it is implemented.


