Indian Legal Frameworks for AI

Artificial Intelligence (AI) is no longer a futuristic concept; it has become an essential part of daily life and governance. From facial recognition at airports to predictive analytics in finance, AI is reshaping industries in India. With this technological growth comes an urgent need for a robust legal framework to ensure responsible, ethical, and safe use. India, like many other nations, does not yet have a comprehensive AI law, but several existing statutes, policies, and institutional measures together form the country's emerging legal framework for AI.
This article explores these frameworks in detail, covering national strategies, data protection laws, sectoral regulations, institutional mechanisms, and the gaps that still need to be filled.
Evolution of AI Governance in India
Early Steps: National Strategy on AI (2018)
India’s AI journey formally began in 2018 when NITI Aayog released the National Strategy for Artificial Intelligence. It emphasised five priority sectors:
- Healthcare
- Agriculture
- Education
- Smart Cities and Infrastructure
- Smart Mobility
The strategy positioned India as the “AI garage of the world” and highlighted the need for ethical guidelines, innovation hubs, and public-private collaboration.
Responsible AI (2021)
Building on this, NITI Aayog published two critical documents in 2021:
- Principles for Responsible AI – which identified ethical pillars like transparency, accountability, inclusivity, and privacy.
- Operationalising Principles for Responsible AI – which focused on implementation mechanisms such as ethics-by-design, capacity building, and policy interventions.
These laid the foundation for a responsible AI ecosystem, though they were advisory in nature.
Constitutional and Fundamental Rights Dimension
While India does not yet have a dedicated AI statute, constitutional principles indirectly regulate AI deployment.
- Right to Privacy: The Supreme Court in Justice K.S. Puttaswamy v. Union of India (2017) recognised privacy as a fundamental right under Article 21. Any AI system handling personal data must comply with privacy safeguards.
- Equality and Non-Discrimination: Article 14, Article 15, and Article 16 prohibit discrimination. If AI algorithms display bias in recruitment, lending, or service delivery, such outcomes may be challenged under constitutional equality provisions.
- Freedom of Speech (Article 19): AI-driven moderation of online speech must balance free expression against reasonable restrictions such as public order and decency.
Thus, constitutional rights act as a backdrop for AI regulation, ensuring technology does not override human dignity.
Information Technology Act, 2000 and IT Rules
IT Act as the Backbone
The Information Technology Act, 2000 (IT Act) is India’s primary cyber law. While drafted before AI became mainstream, it regulates issues closely connected to AI, such as:
- Data protection (Sections 43A, 72A)
- Cyber offences like identity theft, hacking, and cheating using computer resources
- Liability of intermediaries (platforms using AI tools for moderation or content management)
Intermediary Guidelines and Digital Media Ethics Code, 2021
The IT Rules, 2021 introduced compliance obligations on social media intermediaries. AI-powered platforms must ensure:
- Due diligence in removing unlawful content
- Grievance redressal mechanisms
- Traceability of information in certain cases
While not AI-specific, these provisions directly impact platforms deploying AI for content filtering, recommendations, and automated decision-making.
Digital Personal Data Protection Act, 2023 (DPDP Act)
The DPDP Act, 2023 is India’s first standalone data protection law and plays a crucial role in regulating AI.
Key Features Relevant to AI:
- Consent-Based Processing: AI systems must process personal data only with clear, informed consent.
- Purpose Limitation: Data collected for one purpose cannot be reused for another without fresh consent, limiting the scope for misuse of personal data by AI systems.
- Data Fiduciary Obligations: Organisations deploying AI must ensure transparency, security safeguards, and rights of individuals.
- Data Protection Board: A regulatory body to handle complaints, breaches, and enforcement.
AI systems thrive on data. The DPDP Act therefore provides the governing legal framework for data-driven AI applications, particularly in healthcare, fintech, and edtech.
Sector-Specific Regulations
Financial Sector – SEBI and RBI
- Securities and Exchange Board of India (SEBI) has issued consultation papers on regulating algorithmic trading and AI use in securities markets. Concerns include transparency, investor protection, and systemic risk.
- Reserve Bank of India (RBI) has set up committees exploring AI governance in the banking sector. In 2025, an RBI committee put forward the FREE-AI framework (Framework for Responsible and Ethical Enablement of AI) for AI in finance, covering governance, auditing, and indigenous AI model development.
Healthcare Sector
AI-based diagnostics, robotic surgeries, and patient data management fall under:
- Drugs and Cosmetics Act, 1940
- Clinical Establishments Act, 2010
- DPDP Act, 2023 for patient data
While no explicit AI law exists, medical AI must comply with existing clinical, data, and ethical obligations.
Law Enforcement and Public Safety
Facial recognition, predictive policing, and surveillance systems are increasingly used by police forces. However, they must respect:
- Fundamental rights to privacy and equality
- Supreme Court guidelines on proportionality and necessity in surveillance
Without proper checks, such AI use risks mass surveillance and bias.
Institutional and Standard-Setting Mechanisms
IndiaAI Mission
The Government of India launched the IndiaAI Mission, a flagship programme built on multiple pillars, including Innovation, Skilling, Research, and Safe & Trusted AI.
IndiaAI Safety Institute (2025)
Under the “Safe and Trusted AI” pillar, the IndiaAI Safety Institute was established. Its goals include:
- Developing risk classification frameworks for AI
- Creating testing and certification standards
- Collaborating with international organisations like UNESCO
Bureau of Indian Standards (BIS)
BIS is working on drafting standards for AI safety, transparency, and ethics. These will be crucial for harmonisation with global standards.
Judicial Developments and Case Law
Indian courts have not yet directly ruled on AI liability or accountability. However, related jurisprudence provides insights:
- Shreya Singhal v. Union of India (2015): Struck down Section 66A of the IT Act, emphasising free speech in digital spaces. Its reasoning extends to AI-based content moderation.
- Justice K.S. Puttaswamy v. Union of India (2017): Established privacy as a fundamental right, directly relevant to AI-driven surveillance and data use.
Ongoing cases on facial recognition in policing and algorithmic decision-making may soon clarify constitutional limits on AI use.
Challenges and Gaps in India’s AI Legal Framework
Despite progress, India faces several challenges:
- No Standalone AI Law – The current framework is scattered across the IT Act, the DPDP Act, and sectoral guidelines.
- Algorithmic Accountability – No clear provisions on liability when AI systems malfunction or cause harm.
- Bias and Discrimination – Lack of statutory safeguards against algorithmic bias in employment, finance, or justice.
- AI in Criminal Justice – Use of predictive policing and surveillance without clear legal oversight raises civil liberties concerns.
- Intellectual Property Rights – Ambiguity around ownership of AI-generated works under Copyright Act, 1957.
- Cross-Border Data Flow – AI often requires global data processing, but India’s localisation requirements may conflict with innovation.
- Enforcement Capacity – Regulators like the Data Protection Board and BIS may lack the technical expertise initially to handle complex AI issues.
Conclusion
India stands at a crucial juncture in AI governance. While the country does not yet have a standalone AI law, the existing legal and policy frameworks collectively provide a foundation for the responsible use of AI. The combination of constitutional rights, the IT Act, the DPDP Act, sectoral guidelines, and institutional standards creates a multi-layered regulatory environment.
However, as AI becomes more pervasive in courts, classrooms, hospitals, and workplaces, India will need a dedicated legal framework addressing accountability, transparency, bias, and liability. The upcoming Digital India Act, coupled with standards from the IndiaAI Safety Institute, is expected to mark the next big leap.