Artificial Intelligence is revolutionising the way we live, work, and interact with the world. From healthcare to education, agriculture to smart cities, AI is making its way into every industry. In India, this digital transformation is being propelled by the country’s large and growing high-tech workforce and increasing foreign investment. However, the rapid development and integration of AI also present challenges, particularly around ethics, bias, discrimination, and privacy, compounded by the absence of AI-specific regulation.

This article discusses the current legal framework governing AI in India, key government initiatives, and future perspectives on regulating AI, Generative AI, and Large Language Models (LLMs).

AI in India: An Overview

India has been steadily advancing its AI capabilities, with applications ranging from automating legal processes to driving healthcare innovations and facilitating smart city developments. Companies like OpenAI, Google, and Meta have made significant progress in generative AI, launching products like ChatGPT, Gemini, and LLaMA, which have sparked discussions on the need for regulatory oversight.

While India is quickly becoming a global player in AI, there are concerns regarding the legal and ethical challenges that AI technologies introduce. The government has recognised the need for regulatory measures and guidelines to ensure that AI development aligns with national interests, particularly in areas such as privacy, data protection, and security.

Current State of AI Regulation in India

Lack of Dedicated AI-Specific Laws

As of now, India does not have a dedicated regulatory framework for AI. However, the government has taken significant steps to ensure that AI development follows ethical guidelines and addresses key legal concerns. Existing laws such as the Information Technology Act, 2000, the Digital Personal Data Protection Act, 2023, and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 play a significant role in overseeing AI activities.

The government has also issued a series of advisories related to AI. In particular, the Ministry of Electronics and Information Technology (MeitY) has recently required platforms using under-tested or unreliable AI models to obtain explicit permission before deployment. These advisories, while not comprehensive laws, signal the government’s intent to regulate AI and manage the associated risks.

AI-Related Advisories

On March 1, 2024, MeitY issued an advisory aimed at regulating unreliable AI models, Generative AI, and LLMs. Platforms intending to introduce these technologies to the Indian public must ensure compliance with three key directives:

  1. Bias and Discrimination: AI models must not facilitate bias or discrimination, or threaten the integrity of the electoral process.
  2. Under-Tested AI Models: Any AI model deemed under-tested must seek explicit permission from MeitY before being deployed. Users must be cautioned about the potential inaccuracies of the AI’s output.
  3. Labelling of AI-Generated Content: AI-generated media, including text, audio, and video, must be labelled with unique identifiers or metadata. This allows users to trace the origin of the content, especially in the case of deepfakes or misinformation (see the illustrative sketch after this list).
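
The advisory does not prescribe how such labelling should be implemented. Purely as an illustrative, minimal sketch (not an official specification), AI-generated text could be accompanied by a metadata record carrying a unique identifier, the generating model’s name, a timestamp, and a content hash so that origin can be traced and tampering detected; the function and field names below are assumptions.

```python
import hashlib
import json
import uuid
from datetime import datetime, timezone

def label_generated_content(text: str, model_name: str) -> dict:
    """Attach a hypothetical provenance record to a piece of AI-generated text."""
    metadata = {
        "content_id": str(uuid.uuid4()),                  # unique identifier
        "generator": model_name,                          # generating model or platform
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "ai_generated": True,                             # explicit AI-generated flag
    }
    return {"text": text, "metadata": metadata}

if __name__ == "__main__":
    record = label_generated_content("Sample model output.", "example-llm")
    print(json.dumps(record, indent=2))
```

In practice, platforms would more likely adopt emerging provenance standards such as C2PA content credentials or watermarking rather than a hand-rolled record, but the underlying idea of attaching verifiable origin metadata is the same.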

Although the advisory primarily targets large platforms, the Ministry has clarified that startups will not need to adhere to these stringent guidelines. This focus on significant platforms shows that the government is willing to regulate AI, but only where the potential for misuse is substantial.

Key Laws Governing AI in India

While AI-specific laws are yet to be enacted, existing laws, such as the Information Technology Act, 2000 and the Digital Personal Data Protection Act, 2023, provide important legal oversight for AI development and implementation.

Information Technology Act, 2000 (IT Act)

The Information Technology Act, 2000 is India’s primary legislation that governs electronic transactions, digital governance, and cybersecurity. Although it was enacted before AI technologies came to the forefront, several provisions in the IT Act apply to AI-related activities.

  • Section 43A: This section enables compensation in case of a breach of data privacy due to negligent handling of sensitive personal data. AI systems that process user data must ensure that they comply with this provision to avoid legal repercussions.
  • Section 66D: This section penalises individuals for cheating by impersonation using a computer resource. It is particularly relevant for AI-driven deepfakes and other AI-generated fraudulent content.
  • Section 67: This provision prohibits the publishing or transmitting of obscene material in electronic form. AI systems capable of generating inappropriate or harmful content could fall under this section.

Case Law: Justice K.S. Puttaswamy v. Union of India (2017)

In Justice K.S. Puttaswamy v. Union of India, the Supreme Court of India recognised the right to privacy as a fundamental right under the Indian Constitution. Although not directly related to AI, this judgment sets a precedent for protecting personal data, which is crucial for AI systems that often process sensitive information.

Digital Personal Data Protection Act, 2023

The Digital Personal Data Protection Act, 2023, signed into law on August 11, 2023, is a comprehensive framework for protecting personal data in India. The Act covers how data can be collected, stored, processed, and shared, making it highly relevant for AI systems that handle large volumes of personal data.

Key provisions of the Act include:

  • Data Protection Principles: These principles mandate that AI platforms obtain user consent before processing personal data, ensure transparency, and allow users to withdraw their consent (a minimal consent-check sketch follows this list).
  • Cross-Border Data Transfers: The Act permits personal data to be transferred outside India except to countries restricted by the central government, which affects AI systems that rely on cross-border data flows.
  • Data Breach Reporting: Entities deploying AI must notify the Data Protection Board of India and affected users of personal data breaches, further ensuring accountability.
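
To make the consent principle concrete, here is a minimal, hypothetical sketch (not an implementation of the Act) of how a platform might gate personal-data processing on active, withdrawable consent; the class and function names are assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """Tracks a user's consent for one stated processing purpose."""
    user_id: str
    purpose: str                        # e.g. "model_training"
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    def is_active(self) -> bool:
        return self.withdrawn_at is None

    def withdraw(self) -> None:
        self.withdrawn_at = datetime.now(timezone.utc)

def process_personal_data(consent: ConsentRecord, data: dict) -> dict:
    """Process personal data only while consent for the purpose remains active."""
    if not consent.is_active():
        raise PermissionError(
            f"Consent withdrawn by {consent.user_id}; "
            f"cannot process data for purpose '{consent.purpose}'."
        )
    # Placeholder for the actual processing step (e.g. adding to a training set).
    return {"user_id": consent.user_id, "purpose": consent.purpose, "processed": True}

if __name__ == "__main__":
    consent = ConsentRecord("user-42", "model_training", datetime.now(timezone.utc))
    print(process_personal_data(consent, {"name": "A. User"}))
    consent.withdraw()
    try:
        process_personal_data(consent, {"name": "A. User"})
    except PermissionError as err:
        print(err)
```

The design point is simply that consent is recorded per purpose and checked at the moment of processing, so a withdrawal takes effect immediately rather than at the next policy review.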

Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules 2021)

The IT Rules 2021 regulate intermediaries such as social media platforms, digital news media, and over-the-top (OTT) services. Under these rules, intermediaries must ensure that their platforms do not host, display, or transmit unlawful content, which makes the rules relevant for AI systems that generate content, such as deepfakes or automated media.

Rule 3(1)(b): This rule specifically mandates that intermediaries should not allow users to upload or share any information that is “grossly harmful, harassing, or defamatory.” AI platforms that fail to comply with these provisions may lose their intermediary “safe harbour” protections and face legal penalties.

Draft National Data Governance Framework Policy (NDGFP)

Released in May 2022, the Draft National Data Governance Framework Policy (NDGFP) aims to modernise India’s data governance structure. Its core objective is to create an ecosystem that fosters AI and data-driven research and startups. The policy proposes a comprehensive repository of datasets that can be used to train AI models.

This policy is crucial for AI research and development, as it enhances access to high-quality data for training AI algorithms. The quality and accuracy of data used in AI models can significantly impact their output, and this policy plays a pivotal role in ensuring better datasets for AI innovation.

National Strategy for Artificial Intelligence (2018)

India’s first National Strategy for Artificial Intelligence, released by NITI Aayog in 2018, emphasised inclusive AI development under the initiative #AIForAll. The strategy focused on five key sectors:

  • Healthcare
  • Agriculture
  • Education
  • Smart Cities
  • Transportation

The strategy proposed creating high-quality datasets, enhancing research capabilities, and constructing legislative frameworks for AI-related cybersecurity and data protection. The aim was to strike a balance between innovation and regulation, ensuring responsible AI development while promoting growth in these critical sectors.

Principles for Responsible AI (2021)

Building on the National AI Strategy, NITI Aayog released the Principles for Responsible AI in 2021. These principles guide AI development in India with a focus on ethical considerations.

The principles are grouped into system considerations, which cover matters such as transparency in decision-making, accountability, and inclusivity, and societal considerations, which focus on AI’s impact on jobs and the automation of industries. Together, they establish guidelines for AI governance, ensuring that AI systems adhere to ethical and transparent practices.

Rules on Deepfakes and Misinformation

India currently does not have specific legislation addressing deepfakes or misinformation generated by AI. However, the Information Technology Act, 2000 and the Indian Penal Code (now replaced by the Bharatiya Nyaya Sanhita, 2023) contain provisions that can be used to tackle crimes associated with deepfakes.

  • Section 66E of the IT Act: This section penalises violations of privacy through the capture or publication of private images without consent, which can apply to deepfakes, with penalties of imprisonment of up to three years, a fine of up to two lakh rupees, or both.
  • Section 509 of the IPC: This section addresses cases of insulting a woman’s modesty, which could be used to prosecute deepfakes that exploit women’s images or videos.

International Collaboration and Investments

India is an active member of the Global Partnership on Artificial Intelligence (GPAI), which aims to promote responsible AI development globally. In 2023, the GPAI summit held in New Delhi focused on responsible AI, data governance, and the future of work. These discussions reinforced India’s commitment to implementing ethical AI practices that align with global standards, such as the OECD AI Principles.

Government Investments in AI

India’s AI sector is expected to grow rapidly, thanks to significant government investments. In 2024, the government sanctioned INR 103 billion (approximately USD 1.25 billion) for AI projects over five years under the IndiaAI Mission. This investment will be used to develop computing infrastructure, support AI startups, and establish a National Data Management Office to improve data quality and availability for AI projects.

These investments are designed to position India as a global leader in AI while ensuring that AI technologies are developed responsibly, with appropriate regulatory oversight.

Challenges and Future Perspectives

Lack of Comprehensive AI-Specific Legislation

The most significant gap in India’s legal framework is the absence of AI-specific laws. While existing laws cover certain aspects of AI development, they do not comprehensively address issues like accountability, bias, intellectual property, or liability in AI-generated content. Given AI’s disruptive potential, the development of dedicated AI regulations is essential to ensure responsible innovation.

Bias and Discrimination in AI Systems

AI systems, especially LLMs, can perpetuate biases present in the data they are trained on. The lack of regulations addressing bias in AI algorithms can lead to discriminatory outcomes in sectors like recruitment, financial services, and healthcare. India’s focus on ethical AI principles through NITI Aayog’s guidelines is a step in the right direction, but more robust legal provisions are needed.
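
One common technical way to surface such bias, offered here only as an illustrative sketch and not as anything mandated by Indian law, is a disparate-impact check that compares selection rates across groups and flags ratios below the widely cited four-fifths threshold; the data and threshold below are assumptions.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs with selected as True/False."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the best rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {
        g: {"rate": r, "ratio": r / best, "flagged": r / best < threshold}
        for g, r in rates.items()
    }

if __name__ == "__main__":
    # Hypothetical screening outcomes from an AI recruitment tool.
    sample = ([("group_a", True)] * 60 + [("group_a", False)] * 40
              + [("group_b", True)] * 35 + [("group_b", False)] * 65)
    for group, result in disparate_impact(sample).items():
        print(group, result)
```

Audits of this kind only detect disparities; deciding whether a disparity is unlawful and how it must be remedied is precisely the gap that dedicated legislation would need to fill.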

Privacy Concerns

With AI systems processing vast amounts of personal data, privacy concerns are paramount. The Digital Personal Data Protection Act, 2023, addresses some privacy issues, but as AI continues to evolve, additional safeguards will be necessary to protect users’ personal information from misuse by AI platforms.

Deepfakes and Misinformation

AI’s ability to generate realistic deepfakes poses a significant challenge. While the IT Act and Bharatiya Nyaya Sanhita provide some remedies, there is a need for dedicated laws to tackle the malicious use of deepfakes. The Indian government must continue developing laws to prosecute AI-driven misinformation and protect individuals and institutions from its damaging effects.

AI Accountability and Liability

Determining accountability for harm caused by AI systems remains a challenge. AI’s autonomous nature complicates assigning responsibility in cases of errors, biases, or damages. Future AI regulations will need to address liability concerns, ensuring that AI developers and platforms are held accountable for the actions of their algorithms.

Conclusion

India is at the forefront of AI development, with significant investments and policy frameworks in place to drive innovation. However, the country still faces challenges in creating a robust legal framework for AI. Existing laws like the IT Act, Digital Personal Data Protection Act, and IT Rules provide a foundation for AI regulation, but there is a clear need for AI-specific legislation to address the complexities and ethical concerns of AI technologies.

As AI continues to transform industries and societies, India must strike a balance between promoting innovation and ensuring responsible, ethical AI practices. The future of AI regulation in India will likely include comprehensive laws that address bias, discrimination, accountability, and privacy concerns while fostering AI’s immense potential to drive economic growth and societal progress.

