November 27, 2020

REGULATION OF ARTIFICIAL INTELLIGENCE IN INDIA

Artificial Intelligence (AI) refers to the ability of machines to perform high-level cognitive tasks such as thinking, perceiving, learning, problem solving and decision making. Initially conceived as a technology that could mimic human intelligence, AI has evolved in ways that far exceed its original conception. With remarkable advances in data collection and aggregation, analytics and computing power, intelligent systems can now be deployed to take over a variety of tasks, enable connectivity and enhance productivity. As AI's capabilities have dramatically expanded, so has its utility in a growing number of fields: it presents opportunities to complement and supplement human intelligence and to enrich the way people live and work. AI is poised to disrupt our world.

In India, the Kerala police inducted a robot for police work, and around the same time Chennai got its second robot-themed restaurant, where robots not only serve as waiters but also interact with customers in English and Tamil. In Ahmedabad, in December 2018, a cardiologist performed the world's first in-human telerobotic coronary intervention on a patient nearly 32 km away. These examples symbolise the arrival of AI in our everyday lives, and they show its many positive applications. But the capability of AI systems to learn from experience and to act autonomously makes AI the most disruptive and self-transformative technology of the 21st century.

If AI is not regulated properly, it is bound to have unmanageable implications. Imagine, for instance, that the electricity supply suddenly fails while a robot is performing surgery and access to the doctor is lost. Or that an AI-based driverless car gets into an accident that harms humans or damages property. Such questions have already confronted courts in the U.S. and Germany. All countries, including India, need to be legally prepared to face this kind of disruptive technology.

Predicting and analysing legal issues and their solutions, however, is not simple. Criminal law, for instance, will face drastic challenges. If a driverless car injures a person or damages property, can the AI be said to have knowingly or negligently caused the harm? Can robots act as witnesses, or serve as tools for committing crimes?

At present, there is no definition of artificial intelligence in Indian law (statutes, rules, regulations or judgments). It is fair to say that, so far, the Indian legal system has been blissfully unaware of artificial intelligence and the legal issues it may bring with it.

First, we need a legal definition of AI. Given the importance of intention in India's criminal law jurisprudence, it is also essential to determine whether AI can have legal personality (meaning it would carry a bundle of rights and obligations) and whether any sort of intention can be attributed to it. On the question of liability, since AI is considered inanimate, a strict liability scheme that holds the producer or manufacturer of the product liable for harm, regardless of fault, might be an approach worth considering. And since privacy is a fundamental right, rules regulating the use of data held by AI systems should be framed as part of the Personal Data Protection Bill, 2018.

Traffic accidents cause about 400 deaths a day in India, 90% of which stem from preventable human error. AI-based autonomous vehicles can reduce this toll significantly through smart warnings and preventive and defensive driving techniques. Similarly, patients sometimes die because specialised doctors are unavailable; AI can shrink the distance between patients and doctors.

Mobility and transportation form the backbone of the modern economy because of their linkages with other sectors and their importance in both domestic and international trade. Today's society demands a high degree of mobility of various kinds, enabling the efficient and safe transportation of both people and goods.

Listed below are some of the major applications of AI in mobility:

a) Autonomous trucking: Autonomous technology has the potential to transform the way we move goods. AI can improve safety and hauling efficiency through intelligent platooning, in which trucks form platoons that keep moving while drivers rest. Platooning also ensures optimal road-space utilisation, improving the effective capacity of road infrastructure.

b) Intelligent Transportation Systems: An intelligent traffic management system combining sensors, CCTV cameras, automatic number plate recognition cameras, speed detection cameras, signalised pedestrian crossings and stop-line violation detection with AI can make real-time, dynamic decisions on traffic flows: lane monitoring, access to exits, toll pricing, allocating right of way to public transport vehicles, enforcing traffic regulations through smart ticketing, and so on. Accident heat maps can be generated from accident data and driver behaviour at specific locations on the road network, related to topology, road geometric design, speed limits and the like, so that pre-emptive measures can be taken at likely accident sites. AI can also power sophisticated urban traffic control systems that optimise signal timings at the intersection, zonal and network levels, while supporting services such as automatic vehicle detection for extending a red/green phase or granting intermittent priority.

c) Travel route/flow optimisation: With access to network-level traffic data, AI can make smart predictions for public transport journeys by optimising total journey time, including access time, waiting time and travel time. By considering factors such as the nearest available mode of travel, the most convenient access path under local conditions and the traveller's preferences, AI can revolutionise first- and last-mile travel and change the way we perceive public transport journeys. For private car use, AI can draw on a range of traffic data sets and the driver's own preferences to make human-like decisions on route selection (a toy route-selection sketch follows this list). With information on dynamic tolls and link-level traffic flows delivered to vehicles, dependency on overhead Variable Messaging Systems (VMS) can be minimised, saving substantial infrastructure costs. At the systemic level, AI can predict network-wide traffic flows and suggest alternative flow strategies to contain congestion, relieving cities of a major problem.

d) AI for Railways: According to official figures, more than 500 train accidents occurred between 2012 and 2017, 53% of them due to derailment. Train operators can obtain situational intelligence from real-time operational data analysed along three dimensions: spatial, temporal and nodal. Fleet management and asset maintenance, including that of rolling stock, are pertinent AI use cases. The Ministry of Railways, Government of India, has recently decided to use AI for remote condition monitoring, employing non-intrusive sensors to monitor signals, track circuits, axle counters and their interlocking sub-systems, as well as power supply systems, including voltage and current levels, relays and timers.

e) Community Based Parking: The availability of parking is a major issue for Indian cities. AI can help optimise parking, for example by minimising vehicle downtime and maximising driving time, and parking guidance systems can direct drivers to vacant spaces as they approach their destination on the road network. With the advent of electric vehicles, AI will also be needed to mediate complex vehicle-grid interactions (VGI) and to optimise charging.
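To make the route-selection idea in item (c) concrete, here is a minimal sketch in Python. It uses classical shortest-path search over a road network whose link costs are current travel times, the kind of routine an AI route planner builds upon; all place names and travel times below are hypothetical.

```python
# Toy route selection over live link travel times. The graph and its
# numbers are invented; a real system would feed them from sensor data.
import heapq

def fastest_route(graph, origin, destination):
    """Dijkstra's algorithm over current link travel times (minutes)."""
    queue = [(0.0, origin, [origin])]   # (elapsed time, node, path so far)
    visited = set()
    while queue:
        elapsed, node, path = heapq.heappop(queue)
        if node == destination:
            return elapsed, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, minutes in graph.get(node, {}).items():
            if neighbour not in visited:
                heapq.heappush(queue, (elapsed + minutes, neighbour, path + [neighbour]))
    return None

# Hypothetical network; travel times already reflect current congestion.
network = {
    "Home":       {"Ring Road": 12, "Inner Road": 8},
    "Ring Road":  {"Office": 15},
    "Inner Road": {"Market": 10, "Office": 25},
    "Market":     {"Office": 5},
}
print(fastest_route(network, "Home", "Office"))
# -> (23.0, ['Home', 'Inner Road', 'Market', 'Office'])
```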

Alongside these applications come concerns of ethics, privacy and security. Ethics concerns centre on the biases an AI system can propagate; privacy concerns centre on the collection and inappropriate use of data for personal discrimination; and security concerns arise from the consequences of an AI system's actions and the accountability for them.

At the beginning of 2018, the report The Malicious Use of Artificial Intelligence warned that AI can be exploited by hackers for malicious purposes, with the ability to target entire states and alter society as we know it. The authors highlight that, globally, we are at "a critical moment in the co-evolution of AI and cybersecurity, and should proactively prepare for the next wave of attacks". Part of the trouble is that AI is largely built on freely available open-source software. In addition, new insights, approaches and successful experiments are widely shared, as AI powerhouses such as Google and Facebook allow their top AI engineers to publish their work, which they must do to stay in the race to attract and keep the best AI minds.

While sharing and collaboration through public access to datasets, algorithms and new tools are crucial for the success of cyber defenders, there is no question that bad actors can also benefit from them. The cyber security company Symantec predicted that in 2018 cyber criminals would use Artificial Intelligence (AI) and Machine Learning (ML) to conduct attacks. No cyber security conversation today is complete without a discussion of AI and ML, but so far these conversations have focused on using the technologies for protection and detection. Symantec warned that this would change, with AI and ML being used by cyber criminals themselves, marking the first year of AI versus AI in a cyber security context. Cyber criminals will use AI to attack and explore victims' networks, typically the most labour-intensive part of a compromise after an incursion.

Guarding against the weaponization of AI: To protect against AI-launched attacks, there are three key steps that security teams should take to build a strong defence:

o Understand what is being protected. Teams should lay this out clearly, with appropriate solutions implemented for threat and vulnerability management, protection and detection, and with visibility into the whole environment. It is also important to be able to change course rapidly on defence, since the target is always moving.

o Have clearly defined processes in place. An organization may have the best technology in the world, yet it is only as effective as the process it operates within. Both the security team and the wider organization must understand the procedures, and it is the security team's responsibility to educate employees on cybersecurity best practice.

o Know exactly what is normal for the environment. Context around attacks is crucial, and this is often where companies fail. A clear understanding of assets and how they communicate allows organizations to isolate events that are not normal and investigate them. AI and machine learning are extremely effective tools for providing this context (see the sketch below).
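As a minimal sketch of that third step, the snippet below fits an anomaly detector to a synthetic baseline of "normal" network telemetry and flags a port-scan-like burst as abnormal. It assumes NumPy and scikit-learn are available; the feature set and all numbers are invented for illustration.

```python
# Learning what "normal" looks like for an environment, then flagging
# departures from it. Features per host per hour: MB sent, connection
# count, distinct ports contacted -- all hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic baseline of ordinary hourly telemetry.
normal_traffic = rng.normal(loc=[50, 120, 6], scale=[10, 25, 2], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# A burst typical of automated network exploration: heavy port scanning.
suspect = np.array([[55, 900, 140]])
print(model.predict(suspect))             # -1 => anomalous, investigate
print(model.predict(normal_traffic[:3]))  # mostly 1 => fits the baseline
```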

Safety & AI

To be accepted for use by society, AI systems have to meet high safety standards. Many AI systems, such as autonomous vehicles and robots, exert physical forces as they interact with the environment, and must be designed so that they do not harm people or property during those interactions. There are two primary issues in this context: how much safety is required, and how to measure it. Traditionally, regulatory agencies prescribe safety standards for machines and how to measure compliance; such standards, or their equivalents, should be prescribed for autonomous machines as well.

Absolute safety is often not practical to achieve; even without AI systems, we cannot ensure 100% safety. Therefore, beyond choosing safety parameters, safety thresholds have to be set, and set separately for different domains and scenarios. An AI system can be allowed for public use once it exceeds the safety thresholds on the relevant parameters (a schematic example follows).
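A schematic sketch of such a threshold gate, assuming hypothetical parameter names and limits rather than real regulatory figures:

```python
# A system is cleared for public use only if it meets every
# domain-specific safety parameter. All names and numbers below are
# hypothetical placeholders, not real regulatory values.
thresholds = {
    # injuries per million km must stay BELOW a prescribed ceiling;
    # emergency-stop success rate must stay ABOVE a prescribed floor.
    "injuries_per_million_km": ("max", 0.08),
    "emergency_stop_success_rate": ("min", 0.999),
}

def meets_safety_thresholds(measured, thresholds):
    failures = []
    for parameter, (kind, limit) in thresholds.items():
        value = measured[parameter]
        ok = value <= limit if kind == "max" else value >= limit
        if not ok:
            failures.append((parameter, value, limit))
    return failures  # empty list => all thresholds met

trial_results = {"injuries_per_million_km": 0.05,
                 "emergency_stop_success_rate": 0.9995}
print(meets_safety_thresholds(trial_results, thresholds))  # [] => may be approved
```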

While prescribing safety measurement parameters, the circumstances under which they are tested must also be prescribed. For instance, it is often said that autonomous vehicles should be adopted once they are safer than humans. The question is what comparison decides "safer": should a self-driving car be compared with a car driven by an unassisted human, or with a car driven by a human assisted by safety devices?

Comprehensive testing is a must before releasing any system for use by the public.

The government has to establish the necessary infrastructure for safety testing and certification. We also need to agree on related questions, such as who is authorized to test and which tests are to be used.

When AI systems are used to decide matters with serious implications for people, additional precautions are necessary. Examples include deciding the length of imprisonment for crimes. Designing systems for such decisions may require trade-offs between conflicting objectives: decision-making should be just, speedy and inexpensive all at once. Researchers have reported that making systems more transparent and less biased can decrease overall accuracy and efficiency. A less accurate system may assign more punishment for less severe crimes, or vice versa; a less efficient system may not deliver justice at an acceptable speed. A trade-off should either maximise all the values or select a combination acceptable to society (an illustrative scoring sketch follows).
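One illustrative way to make such a trade-off explicit is to score candidate system designs against weighted social values, with a minimum acceptable floor for each value. The candidate systems, scores, weights and floors below are all invented for illustration:

```python
# Score candidate decision systems on (accuracy, transparency, speed),
# each on a 0-1 scale. Reject any candidate that falls below a floor on
# some value; among the rest, prefer the highest weighted score.
candidates = {
    "opaque-fast":      (0.92, 0.40, 0.95),  # accurate but not transparent
    "transparent-slow": (0.85, 0.90, 0.60),
    "balanced":         (0.88, 0.75, 0.80),
}
weights = (0.5, 0.3, 0.2)    # how society weighs the three values
floors  = (0.80, 0.60, 0.50) # minimum acceptable level for each value

def acceptable(scores):
    return all(s >= f for s, f in zip(scores, floors))

viable = {name: sum(w * s for w, s in zip(weights, scores))
          for name, scores in candidates.items() if acceptable(scores)}
print(max(viable, key=viable.get))  # -> 'balanced'
```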

Humans need certain skills to perform a task, and in high-risk domains such as aviation and medicine they are trained and certified before being permitted to perform it. We need a similar certification system for machines. The problem is more complicated for AI systems that perform tasks humans have never done; an example is surgical robots that operate at a level of sophistication no human has matched.

In the context of safety, one frequently debated issue is whether AI poses an existential threat to humanity. This issue needs to be addressed properly, or it may distract policymakers from more immediate challenges. The existential threat has been raised by prominent thinkers including Elon Musk, Stephen Hawking and Nick Bostrom. In his book Superintelligence, Bostrom argues that an AI could become enormously superior to humans and might even harm its creators; he does not claim this is inevitable, only that it is a possibility. The concept of superintelligence has been criticised by many experts in the field. First, there is no clear path to developing it in the near future with the current state of technology: so far we have succeeded in building intelligent machines for specific tasks only, and have not built machines that can perform even at the level of a lower-intelligence animal. Second, even if such a machine were developed, there is no reason to believe it would be interested in dominating the world, as machines have no intent of their own. And if machines with higher intelligence are developed, ways to control them would likely be developed in parallel; it is difficult to imagine powerful intelligent machines arriving without corresponding means of control.

The more realistic possibility is that an individual or group with malicious intent designs a machine to harm humans: one can imagine actors who attack nuclear installations or destabilise trading markets. An existential threat, however, remains remote; only time will tell, and most people in the field believe it is not possible in the near future. This threat should therefore not lead to restrictions on the development of AI technology and its applications.

Safety Guidelines: All stakeholders, including industry, government agencies and civil society, should deliberate to evolve guidelines on safety features for applications in various domains. Best practices in implementing safety features should be shared, and the government should invest in interdisciplinary research on the impact of AI on society.

Safety Thresholds: In most cases, achieving absolute safety is not practical, so safety thresholds have to be decided for each domain. Where the thresholds involve trade-offs, they should fall within a range acceptable to society.

Human Control: In the event of any threat to human life or any other severe implication, humans should be in a position to interrupt or shut down the system at any point in time (a minimal sketch follows). Human checks are necessary before implementing new decision-making strategies in AI systems.
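A minimal sketch of this human-override requirement, assuming a simple threaded control loop in which the perceive-decide-act step is a stand-in for real behaviour:

```python
# The autonomous loop checks a human-settable stop signal on every
# cycle, so an operator can interrupt it at any point in time.
import threading
import time

stop_signal = threading.Event()   # the human operator's kill switch

def autonomous_loop():
    while not stop_signal.is_set():
        # ... perceive, decide, act ...
        time.sleep(0.1)
    print("System halted by human operator; reverting to a safe state.")

worker = threading.Thread(target=autonomous_loop)
worker.start()
time.sleep(0.5)      # the system runs autonomously...
stop_signal.set()    # ...until a human decides to intervene
worker.join()
```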

Safety Certification: A mechanism should be established to certify systems on safety before they are released to the general public. The work should begin with sectors such as healthcare and transport, where safety is paramount because human life is involved.

Existential Threat: Though it does not appear likely to become a serious issue in the near future, it requires deliberation on an ongoing basis. It should not, however, restrict the development and deployment of AI systems.

The accountability debate on AI, which today is mostly aimed at ascertaining liability, needs to shift towards objectively identifying the component that failed and how to prevent such failures in the future. An analogy can be drawn with how aviation became a relatively safe industry: every accident is elaborately investigated and a future course of action determined. Something similar is needed to ensure safe AI. One possible framework involves the following components:

a. A negligence test for damages caused by AI software, as opposed to strict liability. This involves self-regulation by stakeholders, who conduct a damage impact assessment at every stage of development of an AI model.

b. As an extension of the negligence test, safe harbours need to be formulated to insulate or limit liability, so long as appropriate steps have been taken to design, test, monitor and improve the AI product.

c. A framework for the apportionment of damages needs to be developed so that the parties involved bear proportionate liability, rather than joint and several liability, for harm caused by products in which the AI is embedded, especially where the use of the AI was unexpected, prohibited or inconsistent with permitted use cases (a toy apportionment example follows this list).

d. An actual-harm requirement may be adopted, so that a lawsuit cannot proceed based only on speculative damage or a fear of future damages.
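A toy illustration of the proportionate apportionment in point (c): each party pays damages in proportion to its assessed share of fault. The parties, fault shares and damages figure are invented.

```python
# Proportionate (not joint and several) liability: each party's payout
# equals its court-assessed share of fault times the total damages.
total_damages = 1_000_000  # rupees, hypothetical

fault_shares = {           # shares as assessed by a court or tribunal
    "AI developer": 0.50,
    "vehicle manufacturer": 0.30,
    "fleet operator": 0.20,
}

for party, share in fault_shares.items():
    print(f"{party}: {share * total_damages:,.0f}")
# AI developer: 500,000 / vehicle manufacturer: 300,000 / fleet operator: 200,000
```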

India can also take a leaf out of the UK's playbook, where GBP 9 million is being invested to establish a new Centre for Data Ethics and Innovation, aimed at enabling and ensuring the ethical, safe and innovative use of data, including AI. This will include engaging with industry to explore establishing data trusts to facilitate easy and secure sharing of data. A consortium of ethics councils at each Centre of Excellence may be set up to define standard practice (on the lines of the OpenAI charter), with all Centres of Excellence expected to adhere to these practices while developing AI technology and products.

Civil Liability: As AI systems take independent decisions using knowledge they have learned themselves, they are likely in the long term to be held responsible for civil liability. Stakeholders therefore need to deliberate on whether to recognise an AI system as a legal person. If legal personhood is conferred, it should be accompanied by an insurance scheme or compensation fund to pay for damages. Since such systems will have a global presence, these issues must also be discussed in international forums.

Holistic Approach: A committee of stakeholders should be constituted to examine all relevant issues in a holistic manner. To give the technology a fair opportunity, a decision on permitting AI systems should weigh both the increase and the decrease in risk that their adoption brings.

Review of Existing Laws: Existing laws should be reviewed for any modifications necessary to enable the adoption of AI applications in each domain. Preference should be given to modifying existing provisions rather than enacting entirely new ones, and excessive regulation should be avoided, as it may hinder the growth of the technology.

Prioritize Sectors: The review of laws should begin in the sectors where early deployment is expected, such as transportation, healthcare and finance. The experience gained there can then inform other domains as the need arises.

Periodical Review: This cannot be a one-time exercise. Laws should be reviewed periodically in light of technological developments and experience with their implementation.

Laws and regulations can be an important backstop to ensure fundamental lines are not crossed. However, because the risks are so specific to context and use, oversight is best provided by existing sector-specific bodies rather than by an overarching entity (such as an agency with broad regulatory authority over robotics or machine intelligence). Sectoral regulators are best placed to evaluate whether the existing body of rules is sufficient or needs revision to meet new technological realities. In light of the foregoing, India must introduce laws to regulate AI for the safe and bright future of the nation.

Author Details: BASIL KURIAN (Cooperative School of Law, Kerala)

The views expressed are personal.
