Can thinking machines be subjected to law?
The technological world is changing rapidly. Robots and computers are increasingly taking over simple human activities. As long as mankind used computers as simple tools, there was no real difference between computers and screwdrivers, cars or telephones. When computers were first developed, we said that computers "think" for us. The problem began when computers developed from "thinking" machines (machines programmed to execute defined computing processes) into thinking machines (without quotation marks), or artificial intelligence (AI). AI is a machine's ability to mimic intelligent behaviour: the simulation of human behaviour and cognitive processes on a computer, and therefore the study of the nature of the entire space of intelligent minds. AI research began in the 1940s and early 1950s. Since then, AI entities have become an integral part of modern human life, operating with far greater sophistication than other everyday tools. Could they be dangerous?
In fact, they already are, as past incidents confirm. In 1950, Isaac Asimov set out three basic laws of robotics in his science fiction masterpiece I, Robot.
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must follow human instructions unless those instructions contradict the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The main question in this context is what kind of laws or ethics are correct, and who should decide. Society developed criminal law to deal with the same problems when they involve people. Criminal law embodies the most powerful legal social control in modern civilization. In most cases, people's fear of AI entities stems from the sense that AI entities are not subject to the law, especially criminal law.
Understanding AI and its Scope
In 1997, an IBM supercomputer named Deep Blue beat the then world chess champion, Garry Kasparov, in an intense match. This was a rematch after Deep Blue's defeat in 1996. In what can only be described as human nature, Kasparov was perhaps rattled in the final game, which Deep Blue won with an apparently strategic approach. He lost that day, but perhaps humanity did not. Artificial intelligence (AI) is not a new concept, especially for science fiction readers. Recently, however, there has been more science and less fiction. The world of technology is changing rapidly. Computers, and now robots, are replacing simple human activities. AI is a machine's ability to mimic intelligent behaviour.
It is a general term for information systems inspired by biological systems, encompassing different technologies including machine learning, deep learning, computer vision, natural language processing ("NLP"), machine thinking and strong artificial intelligence.
The term "artificial intelligence" was first used at the Dartmouth conference of 1956, where John McCarthy defined AI as the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI need not be limited to methods that are biologically observable. According to McCarthy, there was no "solid definition of intelligence that doesn't depend on relating it to human intelligence," because "we cannot yet characterize in general what kinds of computational procedures we want to call intelligent." A further definition was offered by Marvin Minsky in 1968: AI is the science of making machines do things that would require intelligence if done by men.
Status of AI Under Indian Law
The Indian Constitution is the basic legal framework assigning rights and obligations to individuals and citizens. Unfortunately, the courts have yet to rule on the legal status of artificially intelligent machines; such a determination would clarify the ongoing debate on the applicability of existing laws to them. However, the Indian Ministry of Commerce and Industry has recognized the relevance of AI to the country as a whole, highlighting the challenges and concerns raised by AI-based technologies and systems, as well as its intention to promote the growth and development of such systems. In August 2017, the Ministry constituted an 18-member task force, titled the "AI Task Force for India's Economic Transformation" and chaired by V. Kamakoti, Professor at IIT Madras, composed of experts, scientists, researchers and industry leaders, together with government agencies and ministries such as NITI Aayog, the Ministry of Electronics and Information Technology, the Ministry of Science and Technology, UIDAI and DRDO, to explore the possibilities of using AI for development across different areas. The task force recently released its report, which gives the Ministry detailed recommendations and next steps for formulating a comprehensive AI policy for India.
The key takeaways from the report are:
- The report identified ten specific domains relevant to India from the perspective of developing AI-based technologies, including:
- Technology for the differently abled
- National Security
- Public utility services
- Retail and customer relationships
- The report identified the following key challenges in deploying large AI systems in India:
- promoting the collection, archiving and availability of data, with reasonable security measures, possibly through markets/data exchanges;
- ensuring the security, protection, privacy and ethics of data through regulatory and technological frameworks;
- digitizing systems and processes with IoT systems while protecting deployments against cyberattacks; and
- deploying autonomous products while mitigating their effects on employment and safety.
- In the report, the task force made the following specific recommendations to the Department of Industrial Policy and Promotion ("DIPP"): establish and fund a "National Inter-Ministerial Mission for Artificial Intelligence" for a period of five years, with funding of around INR 1.2 billion, to act as the nodal agency coordinating all AI-related activities in India. The mission should operate in three broad areas, namely:
- Core Activities – bringing together relevant industry and academic stakeholders to establish a research archive of AI-related activities, and funding national studies and campaigns to identify AI-based projects undertaken in each of the areas identified in the report, with the aim of sensitizing society about AI systems;
- Coordination – coordinating among the responsible ministries/government agencies to implement national-level projects that expand the use of AI systems in India;
- Centres of Excellence – establishing interdisciplinary research centres to allow a deeper understanding of AI systems; establishing a universal, generic testing mechanism/procedure to test the performance of AI systems; setting up an interdisciplinary data integration centre to develop a self-contained AI machine that can work with multiple data streams; and providing information to the public in all the areas identified in the report through digital databases, markets and exchanges that make inter-sectoral data and information available.
AI and Liability
When artificial intelligence technology replaces human judgment, it can lead to a rise in claims raising complex questions about the cause of damage, legal obligations and liability. Autonomous (or intelligent) machines pose new challenges to our existing liability models, which are mainly based on causation. It is difficult to determine whether a machine behaved in a certain way because of its inherent complexity or because of learned behaviour. The assignment of "errors" or "defects" for liability purposes is therefore very difficult. The law must adapt to new technological developments.
Liability is essentially a sliding scale based on the degree of legal responsibility that society imposes on a person. Until relatively recently, the question of whether a machine should be responsible (and therefore liable) for its actions was trivial: a machine was simply a tool of the person using or operating it. There was no question of a machine requiring a certain level of personal responsibility, or even "personality", because it could not act autonomously or semi-autonomously.
The basic question of criminal law is that of criminal liability, i.e., whether a particular entity (person or corporation) bears criminal responsibility for a particular offence committed at a specific time and place. There are two main elements to imposing criminal liability on a person. The first is the external or factual element, i.e. criminal conduct (actus reus), while the other is the internal or mental element, i.e. knowledge or general intent with respect to the conduct element (mens rea). If either element is missing, no criminal liability can be imposed.
The actus reus requirement is mainly expressed through acts or omissions. Sometimes other external elements are required in addition to the conduct, e.g. the specific results of that conduct and the specific circumstances surrounding it. The mens rea requirement has several levels of mental element. The highest level is expressed through knowledge, sometimes accompanied by a requirement of specific intent. The lowest levels are expressed through negligence (a reasonable person should have known) or through strict liability.
CNBC reported an incident involving online "bots" in which a Swiss art group set up an "automated online shopping bot" with a weekly allowance of $100 worth of Bitcoin, an online cryptocurrency, to purchase random items from the "dark web", where shoppers can buy illegal or stolen goods. In January 2015, Swiss police confiscated the robot and its illegal purchases, but charged neither the robot nor the artists who designed it with a crime. We can soon expect cases of a similar nature to appear in criminal and civil courts.
Gabriel Hallevy has proposed that AI entities can fulfil the two requirements of criminal liability under three possible models of criminal liability:
- the Perpetration-by-Another liability model
- the Natural-Probable Consequence liability model
- the Direct Liability model.
The Perpetration-by-Another Liability (PBAL) Model: AI as Innocent Agents
The AI robot is viewed as an intermediary used as an instrument, while the party orchestrating the offence is the real perpetrator (hence the name, perpetration-by-another). The person who controls the AI, the perpetrator, is considered a principal in the first degree and is responsible for the conduct of the innocent agent. The perpetrator's liability is determined by that conduct and his own state of mind. The AI robot is an innocent agent.
This model is likely to apply in environments where a programmer has programmed an AI to commit a crime, or where a person controlling the AI has instructed it to commit one. This model would not be suitable if the AI robot decides to commit a crime based on its own experience or knowledge.
To give a concrete example: imagine a sophisticated aircraft AI that ejects its pilot from the cockpit, killing him. The perpetrator could be the software programmer who wrote the program with the specific instruction to kill the pilot. Another candidate could be the user of an AI system who specifically instructs the AI to follow a course of behaviour amounting to a crime, much like a person ordering his dog to attack a thief. The dog commits the attack, but the person who gave the order is considered the culprit.
The Natural-Probable-Consequence Liability (NPCL) Model: Foreseeable Offences
This criminal liability model implies that programmers or users are deeply involved in the daily activities of the AI robot but without the intention of committing a crime through the AI robot. For example, one scenario would be if an AI robot committed a crime while performing its daily tasks. This model is based on the ability of programmers or users to anticipate the possible commission of crimes. A person can be held responsible for a crime if the crime is a natural and probable consequence of that person’s behaviour.
Natural-probable-consequence liability appears legally appropriate in situations where an AI robot has committed a crime that the programmer or user neither knew of, intended, nor participated in. This liability model requires only that the programmer or user was negligent, nothing more. Programmers or users need not be aware of a forthcoming offence; they need only have known that such an offence was a natural and probable consequence of their actions.
Liability can be based on negligence and would be appropriate where a reasonable programmer or user should have foreseen the offence and prevented the AI robot from committing it. The AI takes action that is a "natural and probable" consequence of how it was programmed. Returning to our earlier example of the plane ejecting its pilot, the programmer under this model need not have any specific intention (or mens rea) to kill the pilot, only a state of criminal negligence: recklessly disregarding whether the programming given to the AI could lead to the pilot's ejection, where a reasonable person in the programmer's position could have foreseen the offence as a natural and probable consequence of the AI's programming. The NPCL doctrine is extremely problematic and has been widely discredited in many states (and in comparable jurisdictions such as the United Kingdom).
The Direct Liability (DL) Model: AI Robots as Subjects of Criminal Liability
The "direct liability" model assumes a certain degree of personal responsibility of the machine for its actions. For this model to work, the AI entity's self-awareness and its understanding of its actions and their consequences are essential. This is the scenario in which legal personhood is attributed to the AI. If the AI robot fulfils both the factual element (actus reus) and the mental element (mens rea), it is regarded as criminally liable in its own right.
The criminal liability of an AI robot does not replace the criminal liability of programmers or users where other legal doctrines impose liability on them. Criminal liability is not shared but cumulative: the criminal liability of the AI robot is imposed in addition to that of the programmer or human user.
If all specific requirements are met, any entity, whether a person, a corporation or an AI, may be subject to criminal liability. Modern times require modern legal measures to solve today's legal problems. The rapid development of AI technology demands current legal solutions to protect society from the potential threats of technologies not subject to the law, in particular criminal law. Criminal law serves the vital social function of maintaining social order for the benefit of society. Threats to this social order may come from individuals, corporations or AI entities. Humans have traditionally been subject to criminal law, except where international consensus has decided otherwise; thus minors and persons with mental illness are not subject to criminal law in most legal systems around the world. Although corporations in their modern form have existed since the 14th century, it took hundreds of years to subject them to the law, and especially to criminal law. For hundreds of years, the law held that corporations were not subject to criminal law, inspired by the Roman law maxim societas delinquere non potest.
AI entities are increasingly involved in human activities, just as corporations are. Offences have already been committed by or through AI entities. Corporations have no soul, and some AI entities have neither soul nor body. There is therefore no significant legal difference between the idea of criminal liability for corporations and for AI entities. It would be outrageous not to subject them to human law, as corporations have been. Models of criminal liability exist, as do general forms of punishment. What more is needed?
- What is the scope and status of AI in Indian Law?
- Can society impose criminal liability upon robots?
- If AI is criminally liable, how do you punish an AI robot?
- Buyers, J. (2015, January). Liability Issues in Autonomous and Semi Autonomous Systems. Retrieved from osborneclarke.com: http://www.osborneclarke.com/media/filer_public/c9/73/c973bc5c-cef0- 4e45-8554-f6f90f396256/itech_law.pdf
- Croft, J. (2016, October 6). Artificial intelligence disrupting the business of law. Retrieved from www.ft.com: https://www.ft.com/content/5d96dd72-83eb-11e6-8897-2359a58ac7a5
- de Souza, S. P. (2017, November 16). Transforming the Legal Profession: the Impact and Challenges of Artificial Intelligence. Retrieved from www.digitalpolicy.org: http://www.digitalpolicy.org/transforming-legal-profession-impact-challenges-artificialintelligence/
- Quinn Emanuel. (2016, December). Artificial Intelligence Litigation: Can the Law Keep Pace with The Rise of the Machines? Retrieved from www.quinnemanuel.com: https://www.quinnemanuel.com/the-firm/news-events/article-december-2016-artificialintelligence-litigation-can-the-law-keep-pace-with-the-rise-of-the-machines/
- European Commission. (2016, April 30). A European strategy on Cooperative Intelligent Transport Systems, a milestone towards cooperative, connected and automated mobility. Retrieved from European Commission.
- Friedman, D. (n.d.). http://www.daviddfriedman.com/. Retrieved from Artificial Intelligence: Legal Research: http://www.daviddfriedman.com/Academic/Course_Pages/21st_century_issues/21st_century_l aw/ArtificialIntelligence_Cannon_12.html
- Hallevy, G. (2010). The Criminal Liability of Artificial Intelligence Entities – from Science Fiction to Legal Social Control. Akron Intellectual Property Journal: Vol. 4 :Iss. 2 , Article 1., http://ideaexchange.uakron.edu/akronintellectualproperty/vol4/iss2/1/.
- Heyman, M. G. (n.d.). The Natural and Probable Consequences Doctrine: A Case Study in Failed Law Reform. Berkeley Journal of Criminal Law, Vol 15, Issue 2.
- Holley, P. (2016, January 20). Why Stephen Hawking believes the next 100 years may be humanity’s toughest test. Retrieved from The Washington Post: https://www.washingtonpost.com/news/speaking-of-science/wp/2016/01/20/why-stephenhawking-believes-the-next-100-years-may-be-humanitys-toughest-testyet/?noredirect=on&utm_term=.f8be9c411acb
- Karnow, C. E. (1996). Liability for Distributed Artificial Intelligences. Berkeley Technology Law Journal, Vol 11.1, 147.
- Khaleej Times. (2018, April 3). Hollywood star Will Smith ‘rejected’ by robot Sophia. Retrieved from Khaleej Times: https://www.khaleejtimes.com/region/saudi-arabia/Hollywood-star-Will-Smithrejected-by-robot-Sophia
 N.P. PADHY, ARTIFICIAL INTELLIGENCE AND INTELLIGENT SYSTEMS 3 (Oxford University Press 2005).
 ISAAC ASIMOV, I, ROBOT 40 (Doubleday 1950) [hereinafter ASIMOV, I, ROBOT]
 WILLIAM M. CLARK & WILLIAM L. MARSHALL, LAW OF CRIMES 1-2 (7th ed., 1967).
 Lawrence B. Solum, Legal Personhood for Artificial Intelligences. 70 N.C. L. REV. 1231 (1992).
 The apprehension that AI entities evoke may have arisen due to Hollywood’s depiction of AI entities in numerous films, such as 2001: A SPACE ODYSSEY (Metro-Goldwyn-Mayer 1968), and the modern trilogy, The Matrix, in which AI entities are not subject to the law.
 Isabelle Boucq, Robots for Business, available at http://www.Atelierus.com/emergingtechnologies/article /robots-for-business.
PR Newswire, Artificial Intelligence Market Forecasts, available at http://www.prnewswire.com/news-releases/artificial-intelligence-market-forecasts 300359550.html.