How to regulate robots?
Facing crucial ethical issues, such as the use of automated weapons, the existence of intimate relationships between human beings and robots, or the plausible dangers linked to automated cars, there is an evident and increasing need for clearer regulation of robots.
What liability for robots under French law?
Progress in artificial intelligence and the emergence of consumer robotics raise questions about current and future regimes of liability for robots, at both the French and European levels.
So far, European legal systems have not taken the full measure of how robotics is about to disrupt them, as it has disrupted science in general. They all deal with such issues, in one way or another, through existing liability regimes that they adapt to the reality of robots. But no country in Europe has yet passed and enforced a bill that specifically deals with robots’ liability as a whole. In practice, this means that robots are treated as things, and the person having control over them is liable when they cause damage. French law is no exception on that matter.
The notion of a robot is ambiguous, as it can encompass very different realities. When it comes to legal responsibility, however, the main point of focus is the degree of autonomy. Some legal experts draw a line between robots depending on how “strong” their AI is. At the current stage of robotic development, general liability regimes seem sufficient as long as the robot remains under control. But scientific progress towards greater autonomy will sooner or later call for a made-to-measure liability regime for robots.
A satisfactory yet soon insufficient liability regime
Under French law, several regimes can be applied to robots when they cause harm. First, the regime of liability for the actions of things (“responsabilité du fait des choses”) could be of great help to any judge facing such a legal issue. It states that the custodian of an object that causes damage is held liable if he has the use, direction and control of it. It would potentially be applicable to robots, even if the custody is intellectual rather than material. Under this regime, since most robots do not have a sufficient degree of autonomy, the custodian would probably be held responsible in most cases.
Second, in the event of harm caused by a robot, the producer could also be held liable under the regime of defective products. This regime engages the producer’s liability when the product has a defect, i.e. when it does not offer the safety one can legitimately expect from it. This rather broad definition means robot producers could face legal proceedings if their machines are proven defective in a specific instance. The only way they could limit their liability would be to provide as much information as possible on the robot’s inherent risks. Note that this reasoning could easily be extended to the person who designed the AI system, as well as to the one who developed the software integrated into the machine. But again, liability could here be reduced or transferred to the owner or the user.
French courts facing harm caused by a robot could rely on one of these regimes to build case law on robots’ liability. These regimes seem sufficient to deal with a few basic situations; indeed, there is no real problem so far, because robots still seem to do what they have been programmed to do. But it is unclear how long they will remain sufficient, and countries therefore need to anticipate and focus on creating a dedicated liability regime.
Looking for a suitable legal regime
Many countries across Europe have launched discussions and consultations around the idea that robots could be granted legal personality. The European Parliament has also taken that path with its recent draft report with recommendations to the Commission on Civil Law Rules on Robotics, published on May 31st, 2016 under the supervision of MEP Mady Delvaux.
The European Parliament in Strasbourg
The idea is to create for robots a new legal fiction like the one that exists for companies, associations, foundations, etc. Robots would then have rights and obligations and, more importantly, an identity and an estate. However, even if such a creation is not legally difficult to imagine, it does not really make sense. Legal personality is not meant to regulate the legal consequences of actions taken by a certain category of objects; that is certainly why animals were never granted one. In addition, it could soften the liability currently borne by producers and users of robots. What we need in a soon-to-be robotic world is a proper liability regime.
A few legal experts have proposed adapting to robots the liability regime that exists for animals. Animals and robots are admittedly different, as the former have feelings and a form of natural intelligence while the latter do not. Still, the regime of the new article 1243 of the French Civil Code could be a great inspiration: the owner of an animal, or the person using it, is liable for the damage the animal causes, whether the animal was in its custody or had strayed or escaped. The absence of proper control by the owner or user over the robot would then be resolved by analogy with the situation where the animal escapes its master. Through such an objective liability regime, the robot’s custodian (owner or user), even if not at fault, would have to answer for the damage caused by the machine. In practice, this means the issue of autonomy would be overcome.
At the international level, a disparity of approaches
Around the world, robotics has opened economic opportunities while also raising legal issues and legislative debates. Diverse legal systems face two main questions.
The first concerns the responsibility of robots in industry. The question is whether there is a transfer of responsibility from the user to the robot manufacturer. It underlines the difficulty of identifying the accountable person in case of malfunction or damage caused by the robot. The debate becomes even more complex if robots themselves become accountable, as it would be difficult to obtain satisfactory reparation from a robot, or to hold one responsible if it is destroyed after a malfunction.
The second concerns the legality of robots. The development of robots’ autonomy gives them new abilities. In the case of military robots, the issue raised is the legality of robots that could be harmful: there is an ongoing debate over whether it is legal for a robot to harm human life or the environment. Moreover, regarding civil society and domestic use, robots raise legal debates on individual liberties, since human dignity, privacy and professional or medical secrecy can be infringed by their introduction.
A Sword device
From a global perspective, robots may be regulated by international law and humanitarian law. However, there is a great disparity among countries around the globe. The US, Japan and South Korea appear as pioneers in developing appropriate legislation for robots. According to Professor Michael Schmitt, the use of robots is limited by humanitarian law in the US. By contrast, European countries still take a vague approach to this issue. Within the EU, the RoboLaw project can be mentioned; it reached the following conclusion: robots are often considered “exceptional”, which leads to questioning existing rules.
Finally, at the level of international organizations, a debate has emerged around the legality of military robots. Based on the Convention on Certain Conventional Weapons, the UN has questioned whether the use of these robots should be banned as inhuman weapons, like flamethrowers or chemical weapons. According to the Red Cross, which is an observer at the UN, the decision to kill and destroy must remain under human responsibility.
ATLAS: US future military robots
ZOOM: the European approach
Contrary to common belief, existing laws mean that a robot is regulated from the very moment it materializes (cf. Directive 85/374/EEC). Most current robotic applications are to be considered “products” and, following the European Defective Product Directive or the US Restatement (Third) of Torts, should be addressed as such. Hence lawmakers must shift the question from whether robots should be regulated to how regulation should be structured.
However, a number of obstacles must be considered before applying or rethinking regulation, including the following: should we adopt a functional approach to regulation? If robots are to be considered mere products, are liability rules appropriate for the AI revolution? Should we shift from liability to efficient risk management? How should we implement robot regulation that will not weaken or stifle innovation?
Further, since robots are different and plural (robot companions, for example, are very different from autonomous cars), defining “robot” with a single legal concept does not seem possible. There is therefore a need to focus on problems as they emerge, with a case-by-case approach, and perhaps not apply the same solutions to robots of different kinds.
Resolving the issue of legal plurality by adopting a case-by-case approach
As we have seen, there are more differences between classes of applications than relevant similarities. Furthermore, even when there are similarities between robots or cases, the same legal issues may require different technical and legal solutions. Thus, in order to regulate robots, it seems necessary to focus on the problems as they emerge when trying to apply existing rules.
How to regulate: the RoboLaw approach
RoboLaw is financed by the European Commission within FP7 and addresses the question of how to regulate robotics from a different perspective than previous methodologies. RoboLaw makes clear that the term “robot”, derived from science fiction, should not be used for various applications with more differences than common traits. This implies renouncing the development of any uniform solution, set of rules or code created for robots grouped into a single category, as Isaac Asimov, for example, imagined in the 1940s.
The central issue of liability: strict, absolute? Producer, owner?
The most problematic issue is liability, or in other words: who pays for the damage when something goes wrong? Beyond being a crucial ethical question, liability is closely linked to innovation: liability rules determine incentives for the development of new products. In the production of robots, ex ante certainty is necessary, and any attempt to produce effective regulation of robotics can have a heavy impact on innovation in Europe. Hence the need for an innovative approach to regulating innovations, especially in the case of robots. A legal framework with ex ante certainty would let innovators and producers know beforehand who is liable, and would strengthen the incentives to innovate even more.
Regulating robots by making liability clear requires deciding once and for all whether robots should have an electronic personality: it would be simpler and more plausible to consider that they should not. In that case, when damage arises from the use of a robot, under strict or even absolute liability, only human beings can be held liable and responsible: either the user/owner or the producer. But which one? Directive 85/374/EEC provides that the producer is liable for all damage resulting from a production defect, a warning defect or a design defect. Even though such a designation has a rather negative impact on the innovation market, threatening innovators and producers to some extent, the purpose of this rule is twofold: first, to incentivize safe design and, second, to provide prompt compensation to victims without the need to prove fault, since the liability is not only strict but absolute. Choosing absolute liability allows a simpler judicial process: the plaintiff does not need to prove the producer’s responsibility or fault in making a defective robot, but simply needs to show the causal link between his injury and the defect in the product, in this case the robot.
Nonetheless, there are three preliminary grounds to doubt the effectiveness of EU product liability rules: first, there is very little litigation compared to the United States, even though the products are basically the same; second, claimants are rarely successful, which suggests that strict liability is not sufficient to protect consumers from defective robots; and third, litigation is both complex and expensive in Europe.
Fields that are today regulated by other liability rules are likely to shift towards producer liability rules
The two product fields most likely to shift towards producer liability rules are medical robots, with the risk of medical malpractice (for example, IBM’s Watson), and vehicles, including driverless vehicles, with the risk of traffic accidents.
Addressing these specific regulation issues
While keeping in mind that it is necessary to separate beforehand the issue of safety from that of compensation, and following Dr Bertolini, who has presented research on the regulation of robotics to the relevant committee of the European Parliament, three main solutions are to be considered:
- Foster technological standardization: develop EU standards in robotics (all major world economies will race to identify technological standards to take leadership of the market), develop narrowly tailored, frequently updated technological standards for specific kinds of applications, and lastly develop a European Robotics Agency;
- Allocate liability to the party best placed to minimize costs, rather than to whoever is “at fault”: clear-cut liability rules make the risk foreseeable; prefer absolute liability, on either the user or the producer (with no defense), which would reduce costs for everyone; and create compulsory insurance schemes, so that the liable person can estimate and redistribute the cost of accidents to consumers through price mechanisms;
- Create European regulations for robot testing in real-life scenarios: the European Union is already lagging behind countries such as Japan or the US in testing robots in real-life conditions, and there is a need to test robots outside the lab, where they interact with humans. This can prove very difficult, because regulations vary widely between European countries and no clear rules exist. However, some regulation could be very beneficial and could also help identify risks, and thus how to insure against and avoid them.