In the era of robotics and artificial intelligence, the legal system must increasingly struggle with creating frameworks for assigning responsibility. JL
Jeremy Elman and Abel Castilla report in TechCrunch:
The legal system’s interactions with software like robotics find liability only where the developer was negligent or could foresee harm. But in reinforcement learning, there’s no fault by humans and no foreseeability of such an injury, so tort law would say the developer is not liable. That will pose Terminator-like dangers if AI keeps proliferating with no responsibility. AI by design is artificial, and thus ideas such as liability or a jury of peers appear meaningless. But the question is whether AI should be liable if something goes wrong and someone gets hurt.
Laws govern the conduct of humans, and sometimes the machines that humans use, such as cars. But what happens when those cars become human-like, as with artificial intelligence that can drive cars? Who is responsible when the AI violates the law?
This article, written by a technologist and a lawyer, examines the future of AI law.
The field of AI is in a sort of renaissance, with research institutions and R&D giants pushing the boundaries of what AI is capable of. Although most of us are unaware of it, AI systems are everywhere, from bank apps that let us deposit checks with a picture, to everyone’s favorite Snapchat filter, to our handheld mobile assistants.
Currently, one of the next big challenges that AI researchers are tackling is reinforcement learning, a training method that allows AI models to learn from their past experiences. Unlike other methods of generating AI models, reinforcement learning feels closer to sci-fi than reality. With reinforcement learning, we create a grading system for our model, and the AI must determine the best course of action in order to get a high score.
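To make the "grading system" idea concrete, here is a minimal sketch of tabular Q-learning, a textbook reinforcement-learning algorithm. The toy corridor environment, the reward values, and the hyperparameters are illustrative assumptions for this post, not anything from the article:

```python
import random

# A minimal sketch of tabular Q-learning on a toy five-cell corridor.
# The environment, reward function, and hyperparameters are all illustrative.

N_STATES = 5          # positions 0..4; position 4 is the goal
ACTIONS = (-1, +1)    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

def reward(state):
    # The "grading system": score 1 for reaching the goal, 0 otherwise.
    return 1.0 if state == N_STATES - 1 else 0.0

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        # Nudge the value estimate toward reward plus discounted future value.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (reward(s_next) + GAMMA * best_next - Q[(s, a)])
        s = s_next

# After training, the greedy policy in every non-goal state is to move right.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```

The key point for the legal discussion is that the developer writes only the scoring rule; the behavior that maximizes the score is discovered by the model itself.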
Research into complex reinforcement learning problems has shown that AI models are capable of finding varying methods to achieve positive results. In the years to come, it might be common to see reinforcement learning AI integrated with more hardware and software solutions, from AI-controlled traffic signals capable of adjusting light timing to optimize the flow of traffic to AI-controlled drones capable of optimizing motor revolutions to stabilize videos.
How will the legal system treat reinforcement learning? What if the AI-controlled traffic signal learns that it’s most efficient to change the light one second earlier than previously done, but that causes more drivers to run the light and causes more accidents?
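In machine-learning terms, the traffic-signal scenario is a reward-misspecification problem: if the grading system scores only traffic flow, the agent can raise its score in ways the designer never foresaw. Here is a hedged sketch of that gap; the throughput and accident numbers are invented purely for illustration:

```python
# Illustrative only: how a reward that ignores safety can favor a harmful policy.
# The throughput/accident figures below are made up for the example.

policies = {
    "normal_timing":   {"cars_per_hour": 1000, "accidents_per_hour": 0.01},
    "switch_1s_early": {"cars_per_hour": 1050, "accidents_per_hour": 0.50},
}

def naive_reward(stats):
    # Grades only traffic flow; this is the foreseeability gap in the article.
    return stats["cars_per_hour"]

def safety_aware_reward(stats, accident_cost=1000):
    # Explicitly penalizes accidents so the optimizer cannot trade safety for flow.
    return stats["cars_per_hour"] - accident_cost * stats["accidents_per_hour"]

for name, stats in policies.items():
    print(name, naive_reward(stats), safety_aware_reward(stats))
# Under naive_reward the early switch scores higher (1050 vs. 1000);
# under safety_aware_reward it scores lower (550 vs. 990).
```

No human chose the dangerous timing; it simply scored higher under an incomplete objective, which is exactly why foreseeability is so hard to establish.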
Traditionally, the legal system’s interactions with software like robotics find liability only where the developer was negligent or could foresee harm. For example, in Jones v. W + M Automation, Inc., a 2007 case from New York state, the court did not find the defendant liable where a robotic gantry-loading system injured a worker, because it found that the manufacturer had complied with regulations.
But in reinforcement learning, there’s no fault by humans and no foreseeability of such an injury, so traditional tort law would say that the developer is not liable. That certainly will pose Terminator-like dangers if AI keeps proliferating with no responsibility.
The law will need to adapt to this technological change in the near future. It is unlikely that we will enter a dystopian future where AI is held responsible for its own actions, given personhood and hauled into court. That would assume that the legal system, developed over 500 years of common law in courts around the world, could adapt to the novel situation of an AI defendant.
An AI is by design artificial, and thus ideas such as liability or a jury of peers appear meaningless. A criminal courtroom would be incompatible with AI (unless the developer intended to create harm, which would be its own crime).
But really the question is whether the AI should be liable if something goes wrong and someone gets hurt. Isn’t that the natural order of things? We don’t regulate non-human behavior, like that of animals, plants, or other parts of nature. Bees aren’t liable for stinging you. Given the limits of the court system, the most likely reality is that the world will need to adopt a standard for AI under which manufacturers and developers agree to abide by general ethical guidelines, such as through a technical standard mandated by treaty or international regulation. And this standard will be applied only when it is foreseeable that the algorithms and data can cause harm.
This likely will mean convening a group of leading AI experts, such as those at OpenAI, and establishing a standard that includes explicit definitions for neural network architectures (the structures that determine how an AI model is trained and how its outputs are interpreted), as well as quality standards to which AI must adhere.
Standardizing what the ideal neural network architecture should be is somewhat difficult, as some architectures handle certain tasks better than others. One of the biggest benefits that would arise from such a standard would be the ability to substitute AI models as needed without much hassle for developers.
Currently, switching from an AI designed to recognize faces to one designed to understand human speech would require a complete overhaul of the associated neural network. While there are benefits to creating an architecture standard, many researchers would feel limited in what they could accomplish while adhering to it, and proprietary network architectures might remain common even once the standard exists. But it is likely that some universal ethical code will emerge, conveyed formally or informally through a technical standard for developers.

The concern for “quality,” including avoidance of harm to humans, will increase as we start seeing AI in control of more and more hardware. Not all AI models are created equal: two models created for the same task by two different developers will work very differently from each other. Training an AI can be affected by many things, including random chance. A quality standard would ensure that only AI models trained properly and working as expected make it into the market.
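One way to picture such a quality standard is as an automated acceptance test a model must pass before deployment. The metrics, thresholds, and interface below are hypothetical illustrations for this post, not anything proposed in the article or by any standards body:

```python
# Hypothetical sketch of a pre-deployment quality gate for an AI model.
# The metrics, thresholds, and EvalReport fields are invented for illustration.

from dataclasses import dataclass

@dataclass
class EvalReport:
    accuracy: float               # fraction correct on a held-out test set
    worst_group_accuracy: float   # accuracy on the hardest subpopulation
    harmful_action_rate: float    # fraction of test scenarios ending unsafely

STANDARD = {
    "min_accuracy": 0.95,
    "min_worst_group_accuracy": 0.90,
    "max_harmful_action_rate": 0.001,
}

def certify(report: EvalReport) -> bool:
    """Return True only if the model meets every requirement of the standard."""
    return (
        report.accuracy >= STANDARD["min_accuracy"]
        and report.worst_group_accuracy >= STANDARD["min_worst_group_accuracy"]
        and report.harmful_action_rate <= STANDARD["max_harmful_action_rate"]
    )

print(certify(EvalReport(0.97, 0.92, 0.0005)))  # True: passes every threshold
print(certify(EvalReport(0.97, 0.80, 0.0005)))  # False: fails on the worst group
```

The legal significance of a gate like this is that it restores foreseeability: a developer who ships a model that never passed the agreed checks can again be judged negligent under ordinary tort principles.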
For such a standard to actually have any power, we will most likely need some sort of government intervention, which does not seem too far off, considering recent talks in the British Parliament regarding the future regulation of AI and robotics research and applications. Although no concrete plans have been laid out, Parliament seems conscious of the need to create laws and regulations before the field matures. As the House of Commons Science and Technology Committee stated, “While it is too soon to set down sector-wide regulations for this nascent field, it is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begins now.” The document also mentions the need for “accountability” when it comes to deployed AI and the associated consequences.
Comments:
Personally, I think machine learning for solving legal problems is not very promising, and (together with my brother) I promote a different approach on the pages of this site. However, it has one difficulty: it assumes the closest collaboration between lawyers and programmers. Is that possible in principle? Even if the authorities were to lock a lawyer and a programmer in one room, what would they talk about? Is there a subject about which both have an equally good understanding, and from which they could begin to build a dialogue? After all, starting from about 7th grade, when the frightening word “algebra” first sounded, future programmers began studying formal systems while ignoring the humanities, and future lawyers, on the contrary, avoided mathematics and programming like fire. Later, over the course of university study and professional work, this “cultural barrier” only grew.
We tried to implement a similar strategy in our company, but nothing came of it. Therefore, I do not believe it is possible.
Not only does the high cost of lawyers prohibit individuals from receiving justice, but many people do not know how to obtain it or are biased against lawyers, so they do not hire anyone. Although I like the ROSS concept, there are certainly easier ways to get started.