Regulating artificial intelligence - The tortoise and the hare

Artificial Intelligence (AI) programmes have boomed in recent years, yet the technology remains a long way from its apex.

Whilst the concept of AI covers a wide range of technological applications, the difficulty arises when trying to regulate, and apply the law to, the actions and consequences of machine-learning processes that operate increasingly without human influence.

To understand the issues, it helps to look at how AI is applied in today’s world: self-driving technology in cars; text and image analysis; speech analysis, which has paved the way for Amazon’s Echo and Apple’s Siri; and data mining, which allows vast amounts of information about a person’s preferences and habits to be gathered and used for targeted services.

A significant challenge for law-makers will come when they assess how to deal with circumstances when AI produces undesirable outcomes, or worse, leads to the harm of a person or property.

Liability is usually determined by an assessment of fault, applying causation principles. If cause can be established, blame can be attributed; however, the law will generally expect the identified fault or defect to have caused the loss for which compensation is sought.

How the law treats the decisions and actions of AI machines, when their processes are increasingly removed from human intervention and instead rely on machine-learning principles, is a challenge for legislators and judges. If a machine’s decision cannot be traced back to a human error in its programming or operation, what then?

That question remains largely unanswered, though some governments are thinking further ahead than others. The UK government is deferring the difficulties posed by AI in driverless cars by looking to insurers to bridge the gap where an autonomously operated vehicle has caused an accident.

Under proposals contained in the Automated and Electric Vehicles Bill, currently being debated in the House of Lords, the insurer of the vehicle would be treated as being at fault in those circumstances, and the onus would be on the insurer to take recovery action where, for example, a third party was contributorily negligent or a manufacturer’s defect was involved in the incident.

This may suffice as a stop-gap solution, but the question of causation will need to be broached in the near future. The Estonian government is taking a bolder approach by considering giving AI-controlled machines a unique legal status. One proposal under consideration would create the term “robot-agent”, a status somewhere between a separate legal personality and an object that is someone else’s property. Such an approach would likely require a significant re-calibration of established legal principles to take account of how this new legal personality interacts with its surroundings.

The development and regulation of AI is likely to be a hot topic for years to come, and as the technology becomes more sophisticated it will be imperative for the law to keep up.

The contents of this article are intended for general information purposes only and shall not be deemed to be, or constitute, legal advice. We cannot accept responsibility for any loss arising from acts or omissions taken in respect of this article.