Artificial Intelligence and the law

By Kate Lowry

As Artificial Intelligence (AI) systems are increasingly used in legal practice, it is important to understand what that means for who the decision-maker is, and how it affects those using the Australian Courts.

Former Chief Justice Marilyn Warren recently commented that ‘it’s here, it’s happening!’ Chief Justice Warren made the comments in light of the recent decision from the Victorian Civil and Administrative Tribunal (VCAT) to launch a new Online Dispute Resolution Program that will apply to straightforward cases within the small claims division of the Tribunal.

The concept of AI-assisted dispute resolution conjures up images of science fiction characters like Judge Dredd, but while AI in the law and the courts is becoming more and more common, the reality is much less extreme.

It is now inevitable that AI will take over many of the tasks and roles currently performed by humans, and the law is no exception. There is a very real perception and fear in society that AI can probably do our jobs better than we can, and when it comes to rule-book decision making, that fear is based in reality. AI systems are more adept at applying rules consistently and are less likely to be swayed by inherent and unconscious bias. AI is already permeating much of our society, replacing traditionally human-operated systems.

With respect to the courts, AI can help in the legal process in many ways. While it is tempting to fear what this might mean, and to dismiss its real value, it is better to understand what it can achieve for the legal profession.

Before we delve into whether AI can make a legal decision, it is interesting to consider whether AI can make what is regarded as an intrinsically human decision – an ethical decision – at all. In a recent presentation to the AI Now Institute, Professor Rob Sparrow from Monash University discussed common perceptions of how AI can be ethical in exercising power. Sparrow put forward the question: is it the people who design the AI who need to be ethical, or do we need to build ethics into the machine?

Sparrow argues that proponents of the latter idea, which places the ethical responsibility on the machine, ‘operate on an impoverished account of what it means to be ethical’. Ethics, Sparrow argues, is not founded on one universal principle, but changes depending on the ethical theory put forward. AI could be seen from a Utilitarian, Kantian, or Aristotelian viewpoint. The first two theories, Sparrow explains, could actually make AI better ‘ethical decision makers’ than humans because they are bound by calculations and rules. But, he argues, an Aristotelian virtue ethics standpoint requires that ethics ‘is closely connected with our embodiment’. To be ‘ethical is to behave and feel and react in certain ways that only creatures with a particular biology are capable of.’ In other words, it is our humanity which makes us ethical.

So, if AI can’t make a truly ethical decision on its own without programming by a human of some particular ethical theory, can AI make appropriate legal decisions when resolving disputes? Does dispute resolution require some decision-making factor that only humans are capable of?

Artificial Intelligence in the legal domain

There are a number of branches of research into AI that should be investigated in order to understand the possible roles of AI in the law. One possibility is that AI can help in legal reasoning, specifically in the dispute resolution process.

Decision Support Systems

Decision Support Systems are rule-based systems that can be used as a tool to model legal concepts. They can be used in any knowledge-based environment and can be developed in such a way as to provide conclusions in certain scenarios. This means that if certain conditions are fulfilled, then one or more conclusions will be true. These kinds of tools are extremely useful in complex areas of law, especially when one considers the huge amount of information lawyers and legal representatives need to analyse to make decisions. Using such tools can be an extremely efficient way of analysing relevant facts offered by parties, as well as legal information, norms, and prior case law, in making simple legal decisions. These decisions are still monitored by human experts, and as such the systems do not generate automated outcomes, but instead issue justified recommendations and compile information useful to the decision-making process. Such human safeguards for decision-making are known as Human-in-the-Loop (HITL). HITL is an approach to AI that leverages human and machine intelligence to create models for decision-making.
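To make the ‘if certain conditions are fulfilled, then a conclusion holds’ structure concrete, here is a minimal sketch of such a system. Every rule, fact and threshold below is invented purely for illustration; no real court or tribunal uses these particular rules.

```python
# A minimal sketch of a rule-based decision support system.
# Every rule, fact and threshold here is a hypothetical example,
# not drawn from any real court or tribunal.

from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    conditions: list          # predicates over the case facts
    conclusion: str

RULES = [
    Rule(name="small-claim-eligible",
         conditions=[lambda f: f["claim_value"] <= 10_000,
                     lambda f: f["dispute_type"] == "goods_and_services"],
         conclusion="Matter may suit the small claims division."),
    Rule(name="mediation-first",
         conditions=[lambda f: not f["prior_mediation_attempted"]],
         conclusion="Recommend mediation before a hearing is listed."),
]

def recommend(facts: dict) -> list:
    """Fire every rule whose conditions all hold, returning justified
    recommendations for a human decision-maker to review (HITL):
    the system advises, it never decides."""
    return [f"[{rule.name}] {rule.conclusion}"
            for rule in RULES
            if all(cond(facts) for cond in rule.conditions)]

case_facts = {"claim_value": 4_500,
              "dispute_type": "goods_and_services",
              "prior_mediation_attempted": False}
for line in recommend(case_facts):
    print(line)
```

Note that each recommendation names the rule that triggered it, so the human reviewer can audit exactly why each suggestion was made.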

Decision Support Systems have been used in the Family Court of Australia to assist judges, registrars, mediators and lawyers with making predictions about the distribution of marital property following a divorce. Their continued use can only help to reduce the time that parties need to be engaged with the Court. This would be beneficial for the parties involved and for alleviating the backlog of cases the Court hears each year.

Expert systems

Expert systems are computer programs that have been programmed in such a way that allows them to function at, or sometimes even above, the level of a human expert in a certain field. This is again a HITL approach, in this case typically built on machine learning: the systems are trained, fine-tuned and tested by humans in a way that gives them a deep knowledge of a certain topic and allows them to perform at this high level. Expert systems try to mimic human expertise and knowledge. An expert system in this sense is able to deal with information through analysis, to then generate knowledge, and then take actions on that knowledge in a way that resembles a decision made by a human expert.
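As a toy illustration of analysis generating knowledge and then action, the following sketch chains simple rules over known facts and records why each conclusion was reached – the kind of explanation trace that would let a human expert audit the output. The rules and facts are hypothetical.

```python
# A toy forward-chaining expert system with an explanation facility.
# All rules and facts are hypothetical illustrations.

RULES = [
    # (premises, conclusion)
    ({"contract_signed", "goods_not_delivered"}, "breach_of_contract"),
    ({"breach_of_contract", "loss_suffered"}, "damages_recoverable"),
]

def infer(facts: set) -> dict:
    """Forward-chain until no new conclusions appear; return a map
    of each derived conclusion to the premises that produced it."""
    explanations = {}
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                explanations[conclusion] = sorted(premises)
                changed = True
    return explanations

# The explanation trace is what makes the output auditable by a lawyer.
for conclusion, reasons in infer({"contract_signed",
                                  "goods_not_delivered",
                                  "loss_suffered"}).items():
    print(f"{conclusion} <- because {', '.join(reasons)}")
```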

For the legal profession, ideally, an expert system would be a tool that can assist in the processing of huge amounts of information, automating simpler tasks and allowing legal practitioners to work more efficiently. Such an expert system would detail the reasons for the specific analysis or recommendation provided, and could compare the facts and issues in the case to other programmed cases. This would be a huge step forward in the notoriously slow legal process. And while these recommendations should only be treated as informative and not as decisive, they could allow judges to deal with cases more quickly by providing guidance on the case law, facts and norms. No current system can operate at this level yet, but that is not to say that it isn't imminent. However, some argue that this rule-based approach is insufficient for legal practice and should instead be combined with a case-based system.

Case-based systems

The logic behind a case-based reasoning system is that if a new problem is similar to an old one, it will have a similar outcome. The process requires the ability to identify and select between the unique and similar attributes of a case. It also requires a decision-making system to consider each attribute and assign it a weight in accordance with its importance in affecting the decision. The process involves four sequential phases: Retrieve, Reuse, Revise and Retain.

In the first phase, the problem must be analysed, and then relevant cases are retrieved from the memory and ordered according to similarity. The second phase, Reuse, will see the solutions from the previous cases mapped to the new problem. Of course, there is unlikely to be any case that exactly replicates the new case. Nonetheless, there will be enough similarity through a selection of cases to obtain a recommendation.

The third phase, Revise, sees the solutions tested to determine whether the results are sufficient, and revised if necessary. The last phase, if the solution is adopted by the human decision-maker, sees the solution stored in the case memory, which will, in turn, ‘enrich’ the case memory. This system is adaptable to the legal domain, as it uses a reasoning method similar to the one a legal professional uses when assessing past cases and legal precedent. While case-based systems are still in the research and development stages, they are already one of the most commonly used approaches for the development of intelligent learning systems. Their application in the Courts could potentially reduce the workload of legal professionals.
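A minimal sketch of the four-phase cycle might look like the following. The case attributes, weights and stored outcomes are invented purely to illustrate the Retrieve, Reuse, Revise and Retain steps.

```python
# A toy Retrieve-Reuse-Revise-Retain cycle. Attributes, weights
# and past cases are hypothetical illustrations, not real data.

case_memory = [
    {"attrs": {"claim_value": 5000, "written_contract": 1, "delay_months": 2},
     "outcome": "settled at mediation"},
    {"attrs": {"claim_value": 50000, "written_contract": 0, "delay_months": 12},
     "outcome": "proceeded to hearing"},
]

# Each attribute is weighted by how strongly it is thought to drive the outcome.
WEIGHTS = {"claim_value": 0.5, "written_contract": 0.3, "delay_months": 0.2}

def similarity(a: dict, b: dict) -> float:
    """Weighted similarity in [0, 1]; 1.0 means identical on every attribute."""
    score = 0.0
    for attr, weight in WEIGHTS.items():
        scale = max(abs(a[attr]), abs(b[attr]), 1)
        score += weight * (1 - abs(a[attr] - b[attr]) / scale)
    return score

def retrieve(new_attrs: dict) -> list:
    """Phase 1 (Retrieve): rank past cases by similarity to the new problem."""
    return sorted(case_memory,
                  key=lambda case: similarity(new_attrs, case["attrs"]),
                  reverse=True)

new_case = {"claim_value": 6000, "written_contract": 1, "delay_months": 3}

best = retrieve(new_case)[0]                # Retrieve: most similar past case
proposal = best["outcome"]                  # Reuse: map its solution across
print(f"Recommended outcome: {proposal}")   # Revise: a human tests and adjusts
case_memory.append({"attrs": new_case,      # Retain: once adopted, store the
                    "outcome": proposal})   # solution, enriching case memory
```

In a real system the Revise step would involve the human decision-maker amending the proposed solution before it is retained; here it is collapsed to a single recommendation for brevity.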

Multi-agent systems

Multi-agent systems are much more complex, and while they are not at any implementation stage, they do give some cause for concern if applied to the legal context. These systems define the agents within them not merely as individual agents with certain properties, but as a set of agents working within an environment, with interactions between them. This, it is argued, replicates how humans make decisions: when a human – an agent – makes a decision, we make associations with our environment, in the context of society and so on.

To define even a weak idea of an agent, that agent must be able to (1) operate without the direct intervention of a human and make its decisions in an autonomous way; (2) interact with other agents; (3) perceive the environment it is within and respond to stimuli; and (4) take the initiative of pursuing its own goals.

It is argued that these criteria address the complexity required for intelligent behaviour of an agent in and out of communities. The objective of these Multi-agent systems is to develop agents that are able to make autonomous assessments, that once combined with other agents’ assessments, will lead to communities of agents that can emulate intelligent behaviour.
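As a rough sketch, the four criteria of a weak agent map naturally onto a software interface. Everything below – the environment, goals and message format – is an invented skeleton, not a real multi-agent framework.

```python
# A skeleton "weak" agent satisfying the four criteria above.
# Environment, goals and message format are hypothetical.

class Agent:
    def __init__(self, name: str, goal: str):
        self.name = name
        self.goal = goal            # (4) pursues its own goals
        self.inbox = []

    def perceive(self, environment: dict) -> dict:
        """(3) Perceive the environment; ignore absent stimuli."""
        return {k: v for k, v in environment.items() if v is not None}

    def send(self, other: "Agent", message: str) -> None:
        """(2) Interact with other agents."""
        other.inbox.append(f"{self.name}: {message}")

    def decide(self, percepts: dict) -> str:
        """(1) Decide autonomously, with no direct human intervention."""
        if self.goal in percepts:
            return f"act on {self.goal}"
        return "gather more information"

# Two agents whose combined assessments form a tiny community.
assessor = Agent("assessor", goal="settlement_offer")
negotiator = Agent("negotiator", goal="hearing_date")
percepts = assessor.perceive({"settlement_offer": 4000, "hearing_date": None})
assessor.send(negotiator, assessor.decide(percepts))
print(negotiator.inbox)   # ['assessor: act on settlement_offer']
```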

The use of Multi-agent systems in the legal domain adds a different kind of value from the other systems mentioned above. The argument goes that Multi-agent systems can instil differing virtues or values into the agents in order to model the emotional responses of parties in a particular dispute resolution. However, as this article first explored, is it right, or even possible, to instil virtue and ethical decision-making capability into AI? Sparrow, as we saw above, argues no.

AI Now? Is it practical or possible to use AI in the legal system?

There is an uncomfortable feeling that goes along with the idea of computers making real decisions about people’s lives. Nonetheless, the use of AI in the dispute resolution process does more good than harm. While the use of AI is still in its early stages and is not yet sophisticated enough to make ethical decisions, AI is an extremely useful and practical tool for the legal system, and especially for dispute resolution. The Australian Courts are notoriously slow, and the use of AI has the potential to improve timeliness. This is obviously a positive development, as it would allow disputants to access justice, and move on with their lives, much more quickly.

To put this in context, on 10 June 2018, Australia’s Attorney-General Christian Porter appeared on the ABC’s Insiders Program to discuss the recent changes to combine the Federal and Family Courts in Australia. Porter admitted that ‘waiting times have blown out in the family law courts … and that causes a great deal of angst for families … the best thing we can do for families is get them in and out very quickly so they can get on with living their lives.’

While the Government, in this case, was discussing the reduction and simplification of onerous jurisdictional issues, the point remains the same. Systems are needed now to reduce the amount of time parties spend in the courts. By introducing rule-based AI systems, such as the decision support, expert and case-based systems described above, the courts can reduce the onerous and time-consuming processes legal professionals undertake. Reducing this burden will not only facilitate a cheaper and quicker legal system, but also a more just one.

And while some may feel squeamish about AI making decisions that can affect their lives in sometimes dramatic ways, they should rest assured that qualified legal professionals will not be replaced anytime soon. It is unlikely that AI will ever be able to replicate human emotion and ethical consideration. That, as Sparrow so vehemently argues, is something a being can have only by virtue of being human.

About Kate Lowry
Kate Lowry holds a Master of Arts in Philosophy from Monash University and is currently studying a Master of Laws (Juris Doctor) at Monash University. She is the Chief Operating Officer for the ARC Centre of Excellence for Mathematical and Statistical Frontiers at The University of Melbourne and has a strong interest in Mathematics and Statistics as useful tools for regulatory and legal decision making.

Follow Kate on LinkedIn and Twitter

Featured image by Alex Knight on Unsplash
