Experts raise concerns on AI in law and government

A US artificial intelligence think tank has raised serious concerns about the use of the technology in government, particularly in the legal system.

Artificial intelligence (AI) is the use of computers to simulate human thought. AI is increasingly being used to make business decisions, most notably in robotic process automation (RPA) systems, which are growing in popularity in Australia.

The AI Now Institute at New York University has released its 2017 report, which focuses on the use of AI in government and the law. It cautions against the unrestrained use of AI by government agencies, because such systems are neither transparent nor subject to an individual’s right to due process.

“Core public agencies, such as those responsible for criminal justice, health care, welfare, and education should no longer use ‘black box’ AI and algorithmic systems,” it warns.

“This includes the unreviewed or invalidated use of pre-trained models, AI systems licensed from third party vendors, and algorithmic processes created in-house. The use of such systems by public agencies raises serious due process concerns, and at a minimum they should be available for public auditing, testing, and review, and subject to accountability standards.”

The report says that AI technologies are developing rapidly and are being adopted widely. “While the concept of artificial intelligence has existed for over sixty years, real-world applications have only accelerated in the last decade due to three concurrent developments: better algorithms, increases in networked computing power and the tech industry’s ability to capture and store massive amounts of data.”

It examines the extent to which AI systems are already integrated in everyday technologies like smartphones and personal assistants, where they make predictions and determinations that help personalise experiences and advertise products.

“Beyond the familiar, these systems are also being introduced in critical areas like law, finance, policing and the workplace, where they are increasingly used to predict everything from our taste in music to our likelihood of committing a crime to our fitness for a job or an educational opportunity.

“AI companies promise that the technologies they create can automate the toil of repetitive work, identify subtle behavioural patterns and much more.

“The design and implementation of this next generation of computational tools presents deep normative and ethical challenges for our existing social, economic and political relationships and institutions, and these changes are already underway.

“AI does not exist in a vacuum. We must also ask how broader phenomena like widening inequality, an intensification of concentrated geopolitical power and populist political movements will shape and be shaped by the development.”

Many critics of AI have concentrated on its potential to displace humans from many job functions. The AI Now Institute report considers this, but notes that in the government sphere there are also serious ethical concerns: autonomous AI systems could affect the transparency of legal judgements and of government decisions that rest on subjective analysis of a range of factors, in areas such as immigration.

“Difficult decisions need to be made about how we value fairness and accuracy in risk assessment. It is not merely a technical problem, but one that involves important value judgments about how society should work. Left unchecked, the legal system is thus as susceptible to perpetuating AI-driven harm as any other institution.”

The report is available from the AI Now Institute.
