Artificial Intelligence and government regulation

We are moving rapidly towards a world where robots and artificial intelligence (AI) systems are connected to and influenced by social media, the Internet of Things (IoT) and big data.

Technological developments are moving fast, and AI has many governments concerned. How do rule-makers set legislation for AI while still allowing the safe evolution of the technology?

Who thinks about and enforces these guidelines, and what work is being done, or should be done, with governments to craft AI policy?

Moves by the European Parliament to consider granting some form of legal status to AI have revived questions of liability and responsibility.

For example, who is liable when an intelligent system makes a mistake, causes an accident or damage, or becomes corrupted? The manufacturer, the developer, the person controlling it, the robot itself? Or is it simply a matter of allocating appropriate risk, liability and responsibility?

As autonomy and self-learning capabilities increase, robots and intelligent systems will feel less like machines and tools. New laws will certainly be required to handle AI as it develops.

While the European Union is leading the way on these issues, Australia is watching closely and will need to make its own decisions in the foreseeable future.

So far, attention has been focused mainly on autonomous cars and drones. But the rapid adoption of AI into diverse areas of our lives, from business, education, healthcare and communication through to infrastructure, logistics, defence, entertainment and agriculture, means that any laws involving liability will need to consider a broad range of contexts and possibilities.

Clear boundaries and filters will be needed, and governments will need to legislate on what technology can legally be developed.

Weaponising AI, for example, should be against the law, but any such prohibition needs to be approached with common sense and an informed view of the industry as a whole and of how the technology is developing.

Additionally, we will need to establish specific protection for potential victims of AI-related incidents to give people confidence that they will have legal recourse if something goes wrong.

As software replaces human effort, governments need to address job losses and prepare for the longer-term impacts of such innovation.

AI can significantly reduce the burden on the public sector by automating basic customer enquiries through bots. But legislation that merely repairs the damage after jobs are lost not only hampers the innovation agenda, it also comes too late for those already out of work.

The UK government has created a select committee on AI, designed to gather knowledge and advice on how best to address some of the challenges that AI presents to both society and the economy. But more needs to be done.

To prepare for the shift in how we work, the Australian Government needs to partner with major employers and organisations responsible for creating and implementing bots in marketplaces around the world.

Government bodies need to work alongside private organisations and forums in order to fully comprehend and prepare for the coming changes. Training and education programs must be promoted to develop skills geared towards industries that will still exist in the longer term.

We also need to create software platforms that rely upon human input and labour in areas where AI is less applicable.

Introducing a robust regulatory framework with relevant input from industry, policymakers and government would create greater incentive for AI developers and manufacturers to build in safeguards and minimise the potential risks.

Such regulation would pave the way for an evolutionary adoption of AI. This new paradigm will ultimately require a significant rethink and restructure of some of our long-established legal principles.

*Andrew Cannington is Research Vice President APAC for LivePerson


