Councils get smarter with data

Councils around the world are grappling with the ethical issues raised by AI as they build smart cities. And they are learning that protecting privacy is only the start.

Tom Symons, principal researcher at UK-based innovation foundation Nesta, says ethical considerations aren't a huge issue yet, but they will move ahead of the "frontier edge" of AI and machine learning as the focus expands from how data is collected and handled to how it is actually used.

“Is it reasonable and fair, is there transparency, does it reinforce existing biases?” Symons says. “Is the data being used to do something unethical?”

Nesta and several local authorities are engaged in Decode, a multi-city global experiment looking at digital-service implementations that let citizens keep control of their own data. Leading the charge are cities like Barcelona in Spain and Amsterdam in the Netherlands.

Empowering citizens

Francesca Bria, chief technology officer at Barcelona City Council and Decode lead, says digital tools such as smart contracts, data-commons licences, encryption and attribute-based credentials are letting citizens share data on their own terms, dynamically setting specific entitlements to their own personal data on an ongoing basis.

“These are then the citizen-set rules to be enforced when data consumers access the data,” says Ms Bria. “We are going from the previous more dismissive approach to empowering citizens and understanding the value of their data.”
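
By way of illustration, a minimal sketch of what citizen-set sharing rules might look like in code. The class names, attribute labels and default-deny behaviour here are hypothetical, not the actual Decode or Decidim APIs:

```python
# Hypothetical sketch of citizen-set data-sharing rules in the spirit of
# Decode. Class names, fields and the default-deny rule are illustrative.
from dataclasses import dataclass, field

@dataclass
class Entitlement:
    attribute: str               # e.g. "air_quality", "noise_level"
    allowed_purposes: set        # e.g. {"public-research"}
    anonymised_only: bool = True # share only in anonymised form by default

@dataclass
class CitizenPolicy:
    entitlements: dict = field(default_factory=dict)

    def grant(self, attribute, purposes, anonymised_only=True):
        self.entitlements[attribute] = Entitlement(
            attribute, set(purposes), anonymised_only)

    def permits(self, attribute, purpose, anonymised):
        ent = self.entitlements.get(attribute)
        if ent is None:
            return False  # default-deny: nothing is shared unless granted
        if purpose not in ent.allowed_purposes:
            return False
        return anonymised or not ent.anonymised_only

# A citizen shares pollution readings for public research, anonymised only.
policy = CitizenPolicy()
policy.grant("air_quality", {"public-research"})
print(policy.permits("air_quality", "public-research", anonymised=True))  # True
print(policy.permits("air_quality", "advertising", anonymised=True))      # False
```

The point of the default-deny check is that the citizen's rules, not the data consumer's request, decide what is released.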

In the Spanish city, an e-participation platform, Decidim, lets citizens run surveys, propose ideas, and participate in budget processes and consultations. Some of the 30,000 active users were worried that Decidim would expose their political beliefs; as a result, authenticated users can remain anonymous in debates and when signing petitions.
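
One common technique for letting verified users stay anonymous is to give each account a stable pseudonym derived with a keyed hash, scoped to a single consultation so activity can't be linked across debates. This is a generic sketch of that idea, not Decidim's actual implementation; the function names and secret handling are illustrative only:

```python
# Generic sketch: a verified account gets a stable pseudonym per consultation,
# so the platform can block duplicate signatures without exposing identities.
# Not Decidim's actual implementation; SERVER_SECRET handling is illustrative.
import hashlib
import hmac

SERVER_SECRET = b"rotate-regularly-and-store-outside-the-database"

def pseudonym(user_id: str, consultation_id: str) -> str:
    # Scoping by consultation means pseudonyms can't be linked across debates.
    message = f"{user_id}:{consultation_id}".encode()
    return hmac.new(SERVER_SECRET, message, hashlib.sha256).hexdigest()[:16]

print(pseudonym("citizen-42", "budget-2018"))     # stable within one debate
print(pseudonym("citizen-42", "transport-2018"))  # unlinkable elsewhere
```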

Barcelona is also looking at enabling citizens to submit local environment data, such as information on pollution.

Identifying hot topics

In 2015, the UK's Bristol City Council teamed up with Knowle West Media Centre and Barcelona think tank Ideas for Change on a participatory "sensing" pilot to identify and address the issues that mattered most to citizens.

Damp homes were a big issue for many, so sensors measuring temperature, humidity and dew point were given to residents to install at home, and citizens diarised their showering, cooking and clothes-washing. This brought the community together to understand damp and how to address it.
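
Those three readings are connected: dew point can be derived from temperature and relative humidity, and any surface colder than the dew point will attract condensation. A minimal sketch using the standard Magnus approximation (the coefficients are textbook values, not figures from the Bristol pilot):

```python
# Dew point from the two readings the sensors already provide, using the
# Magnus approximation (textbook coefficients, not values from the pilot).
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    a, b = 17.62, 243.12  # Magnus coefficients, valid roughly 0-60 deg C
    gamma = math.log(rel_humidity_pct / 100.0) + (a * temp_c) / (b + temp_c)
    return (b * gamma) / (a - gamma)

# A bedroom at 18 C and 85% humidity has a dew point around 15.4 C, so any
# wall surface colder than that will attract condensation and damp.
print(round(dew_point_c(18.0, 85.0), 1))
```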

Australian examples include Transport for NSW's Opal dataset project. Differential privacy algorithms ensure the dataset, collected from customers tapping on and off over two separate week-long periods in 2016, remains private but is still useful for exploring personalised service delivery, according to a statement.

“An example is using tap-on and tap-off Opal data to tell customers in real-time how full a bus is so they can make the right transit decision.”
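
The differential-privacy idea behind such a release can be sketched in a few lines: publish counts with calibrated Laplace noise rather than raw tap records, so any individual trip remains deniable. The epsilon and sensitivity values below are illustrative, not Transport for NSW's actual parameters:

```python
# Illustrative sketch of the differential-privacy idea: publish counts with
# calibrated Laplace noise instead of raw tap records. The epsilon and
# sensitivity values are examples, not Transport for NSW's parameters.
import math
import random

def noisy_count(true_count: int, epsilon: float = 0.5, sensitivity: int = 1) -> int:
    """One rider more or less shifts the true count by at most `sensitivity`,
    so Laplace(sensitivity/epsilon) noise keeps any single trip deniable."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5                      # uniform on [-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return max(0, round(true_count + noise))

# 37 passengers tapped on: the published figure is close enough for a
# "how full is my bus" feature, but hides whether any one person boarded.
print(noisy_count(37))
```

Smaller epsilon values mean noisier counts and stronger privacy; a real deployment tunes that trade-off across the whole dataset, not one query at a time.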

Avoiding regulatory lag

Nesta’s Symons says cities need a system flexible and dynamic enough to adapt to new technologies as they emerge, rather than having constant “regulatory lag”. And this need for agility suggests that the classic public-sector accoutrements of detailed frameworks and processes may not be the best approach.

Instead, decisions can be assessed against overarching principles like universality, privacy, fairness, accessibility and non-discrimination. Is there openness? Is anyone being harmed? Then you need a level of education, literacy and trust that facilitates a "sensible and informed view" case by case, he says.

“The potential level of complexity when you get into advanced AI, where all this is heading, is so huge that it’s very difficult to come up with a framework or code of AI ethics that people can actually follow, and that doesn’t pose more questions than it answers,” Symons explains.

Also crucial to the "smarter city" dream, Symons emphasises, is building awareness and engagement in the local population ahead of time; that gives local bodies the best chance of developing trust and staying ethical.

Machines that learn to be ethical 

A complication is that big data applications tend to reinforce entrenched biases through their programming and through the mathematical algorithms that give them their economies of scale. Computers aren't capable of ethical judgement on their own.

But what if they could be in future? AI projects such as EthicsNet in the UK and Belgium have begun experimenting with different ways of creating the massive datasets that machine-based systems use for learning. Their aims include the creation of “socially aware thinking machines” that can respond to individual preferences with appropriate, pro-social behaviour.

If that research bears fruit, the results will no doubt be far-reaching, in the public sector and elsewhere.

*Fleur Doidge reports for Government News from London
