There is great potential for the use of artificial intelligence (AI) by cities. With the help of AI, we can provide more responsive services to citizens. However, AI poses ethical issues that need special attention; here we explain why, and suggest an approach for city governments.
The use of automation and machine learning systems is not a new phenomenon in public administration, but how they are used is being transformed – from automating simple transactions to solving more complex problems.
Today we see adoption across a range of services – chatbots in customer services, prioritisation of housing repairs, traffic signalling, demand-responsive transport, even library book management systems. City governments are experimenting with drones, autonomous vehicles and facial recognition.
The rise of machine learning and AI raises a series of issues for city governments to consider:
- First, the application of these technologies is becoming more widespread across city functions. As data is harnessed from an increasing number of sources (including the Internet of Things, enabled by 5G) and computing power becomes more affordable, these use cases will inevitably increase.
- Second, advanced AI is able to process data from multiple sources to ‘learn’ and propose solutions to more complex problems.
- Third, cities are the places where much AI will be developed, tried and tested – they will be at the forefront of the debate about how AI is deployed, why, and to whom it is accountable.
The ability to understand the process that led to an AI system’s output becomes a new imperative for city government. Cities need to develop new expertise and frameworks to guide decision-making, or to update existing rules and ways of working, to ensure public accountability when these technologies are adopted.
How should cities approach AI?
The successful application of AI in cities relies on the confidence of the citizens it serves.
Decisions made by city governments will often be weightier than those in the consumer sphere, and the consequences of those decisions will often have a deep impact on citizens’ lives.
Fundamentally, cities operate under a democratic mandate, so the use of technology in public services should operate under the same principles of accountability, transparency and citizens’ rights and safety – just as in other work we do.
- The use of AI has attracted criticism as a ‘black box’ technology, hidden from public scrutiny. How do we provide the appropriate public accountability with new technology?
- AI intensifies questions about citizen privacy. Cities need to be able to provide assurance that AI will not identify people who wish to remain anonymous, or infer or generate sensitive information about people from non-sensitive data.
- There’s a further risk that bias in data leads to vulnerable groups in society being harmed, or having their rights impinged, by biased AI.
- Even potentially equitable proposals arising from AI can run into difficulty if citizens are not brought in right at the start. How can we design AI with citizens in a civic setting?
Are high-level principles enough?
Today’s AI industry has taken steps to bring greater assurance through high-level principles and ethics committees, illustrated in Nesta’s mapping of AI governance.
Because of the different uses of AI, and the contexts in which it is used at city level, it is hard to see such high-level principles covering every scenario with sufficient robustness or granularity to withstand the scrutiny that public services demand. Principles can also be agreed, but then fail to be acted upon in the field. At the same time, we consider that approaching AI on a case-by-case basis risks an over-cumbersome regime that would lack consistency.
The framework suggested by Nesta’s Eddie Copeland – a shift in thinking from establishing new principles to ‘showing your workings’ – suggests a new way of working aligned with other areas of city work (e.g. privacy or equality impact assessments) that are already part of the decision-making process. Copeland suggests that public sector organisations and their staff should be equipped with a set of questions that they should be able to answer before deploying an AI system in a live environment (see figure below).
Figure. 10 Questions to answer before using AI in the public sector
The advantage of this practical approach is that it helps to embed AI deployment within everyday decision-making through a series of questions we can pose right now.
This is a practical entry point for city government, allowing AI deployment while creating an ethical framework for public servants and a line of sight for city leaders.
Over the coming months we will be exploring how to develop this framework in further detail.
First, as cities we need greater awareness of where AI is being used, particularly by third parties, and where it is proposed. Currently, a product or service using AI can be commissioned or bought from a third party without it being specifically badged as ‘AI’. As cities develop technology mapping, deployment of AI should also be included. Framework agreements used in procurement processes should include ethical standards for the use of AI.
Second, as public services become increasingly data-driven, we will need greater public understanding of artificial intelligence both in society and among those who make decisions in city government. In April 2018, London government started a big conversation with citizens and their local elected representatives on the use of sensors and data in developing the Smarter London Together Roadmap. In Finland, the University of Helsinki and Reaktor set an open artificial intelligence challenge: to educate one per cent of Finns (about 54,000 people) not only in what artificial intelligence is, but also in the opportunities it brings. To meet this challenge, the open and free Elements of AI online course was created. To date, over 170,000 people from 110 countries have enrolled, making Elements of AI Finland’s most popular online course. Public engagement is an increasingly vital part of our work.
Finally, behind the AI questions set out above sit city policies and procedures designed to safeguard against bias and unfairness. Public agencies already have strong rules around direct or indirect discrimination and raising concerns at work. According to Doteveryone’s research, tech workers rely most on their personal moral compass, conversations with colleagues and internet searches to assess the potential consequences of their work. Enabling staff to express concerns more clearly about bias or other impacts in the deployment of new technologies – through updated whistleblowing procedures or risk registers – also needs to be considered.
As part of the Smarter London Together Roadmap, London is committed to exploring AI ethics in further detail. We will use these discussions as the starting point for work with London’s boroughs and main public agencies to embed AI assurance questions in what we do.
Helsinki’s vision is to be the world’s most functional city, making the best use of digitalisation. A functional city is based on trust and on open, inclusive ways of operating. Helsinki’s digital strategy has a strong focus on building data and AI capabilities. Without trust, there is no basis for using AI. Thus, clarifying data and AI ethics is one of the key initiatives in our plan.
As the Chief Digital Officers of London and Helsinki, we will continue working together on data and AI ethics, piloting different approaches and identifying policies that need to be updated.
Theo Blackwell, Chief Digital Officer for London
Mikko Rusama, Chief Digital Officer at City of Helsinki
12 June, 2019
The cities of London and Helsinki have agreed to collaborate on the ethical use of data and artificial intelligence as part of the City-to-City Digital Partnership signed in February 2019. Theo Blackwell, Chief Digital Officer for London, and Mikko Rusama, Chief Digital Officer at the City of Helsinki, highlight the importance of this ethical discussion. Data and AI ethics is one of the key areas of digital leadership for smart cities.
The Data and AI Ethics workshop was organised in Helsinki, Finland on 25 April 2019. Ethical questions of AI in cities were discussed:
- How do we provide democratic accountability as the use of AI tools becomes more widespread?
- Do we need new rules and principles to govern the use of artificial intelligence in cities?
- How do we ensure AI meets citizens’ needs?
- Do citizens trust us to use their data and AI to serve them better?
The workshop was hosted by Mikko Rusama, Chief Digital Officer at the City of Helsinki, as part of the London and Helsinki City-to-City Digital Partnership. Experts from London, Amsterdam, Utrecht and Tallinn, as well as from different parts of Finland, attended at the invitation of Mr. Rusama. See Eddie Copeland’s blog about the event.