Responsible Urban AI

The phrase “responsible AI governance” is vague, and as a result it is often interpreted in one of two ways: either it’s about regulation, or it’s about risk management.

At the urban scale, regulation is often beyond municipal jurisdiction; it tends to be set regionally, nationally or internationally. What remains is risk management.

As cities grapple with understanding the implications of AI technologies for their local contexts, risk management for urban AI is a hot topic.

The challenge is that many AI risk management frameworks are most useful only once you already know you’re using a particular AI technology.

A project first has to decide whether to use AI at all, and what kind of AI to use. Only then can frameworks help explore how to do so well and responsibly, and how best to respond to user needs.

This risk management approach works well in an innovation context, where the use of AI is a given: the added value of the project is the AI technology itself. When the reason for using AI is clear, the project can move straight into risk management processes.

On the other hand, in a problem-focused context, the type of solution is more open-ended. In this case, there is more to be done first.

For example, consider public sector administrations or municipalities working through their digital transformation. In this context, there is a whole messy period of figuring out what the problem is, whether a technological implementation may help, and whether any type of AI is the right instrument at all.

Before a successful AI risk management process can begin, there is an extensive problem exploration and discovery phase.

For urban AI projects, the exploration and discovery phase includes problem definition but also technical investigations around data availability, quality and annotation.

In one successful urban AI project I know of, the exploration and discovery phase took up to six months. That’s 25–40% of the project timeline.

Risk management frameworks and tools built on the AI lifecycle, like those in the AI and cities report, are extremely helpful. I find them especially valuable as tools for enabling interdisciplinary conversations. What I argue is that urban AI risk management needs to be placed in context.

Context is the starting point of responsible urban AI.

Considerably more time needs to be spent engaging with the lived experience of residents and city administrators. This exploration focuses on the problem first, and then determines how technologies may support those aims.

Excellent examples include UN-Habitat’s people-centred smart cities work, Open North’s work on data governance for cities, and the data governance clinics I co-created.
