The real revolution in generative AI is actually going to be incredibly mundane. In its insidious simplicity, it will have significant impacts on the public sector.
*
The other day I met my neighbor. We had a slightly longer elevator ride than usual, and after three years of living in the same building, I finally asked him what he does. He told me, and when he politely asked me in return, I told him I work on AI policy and governance. His response went something like this:
‘Oh, that’s really interesting, really topical and sexy right now. [pause] You know, I used ChatGPT today.’
‘Oh yeah? What for?’
With a conspiratorial wink, he told me, ‘Well, you know, I was kind of tired, so at the end of the day, around 3 o’clock, I used it to write my emails to my boss.’
‘I know exactly what you mean, man, I get tired around 3 o’clock as well.’
This is really where we’re going to see the impacts of generative AI.
Generative AI is insidiously going to weave itself into the way we go about our daily processes, in the same way that auto-correct has been part of our phones and emails for years. This is happening now, not in the future.
*
Over the last year, Hollywood has been reverberating with the Writers Guild of America strike. It is an ongoing labour dispute, and AI has formed part of the WGA’s demands in the last couple of months. The writers are not pushing back against generative AI per se. They do not think that generative AI is going to replace them, at least not for now.
Rather, their concern is that producers, who think they know what they’re doing, will hand them a script written by generative AI and say ‘just fix it’, relegating the job of screenwriters to cleaning up the mess.
This dynamic is made possible by existing structural patterns that separate the work of writing from production – so it’s not just the technology, but the organization of our social contexts, which the technology accelerates.
This is where the social impacts are going to be really visible.
*
How does this impact the public sector?
This research paper surveyed the perceptions of public sector CIOs in Spain and Mexico, asking them how they saw the impact of AI. It was nice, clean, simple survey research. The graph below shows where respondents expected the greatest impact of AI in the public sector.
What strikes me is that the third row, ‘public service delivery’, is the category that most of the AI applications for cities that we hear about fall into. At about 50%, that’s not bad.
But there is a lot more on this list – processing of transactions, management organisation, clerical and assistant tasks – and it is rather boring stuff. This is bureaucracy.
This matters, because bureaucracy creates risks that impact people’s lives today.
*
High-risk AI in public sector bureaucracy
Recent research by Applied AI looked at the new definitions of ‘high-risk AI sectors’ under the EU AI Act. A lot of these – human resources, accounting and finance, IT and security – have strong overlaps with the public sector.
Under the EU AI Act, the implication of being classified as high risk is that in order to deploy any of these applications, you need to do a fundamental rights impact assessment (FRIA). While trilogue negotiations are still ongoing, there are potentially going to be a lot of FRIAs within the public sector. The Netherlands has been working on a government-level version of a FRIA for algorithms.
In the Netherlands, the impact of algorithmic harms in bureaucracy is really prominent. The poster child for current harms is the Dutch childcare benefits scandal. In short, the Dutch tax authority was using algorithmic decision making to detect fraud, and what transpired is that it falsely accused thousands of families of low socioeconomic background of fraud and demanded they pay the money back, all at once. Thousands and thousands of euros. It turned out this amounted to institutional racism. The cabinet was forced to resign in the backlash.
Sasha Costanza-Chock has written about x-harms versus a-harms. Their argument is that too much focus on x-harms, on existential risks and future harms, actually distracts the conversation from the actual harms happening right now, the a-harms.
If you are wondering what those actual harms are, people have done wonderful work creating databases and repositories of actually existing harms perpetuated by algorithmic systems or algorithm-supported decision systems, such as the AIAAIC repository.
*
AI governance strategies
AI will shift our processes and the way that we do things. This sounds boring, but it’s true.
This is why I’m interested in governance: governance processes are where we can identify the values we want to operationalise. Governance is where we look at how we do things in order to support the ‘why’.
To respond to how AI will shift our processes, I suggest three specific ways forward.
1. Conduct engaged social impact research
This is new, it’s happening now, and we don’t fully understand what transitions are occurring.
Understanding how algorithmic-assisted decision making shifts socio-technical systems requires social, qualitative, engaged research.
2. Create organizational learning
Last year, I researched accountability and transparency initiatives for urban artificial intelligence, specifically in city administrations. The purpose was to gather lessons learned and make recommendations.
What came out of this very strongly was the importance of an organizational culture of learning, with strong feedback mechanisms that provided space to adapt over time.
Building the capacity to respond was in part about building the capacity to monitor frequently. One suggestion that came out of Helsinki was that an easy way to do this is to connect monitoring moments to times when executives are already sitting around the table, like quarterly financial reviews. This is an example of organizational habits, of organizational adaptation through governance processes.
3. Evaluate digital governance for public values
In practice, connecting abstract ethical principles to work happening on the ground is only going to happen through projects, through what cities are implementing.
Governance clinics were interventions lasting a couple of days, working with cities and looking at digital transformation and AI projects. We asked questions like: What are we optimising for? Who are we optimising for? Are those actually the values that we care about?
For example, working with the city of Amsterdam, which is a very value-driven city, we realised that they had so much desire to do good, ethical work, but when we looked at the actual processes they had in place, all of the pressure fell on their data protection officer. There were few other governance processes to safeguard the public interest, and no way to define what the public interest was in the first place.
Working with cities through these kinds of interventions in existing projects is one way to get from the abstract to the practical.
*
These strategies are not easy. They require time, budget, and space, which means you need leadership on board. We’re asking a lot of cities, but there is so much potential for public sector leaders to create the enabling conditions that guide urban development towards a digital society we actually want.
For concrete recommendations for an Urban AI strategy, have a look at the AI and Cities report I worked on with UN-Habitat and Mila.
This post is an abridged version of a talk I gave at CIDOB, the Barcelona Centre for International Affairs, in June 2023, as part of a panel on the dark side of AI. You can find the slides for that presentation here, though my slides tend to be rather sparse in text.