Society is not a flock of birds: self-learning algorithms, risk governance, and moving beyond fear.

When a self-driving car governed by self-learning algorithms kills a pedestrian in a ridiculously stupid error of judgement, what can we learn for the future of an algorithmic society?

Image: ‘Find some place to hide’ by Thomas Hawk, CC BY-NC 2.0

We certainly need to do more than just focus on how unpredictability can have fatal consequences. Franken-algorithms exist, and the challenges that unpredictability raises for digital agency are real. But beyond grief and the shock factor, we need to move past the fear that blindsides us. Rather, we need a more nuanced conversation which re-examines the tools we have to construct an algorithmic future we actually want to live in.

As a social scientist working on Global Data Justice, I am acutely aware of how critical voices are too often silenced or ignored. I am writing this blog because I am concerned that very smart people are reading the same things I am, yet don’t seem to acknowledge that if we stagnate in fear rather than critique, we can actually play into a discourse which reifies the antagonism to the regulation of innovation in the tech sector.

If we stagnate in fear rather than critique, we can actually play into a discourse which reifies the antagonism to the regulation of innovation in the tech sector.

The increasing use of algorithms in decision-making re-confronts us, as a society, with how we manage and accept risk, and we need to discern what different types of risk mean for different approaches to governance. One of these risks is the unpredictability of self-learning algorithms, which evolve and adapt over time and therefore display emergent behaviours beyond the control of the original programmer. Instead of shivering in fear and denial, why not see what we can learn from the history of applied risk governance?

Big data analytics are premised on finding emergent patterns in enormous data sets, and the proliferation of sensors, instrumented environments and interconnected databases provides the raw material for inductive, data-driven decision-making and the phenomenon of smart urbanism (Kitchin 2015). These shifts mean we are dealing with fundamentally different logics of emergence, and we have to at least engage with a complexity perspective.

The shift to inductive reasoning and data-driven decision-making means we have to shift our perspective on the social consequences of these different logics. In particular, because self-learning algorithms evolve and adapt over time, they display emergent behaviours beyond the control of the original programmer. We have to think through some of the tools and concepts of critical complexity theory.
 
Complexity is fascinating because it represents a fundamental transformation of our perspective, from linear control to emergence. Non-predictability and emergence challenge our understanding of how to interact with the world because we have to let go of the reins a little bit. A broadly-defined complexity perspective decenters the human agent from the center of the universe, repositioning us as one being among others in the ecosystem, and this destabilisation can be profoundly emotionally unsettling (whether one likes to admit it or not). Much like the shift from a Ptolemaic to a Copernican view, when the realisation dawned that the sun rather than the earth is at the center of the solar system, we are forced to confront the reality that we are not in charge of what is about to happen.

The thing is, in the face of the unknown, we tend to be a bit like a deer in the headlights and freeze. We lump everything together into an indecipherable, scary mass of uncertainty and mentally try to run away in the other direction. In a 2018 paper called ‘Wickedness and the anatomy of complexity’, Andersson and Törnberg call this scary mass of uncertainty an intuitive sense of ‘overwhelmingness’. It’s a great word, and I fully support active attempts to integrate it into everyday language.

Importantly, ‘overwhelmingness’ is different from complexity. To be very brief: in the scientific sense of the term, complexity refers specifically to systems in which the individual units are of the same class and express the same function, yet collectively display emergent and unpredictable behaviours. This is different from a complicated system, which has many parts that are all functionally different. A flock of birds is complex; a car engine is complicated.
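To make the distinction concrete, here is a minimal sketch, my own illustration rather than anything from the work cited here, of why a flock of birds counts as complex: every ‘bird’ follows the same three simple local rules, yet the collective pattern that emerges is written nowhere in the code. The neighbourhood radius and weights are arbitrary illustrative choices.

```python
import random

# Each "bird" is ((x, y), (vx, vy)). The radius and weights below are
# arbitrary, illustrative choices.
NEIGHBOUR_RADIUS = 10.0
COHESION, ALIGNMENT, SEPARATION = 0.01, 0.05, 0.1

def step(birds):
    """Advance every bird one tick using the same three local rules."""
    new_birds = []
    for (x, y), (vx, vy) in birds:
        neighbours = [(p, v) for p, v in birds
                      if 0 < ((p[0] - x) ** 2 + (p[1] - y) ** 2) ** 0.5 < NEIGHBOUR_RADIUS]
        if neighbours:
            n = len(neighbours)
            cx = sum(p[0] for p, _ in neighbours) / n    # cohesion: steer towards local centre
            cy = sum(p[1] for p, _ in neighbours) / n
            avx = sum(v[0] for _, v in neighbours) / n   # alignment: match neighbours' heading
            avy = sum(v[1] for _, v in neighbours) / n
            vx += COHESION * (cx - x) + ALIGNMENT * (avx - vx)
            vy += COHESION * (cy - y) + ALIGNMENT * (avy - vy)
            for p, _ in neighbours:                      # separation: avoid crowding
                vx += SEPARATION * (x - p[0]) / NEIGHBOUR_RADIUS
                vy += SEPARATION * (y - p[1]) / NEIGHBOUR_RADIUS
        new_birds.append(((x + vx, y + vy), (vx, vy)))
    return new_birds

# Fifty identical birds with random starting states: any flocking pattern
# emerges from their interactions, not from a central controller.
flock = [((random.uniform(0, 100), random.uniform(0, 100)),
          (random.uniform(-1, 1), random.uniform(-1, 1))) for _ in range(50)]
for _ in range(100):
    flock = step(flock)
```

Nothing in this code says ‘form a flock’; whatever structure appears is emergent, and that is what the strict sense of complexity refers to.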
 
 Society is not completely like a flock of birds. Some social phenomena are more amenable to complexity thinking and big data analytics than others; it is no accident that most data-driven smart city applications begin with the urban problems of traffic and energy networks. In other areas, it is less easy to avoid the fact that socio-historical conditions and institutional arrangements are differentiated and enduring. Social functions are not complex phenomena in the strict sense of the word; as Andersson and Törnberg argue, societies are more often a combination of complicated and complex, creating their own flavours of ‘wicked’ problems.

Institutional arrangements are differentiated and enduring. Social functions are not complex phenomena in the strict sense.

This means that when we investigate the challenges of applied self-learning algorithms and discover social complexity theory, we are not completely reinventing the wheel. Yes, complexity matters. And, quite frankly, it’s cool. But rather than freezing in the face of overwhelmingness, we need to acknowledge that social ‘systems’ are not flocks of birds à la Hitchcock. Instead, we need to rethink how we understand the governance of risks and how these are socially constructed.

A lot of thinking has already been done about structural transformations in the face of systemic risk, particularly in the field of environmental governance, and we can learn from it. The problems we face are new, but just like the data patterns, they emerge from sociopolitical and historical conditions.
 
Self-driving cars are a good starting point. They are emblematic of our changing modernity: at once the evolving symbol of having ‘made it’, the threat of vaguely-defined robots taking over the world, and the ethics of letting algorithms take decisions. As symbols, they are useful anchors to start a conversation around algorithmic futures. However, the most common examples of an algorithmic future, car crashes, modern warfare and aviation, are all drawn from life-threatening sectors, where risk tolerance drops sharply and moral responsibility is undeniable. These are the warnings, the extremes, the measuring sticks.
 
In the case of Elaine Herzberg, the consequences of how we deal with unpredictability were indeed fatal. The self-driving car crashed into Herzberg not because it didn’t recognize her, but because the decision-making encoded in the algorithm was tuned too far toward avoiding false positives, that is, toward not braking for obstacles that turn out not to be there.

While there is an important conversation to be had about algorithmic decision-making, I focus here on another aspect which doesn’t get as much attention: the delicate balance between risk and the unstoppable progress of innovation. The self-learning algorithm was calibrated against a particular understanding of the risks of getting it wrong: don’t make too many errors, or you stop the relentless charge of progress and innovation. But who gets to decide what tradeoffs are acceptable? What should we be optimising for? How do we make choices about innovation in an algorithmic society? In other words, the challenge is governance.
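As a rough illustration of what that calibration means, consider a toy sketch, my own and emphatically not Uber’s actual system or code, in which a detector emits a confidence score and a single threshold decides whether the car brakes. Raising the threshold suppresses false positives (phantom braking) at the cost of more false negatives (missed obstacles); all scores and thresholds below are invented for illustration.

```python
# Toy sketch of a braking threshold; names and numbers are hypothetical.

def brake_decision(confidence: float, threshold: float) -> bool:
    """Brake only if the detector is at least `threshold` confident the obstacle is real."""
    return confidence >= threshold

# (confidence score, whether an obstacle was actually there)
detections = [
    (0.95, True),   # clearly a pedestrian
    (0.60, True),   # a pedestrian the detector is unsure about
    (0.55, False),  # a plastic bag blowing across the road
    (0.30, False),  # sensor noise
]

for threshold in (0.4, 0.7):
    phantom_brakes = sum(1 for conf, real in detections
                         if brake_decision(conf, threshold) and not real)
    missed_obstacles = sum(1 for conf, real in detections
                           if not brake_decision(conf, threshold) and real)
    print(f"threshold={threshold}: {phantom_brakes} phantom brake(s), "
          f"{missed_obstacles} missed obstacle(s)")
```

The point is not the code but the fact that someone has to choose the threshold, and that choice encodes a judgement about which kind of error is acceptable. That is exactly the governance question.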

How do we make choices about innovation in an algorithmic society? In other words, the challenge is governance.

Certain discourses would have you believe that innovation is only about being bold and daring, and that risk-aversion suffocates innovation. In this simplistic framing, fear is associated with stasis, regulation is cast as an attack on freedom, and responsibility for outcomes is dismissed as impossible in a complex world.
 
In a 2018 report called ‘Clearly Opaque: Privacy Risks in the Internet of Things’, the IoT Privacy Forum unpacks this false duality which frames the precautionary principle as anathema to any sort of innovation. Drawing on environmental governance theory, the report shows that the only time this is actually true, that Thou Shalt Not Innovate Unless Thou Can Prove It Won’t Hurt Anybody, is in sectors where there is a serious risk of loss of life and harm, such as health and transport, i.e. cars.

Otherwise, risk governance, like most things, is a spectrum. There are different gradations of risk, different gradations of governance responses. Rather than throwing the baby out with the bathwater, the trick is nuanced understanding in order to find appropriate responses. This is the role of experienced regulators.

As a result, we cannot use the symbol of self-driving cars and their regulatory challenges as emblematic of all questions around risk and the governance of algorithms. Even the phrase ‘governance of algorithms’ is an oversimplification, much like reducing everything to ‘technology’. If decision-makers see only fear and overwhelmingness, we don’t have a hope in hell of appropriate regulation. When we are faced with unpredictability in algorithmic futures, we do not need to collectively lose our minds.

When we are faced with unpredictability in algorithmic futures, we do not need to collectively lose our minds.

If data is the new oil (in the minds of many, anyway), we need to learn from the experience of environmental governance theory as we create algorithmic futures. Environmental governance, risk management, critical complexity and the new challenges of the digital economy intersect in ways that need rethinking. We need critical voices, we need critique, and we need fearless, constructive conversations which learn from the work that has already been done. This is a path forward.