AI & Inclusion Symposium: Questions rattling my brain as I prep for it

November 7, 2017

This week I have the pleasure of representing TILT at the Network of Centres’ Artificial Intelligence and Inclusion Symposium in Rio de Janeiro. (HELL YES omigodomiggod. Ok, composure regained.)

The event […] will identify, explore, and address the opportunities and challenges of artificial intelligence (AI) as we seek to build a better, more inclusive, and diverse world together. It is co-organized on behalf of the NoC by the Institute for Technology and Society of Rio de Janeiro (ITS Rio) and the Berkman Klein Center for Internet & Society at Harvard University.

Considering the research project I’m working on is specifically about data justice and social justice issues, the timing couldn’t be better. There is not only a focus on AI, but also specific attention to inclusion and to how these issues manifest in the global South.

The program is one of the better ones I’ve seen, not only in terms of topics but also in its attention to first setting a common baseline of understanding (making sure ‘we’re all speaking the same language’) based on pre-meeting surveys, splitting into breakout groups, and building common knowledge, with the specific aim to

‘identify intervention points for collaboration, communication, and synergy. Which ideas, initiatives, and projects in AI & Inclusion should be discussed further, emphasized, or reconceptualized?’

As an ex-English teacher who specialised in the architecture of discussion as a way to facilitate genuine communication, I find this makes me happy.

I mean, some of those leading the discussion in the plenaries are called ‘Firestarters’ (cue the 90s theme song).

They’ve also put together a fantastic reading list on AI and inclusion, accessible to the public here.

In preparation, I’m thinking through some of the questions that I’ll be bringing along.

How can we value social knowledges when AI deepens the dependence on STEM?

As I underlined in my paper on varieties of knowledge for urban planning, the way a society or institution values particular types of knowledge has implications for whether complementary sets of knowledges get taken up and mainstreamed into development planning, or not. In particular, the dominance of STEM-based knowledges (science, technology, engineering, maths) means that very concrete insights from the social sciences, or from non-formalised knowledges, don’t get taken up, despite their potential to open new perspectives towards solutions.


In some cases, such as in my experience in India, this is done to depoliticise planning – which, in sociological circles, may sound like heresy, but in certain contexts it’s a valid way to combat corruption.

This clearly goes beyond the question of AI – if anything, the ‘newness’ of AI may only serve to cement this distinction. Especially in areas where such a valuation is very pronounced, how can we continue to bring non-STEM-based insights into the use of AI for development?

Why do we talk about inclusion and not social justice?

I’m not sure the symposium will be able to answer this, but I am working on it. The program talks about social inequalities and social good, and (perhaps intentionally?) doesn’t mention social justice – even though much of the reaction to algorithmic bias is closely linked to social justice advocates. For instance, the Data Justice Lab at Cardiff University explicitly frames its work as being ‘for social justice’.

It’s on my mind because I’m reading Young’s seminal book Justice and the Politics of Difference, which outlines a foundational theory of social justice. She argues that the vast majority of justice theories sit within a distributive paradigm, meaning they focus on rights as ‘things’ you can ‘have’. The problem is that this obscures the fact that power, and injustice, are relationships, often determined by institutional structures.

While I’m still working through the book, and while I personally agree with much of the social justice movement, if I’m honest I still quibble a bit with the image that gets evoked in my brain when I think of social justice – namely, that of angry Tumblr users. So perhaps it’s a political decision not to mention social justice specifically. However, I think the focus on institutional structures that social justice highlights is absolutely foundational for resolving anything.

How can AI be reliable in areas characterised by data gaps & informality?

An algorithm is only as good as the data set it is based on. What if the data set is incomplete? This is nothing revolutionary – the question is rather whether there are specific applications of AI that can circumvent this problem at all.

I should probably know this, but I don’t, yet.
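
To make the worry concrete, here’s a toy sketch in Python – with entirely made-up numbers, not drawn from any real study – of how a systematic data gap, such as an informal economy that never shows up in administrative records, biases even the simplest statistic:

```python
# Toy illustration: a statistic computed over a data set with a
# systematic gap (here, the informal economy is never recorded).
import random

random.seed(0)

# Hypothetical population: 900 formally employed people (well recorded)
# and 100 informally employed people (absent from the records).
formal = [random.gauss(50_000, 5_000) for _ in range(900)]
informal = [random.gauss(20_000, 5_000) for _ in range(100)]

true_mean = sum(formal + informal) / (len(formal) + len(informal))
observed_mean = sum(formal) / len(formal)  # only the recorded data

print(f"true mean income:     {true_mean:,.0f}")
print(f"observed mean income: {observed_mean:,.0f}")
# The observed mean overshoots the true one, and collecting *more*
# recorded data won't correct it: the gap is systematic, not random.
```

Any model trained on the observed data inherits this bias, which is why simply collecting more data of the same kind is no cure for informality-driven gaps.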

Should we not be starting with the structure rather than the technology?

Gurses and van Hoboken have just come out with a fantastic chapter on ‘Privacy after the Agile Turn’. They explain how the evolution of software development has changed the structure of the internet in terms of its infrastructure, complexity, and modularity, with significant implications for how we think about, and try to tackle, privacy and data protection. They suggest adapting differential privacy approaches to deal with this modular, distributed system of data collection and analysis (which would explain why dynamic consent is gaining popularity).

It’s well worth a read, but the bottom line is this: by focusing on algorithms or data minimisation, we focus on how data as a thing is consumed, while leaving unaddressed the overarching structures of the political economy of the internet and the flows of power created by institutional structures.
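
For the curious: the textbook building block of differential privacy is the Laplace mechanism, which releases a query answer plus noise scaled to the query’s sensitivity divided by a privacy budget ε. A minimal sketch of the standard formulation (my illustration, not code from the chapter):

```python
# A minimal sketch of the Laplace mechanism, the textbook building
# block of differential privacy (standard formulation; not code
# from the Gurses & van Hoboken chapter).
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return true_value plus Laplace noise with scale sensitivity/epsilon."""
    scale = sensitivity / epsilon
    # The difference of two i.i.d. exponentials is a Laplace variate.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_value + noise

# Example: privately release a count query (sensitivity 1) with ε = 0.5.
print(laplace_mechanism(42, sensitivity=1.0, epsilon=0.5))
```

The catch in a modular, distributed pipeline is composition: every additional noisy release spends more of the privacy budget, so the accounting has to happen across components rather than inside any single database.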

NOTE – This is not to discredit the field of AI ethics; this stuff needs to be thought through. I work on data governance more broadly, so it’s kind of normal that I always go back to the institutions. I also literally just read this paper and it’s been swimming around in my head. So it is a question that acts as a lens through which I’ll be engaging with the conference.

Who is best placed to broker information about best practices?

More of an implementation question – information brokerage also came up as a necessity in the workshop I attended on EU health data, and it’s always going to be a key role. What I’m curious about is: where is the specific need for sharing best practices? How could this help direct the development of AI towards more inclusion? Whose voice would realistically have the most impact, and for which audience?

I’m very much looking forward to this conference, and there will likely be a flurry of activity afterwards – stay tuned!
