Democracy and Artificial Intelligence: old problems, new solutions?
Nardine Alnemr
I see three big perspectives relevant to AI and democracy. You have the most conservative, mirroring the 80s and 90s, still talking about the digital public sphere as if it’s distant from our lives, as if it’s something novel and inaccessible, which is not quite accurate anymore.
Then there’s the cautiously optimistic part of the spectrum: people who are excited about the technologies but not quite sure. They’re intrigued by the potential, and I think they’re optimistic because they overlook how these technologies connect to a broader context: how many of them are driven by the surveying and surveillance of the data and communication we produce, by the exploitation of the workers who do the filtering and cleaning work, and by the companies that profit from this and run engineered election campaigns. So they’re cautious because of those harms, but still optimistic, because they try to isolate the technology from that bigger context.
And finally, the most radical is something like Cesar Hidalgo’s proposal of augmented democracy.
Rob Weymouth
He proposes that more direct democracy would address issues with established electoral democracies, such as low voter turnout. AI agents could reduce the cognitive load of making legislative decisions across many complex areas: they would learn voters’ preferences and then generalise these into direct votes on political and legislative issues.
Nardine
What’s interesting about this is that the idea of the agent – a digital twin – is not that radical. We know what our Spotify or YouTube twin does. The idea is very familiar, but it’s quite radical to propose that we completely outsource the active thinking, listening, and internal and intersubjective deliberation that goes into making a decision, and delegate it entirely to something that is tracking us.
We have seen three robot politicians competing in elections at different levels in three different countries. The first was in Aotearoa New Zealand, then Japan, and the most recent was in Lebanon. They all revolved around the same idea: this huge contrast between what super-intelligent machines are capable of and what, let’s say, cognitively inferior politicians and populations can do. Everything we say and do online, the frustrations we express, gets tracked and crunched into metrics and analyses, and then the system decides what the most desirable policy pathway or direction is likely to be.
The biggest problem with that is that the people developing or promoting this have never really done grassroots activism or advocacy. They haven’t worked on democracy with very few resources; they haven’t done that groundwork, that very difficult work that takes a long time. And they are now in a position to influence which democratic practices are going to be good for us, which is quite dangerous in itself, and that’s why I say it’s radical. It’s not the proposal itself; it’s the isolation of the people with that power from the people affected by those decisions.
Rob
Do you think that the problem you are describing is unique to deliberative democracy and AI?
Nardine
I would rather say it’s not unique. It’s a common representation in the literature that AI, the digital, the cyber, and so on, is unique, that it’s exceptional, and I’m not quite sure about that. I don’t think these adjectives are useful, because the problems we have are persistent. They’re not new problems as such. The tool we’re dealing with may be relatively new, different from what we had before, but questions of power, access to power, being listened to – these are not unique or new problems. The only difference here is the claim that “this technology is developing too fast for anyone else to catch up” – the pacing problem. And that justification deliberately fosters the isolation I was talking about before.
We do hear about big tech companies opening up their governance processes using deliberation, but how far does that really go? A global corporation genuinely opening up in this way would be rather exceptional. If we put things in context, it’s not just about the technology; it’s also about the structures of concentration of capital, power and influence. You’ll see that these are the same problems that have occurred before. Instead of tobacco, instead of oil, it’s AI and tech.
Rob
There’s a general techno-optimist approach I have noticed that could be paraphrased as, “Yeah, I don’t know much about this area, but I’ve got this cool technology which I believe is going to fix all the problems.” It’s partly a stance towards the world that continues, “I don’t need to understand all those other complicating factors. I don’t need to talk about all those potential negative externalities. I’m just going to propose all these solutions.”
So, springing off this starting point, do I believe AI could play any positive role in deliberative democracy? I think it could make gathering accurate, existing opinions from a population more efficient. I believe it could raise the quality of deliberations. But the word ‘could’ is doing a lot of work in those sentences.
As humans, we have a genuine problem efficiently and accurately integrating all the videos we watch, the papers we read, the conversations we have. It’s a cognitive load that I believe can compromise our ability to do deliberative democracy at scale, and this is probably where AI seems most promising: assisting us towards a form of democracy where everyone’s opinions are better represented and, more importantly, where their considered will is better represented. What do you think? Is efficiency currently stopping us from achieving this better democracy?
Nardine
I used to work with someone, and one time when I wasn’t on top of my game they said, “What happened? You used to be like a computer!” And I thought, “I’m not sure if that’s an insult or a compliment!” But that’s the thing: is it a democratic ill that deliberation is imperfect? Is perfection the aspiration here? Do we create tools to correct us all the time? Who has the authority to correct? Is this something we should aspire to?
For me, democracy is a lifelong, eternal work in progress. I don’t think we’ll ever get to a golden age or the final stages of democracy. It’s ridiculous, the idea that we are going to perfect it in this way. It’s a work in progress.
Rob
Do you think that’s because it’s not epistemically possible for humans to conduct perfect deliberation, or is it something more fundamental? It seems to me that efficiency is far from the only problem with democracy.
Nardine
If you believe that democracy is just about epistemic quality, then that argument would hold – and that is the view a lot of people who support using algorithmic tools and AI endorse. But if you believe that democracy is more about morality, ethics and public reason, then it wouldn’t necessarily hold.
That brings me to a related point: is it even possible to have everyone represented using AI? I don’t think so. From my understanding, what these machine learning models do is make statistical inferences using statistical modelling. So anything on the margins is not going to be captured; anything that’s an outlier is going to be treated as too extreme, and only what sits in the middle is going to be represented.
Whereas in an authentic deliberative forum, one that creates the right conditions, you try to bring in those outliers and uplift them. With statistical modelling, they’re not going to be there. So again, the question is: what do you want this democracy to do? Do you want it to be accurate, informative, robust and rigorous in terms of epistemic quality? Or do you want it to actually liberate people? I don’t know if anyone is interested in doing that, because most of these models are not built in good faith. They are usually tied to monetary value. So it comes back to the concentration of capital, power and influence.
These are not new problems. Governments outsourced welfare to companies. There was a neoliberal agenda to cut those expenses and outsource them, on the argument that companies and the private sector can do it better. And it feels like the same argument now: AI can do it better. It’s the same problem; it just takes a different shape.
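(A minimal, purely illustrative sketch of the statistical-aggregation point above, using hypothetical numbers and Python; it is not a model either speaker built or endorsed. The point: a system that represents a population by its central tendency reports the middle of the distribution and loses the marginal positions entirely.)

```python
# Purely illustrative, hypothetical numbers: how a centre-weighted statistical
# summary of preferences can erase marginal positions.
import statistics

# 20 hypothetical voters rate a policy on a -10..10 scale:
# 18 cluster around the centre, 2 hold a marginal (outlier) position.
positions = [0, 1, -1, 2, -2, 0, 1, -1, 0, 2, -2, 1, 0, -1, 1, 0, 2, -1, 9, 10]

print(f"Mean:   {statistics.mean(positions):.2f}")    # ~1.05: barely moved by the outliers
print(f"Median: {statistics.median(positions):.2f}")  # 0.00: the outliers vanish entirely

# A system that "represents" this population by its central tendency reports a
# preference near zero and never surfaces the view held at 9-10 -- the marginal
# position a well-designed deliberative forum would deliberately seek out.
```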
Rob
It strikes me as analogous to thinking that solving world hunger is just about producing more food. My understanding is that there are sufficient calories in the world – the problem lies elsewhere, in justice and distribution. It looks like we agree on most points! So where do we go from here?
Nardine
If we’re going to take this optimistic perspective, that AI has the potential to create a better future for democracy and humanity, then the conversation should be about what commitments are being made to that. What concrete steps and concessions are those in power willing to make to involve people in that better future? Because so far it doesn’t seem like it’s better for all.
About the authors
Dr Nardine Alnemr is a Lecturer at the School of Humanities, Arts and Social Science and a Fellow of the Indo-Pacific Research Centre (IPRC) at Murdoch University in Western Australia.
Dr Rob Weymouth is the Facilitator of Deliberation and Engagement at the Shire of Carnarvon in Western Australia.
Acknowledgements from the Editor
This piece is part of the Digest’s new curated conversation series, where practitioners, scholars, advocates and critics meet to converse on the topics that matter to them in the theory and practice of deliberative democracy.
I’d especially like to thank members of the Democracy R&D Network. The idea for this series came about through an open space workshop at the DRD Conference in Copenhagen, and it was through the Network that I was able to reach so many people. I would like to thank all the people who came forward to take part in this series and the collective effort that has gone into the careful planning, recording and editing of each piece.