Government algorithms are undermining democracy – let's open up their design to the people

Algorithms appear to be in retreat in the UK – for now. Not only have the national governments recently U-turned over their use of algorithms to assign grades to school leavers, but numerous local authorities have also scrapped similar technology used to make decisions on benefits and other welfare services. In many cases, this was because the algorithms were found to produce biased and unfair outcomes.

But why didn’t anyone realise these algorithms would produce such harmful outcomes before they were out in the real world doing real damage? The answer may be that the public are not adequately represented in the processes of developing and implementing new systems. This not only increases the chances of things going wrong but also undermines public trust in government and its use of data, limiting the opportunities for algorithms to be used for good in the future.

As algorithms are increasingly used in ways which change how people access services and restrict their life choices, there are risks that these new opaque systems reduce accountability and fundamentally alter public relationships with government. At its core, this is a threat to democracy.

Read more: A-level results: why algorithms get things so wrong – and what we can do to fix them

There’s a growing body of guidance that says people’s data should be used in a transparent way to maintain their trust. Yet when governments use algorithms to make decisions, if there’s any transparency at all it typically doesn’t go far beyond what’s legally required.

Simply telling the public about what is being done isn’t enough. Instead, transparency should involve asking the public what should and shouldn’t be done. In this case, that means opening up real opportunities for dialogue and creating ways for the public to shape the role of algorithms in their lives.

This requires professionals working in this area to concede that they don't have all the answers and that they can learn something from listening to new ideas. In the UK's A-level case, had the exam authorities spent more time speaking to students, teachers and parents in the early stages of developing their algorithms, these bodies might have been able to anticipate and address some of the problems much earlier and to find ways of doing things better.

How might this work in practice? The health sector, where there is a long tradition of patient and public involvement in delivering services and in research, provides some clues. Members of the public are included on boards determining access requests for medical data. Patient panels have played active roles in shaping how health bodies are governed and advising data scientists in developing research projects.

More broadly, across a range of policy areas, deliberative forms of public engagement can play important roles in shaping policy and informing future directions. Workshops, consensus conferences and citizens’ juries can help identify and understand public attitudes, concerns and expectations around the ways that data is collected and (re)used or about the acceptability of new technologies and uses of algorithms.

These methods bring together diverse members of the public and emphasise collective, socially minded ways of thinking. This can be a valuable approach for addressing the many complex social and ethical issues around the use of algorithms that “expert” or professional teams struggle to resolve on their own. Even with the best of intentions, no government team is likely to have all the technical and policy expertise, or the understanding of the lives of everyone affected, needed to get things right every time.

Public deliberation might consider whether an algorithm is an appropriate tool to use in a particular context or identify which conditions need to be met for its use to be socially acceptable. In the case of the A-level algorithm, public engagement could have clarified (in advance) what would constitute a fair outcome and which data should be used to realise that.

Trust the public

It might be argued that algorithms are too complicated or technical for members of the public to understand. But this just serves as a convenient excuse for carrying on with business as usual in scientific and policy processes.

The growing body of evidence around public engagement in this area consistently points to both the competence and the enthusiasm of the public to engage actively in developing, deploying and governing algorithms. There are even citizen science projects, such as the Serenata de Amor operation in Brazil, that bring members of the public together to develop algorithms for the public good.

Read more: Not just A-levels: unfair algorithms are being used to make all sorts of government decisions

Given the appeal of algorithms for increasing efficiency and providing an illusion of objectivity in complex decision-making, governments' use of them is likely to keep growing. If lessons aren't learnt from the A-levels fiasco, protests like those seen recently will become an increasingly regular feature of public life. Trust in government will be further eroded, and democratic society itself undermined.

Algorithms are not intrinsically bad. There are enormous opportunities to harness the value of data and the power of algorithms to bring benefits across society. But doing so requires a democratisation of the processes through which algorithms are developed and deployed. Fair outcomes are much more likely to be reached through fair processes.

Mhairi Aitken does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.