Can we really trust AI to channel the public’s voice for ministers? | Seth Lazar

What is the role of AI in democracy? Is it just a volcano of deepfakes and disinformation? Or can it – as many activists and even AI labs are betting – help fix an ailing and ageing political system? The UK government, which loves to appear aligned with the bleeding edge of AI, seems to think the technology can enhance British democracy. It envisages a world where large language models (LLMs) are condensing and analysing submissions to public consultations, preparing ministerial briefs, and perhaps even drafting legislation. Is this a valid initiative by a tech-forward administration? Or is it just a way of dressing up civil service cuts, to the detriment of democracy?

LLMs, the AI paradigm that has taken the world by storm since ChatGPT’s 2022 launch, have been explicitly trained to summarise and distil information. And they can now process hundreds, even thousands, of pages of text at a time. The UK government, meanwhile, runs about 700 public consultations a year. So one obvious use for LLMs is to help analyse and summarise the thousands of pages of submissions it receives in response to each. Unfortunately, while they do a great job of summarising emails or individual newspaper articles, LLMs have a way to go before they are an appropriate replacement for civil servants analysing public consultations.

First problem: if you’re doing a public consultation, you want to know what the public thinks, not hear from the LLM. In their detailed study of the use of LLMs to analyse submissions to a US public consultation on AI policy, researchers at the AI startup Imbue found that LLM summaries would often change the meaning of what they were summarising. For instance, in summarising Google’s submission, the LLM correctly identified its support for regulation, but omitted that its support was specifically for risk regulation – a narrow form of regulation that presupposes AI will be used, and which aims to reduce the harms of doing so. Similar problems arise when asking models to string together ideas found across the body of submissions they are summarising. And even the most capable LLMs working with very large bodies of text are liable to fabricate – that is, to make stuff up that was not in the source.

Second problem: if you’re asking the public for input, you want to make sure you actually hear from everyone. In any attempt to harness the insights of a large population – what some call collective intelligence – you need to be particularly attentive not just to points of agreement but also to dissension, and in particular to outliers. Put simply, most submissions will converge on similar themes; a few will offer unusual insight.

LLMs are adept at representing the “centre mass” of high-frequency observations. But they are not yet equally good at picking up the high-signal, low-frequency content where much of the value of these consultations could lie (and at differentiating it from low-frequency, low-signal content). And in fact, you can probably test this for yourself. Next time you’re considering buying something from Amazon, have a quick look at the AI-generated summary of the reviews. It basically just states the obvious. If you really want to know whether the product is worth buying, you have to look at the one-star reviews (and filter out the ones complaining that they had a bad day when their parcel was delivered).

Of course, the fact that LLMs perform poorly at some tasks now doesn’t mean they always will. These might be solvable problems, even if they’re not solved yet. And obviously, how much this all matters depends on what you’re trying to do. What’s the point of public consultation, and why do you want to use LLMs to support it? If you think public consultations are fundamentally performative – a kind of inconsequential, ersatz participation – then maybe it doesn’t matter if ministers receive AI-generated summaries that leave out the most insightful public inputs and throw in a few AI-generated bons mots instead. If it’s just pointless bureaucracy, then why not automate it? Indeed, if you’re really just using AI so you can shrink the size of government, why not cut out the middle man altogether and ask the LLM directly for its views, rather than going to the people?

But, perhaps unlike the UK’s deputy prime minister, the researchers exploring AI’s promise for democracy believe that LLMs should create a deeper integration between people and power, not just another layer of automated bureaucracy that unreliably filters and transduces public opinion. Democracy, after all, is fundamentally a communicative practice: whether through public consultations, through our votes, or through debate and dissent in the public sphere, communication is how the people keep their representatives in check. And if you really care about communicative democracy, you probably believe that all and only those with a right to a say should get a say, and that public consultation is necessary to crowdsource effective responses to complex problems.

If those are your benchmarks, then LLMs’ tendency to elide nuance and fabricate their own summary information, as well as to ignore low-frequency but high-signal inputs, should be reason enough to shelve them, for now, as not yet safe for democracy.

The Guardian
