Mr. Robot goes to Washington: How AI will change democracy

By Jamie Susskind

Democracy is at a crossroads. With the midterms approaching and the 2020 presidential election looming on the horizon, millions of Americans are set to place their faith in candidates whose rhetoric is democratic but who are open about their intention to compromise the very institutions that curtail the powers of elected leaders. Not just in the United States but around the world, from Brazil to Hungary, voters are turning to authoritarian leaders who promise to unleash the power of the people, but whose definition of “the people” excludes many who are not like them. Remarkably, however, this ‘illiberal’ turn is not the greatest challenge facing the idea of government by the people.

Increasingly, digital technology is eroding the assumptions and conditions that have underpinned democracy for centuries. By now, fake news and polarization are familiar subjects to those interested in democracy’s health. Just last week Facebook announced that it was doubling its ‘security and community’ staff to 20,000. But in the future, we’ll have to grapple with the much more significant idea of AI Democracy, asking which decisions could and should be taken by powerful digital systems, and whether such systems might better represent the people than the politicians we send to Congress.

It’s a prospect that holds possible glories but also terrible risks. “Is democracy, such as we know it,” asked Henry David Thoreau in 1849, “the last improvement possible in government? Is it not possible to take a step further towards recognizing and organizing the rights of man?” The same question arises today.

This book excerpt has been edited for length and clarity.

We tend to talk to those we like and read news that confirms our beliefs, while filtering out information and people we find disagreeable. Technology increasingly allows us to do so. If you are a liberal who uses Twitter to follow races for the U.S. House of Representatives, 90 per cent of the tweets you see (on average) will come from Democrats; if you are a conservative, 90 per cent of the tweets you see will typically come from Republicans.

In the early days of the internet it was predicted that we would personally customize our own information environment, choosing what we would read on the basis of its political content. Increasingly, however, the work of filtering is done for us by automated systems that choose what is worthy of being reported or documented, and decide how much context and detail is necessary. Problematically, this means that the world I see every day may be profoundly different from the one you see.

The term fake news was initially used to describe falsehoods that were propounded and given wide circulation on the internet. Now even the term itself has been drained of meaning, used to describe anything the speaker disagrees with. Although some social media platforms have taken steps to counter it, the nature of online communication (as currently engineered) is conducive to the rapid spread of misinformation. The result is so-called post-truth politics. Think about this for a moment: in the final three months of the 2016 US presidential campaign, the top twenty fake news stories on Facebook generated more shares, reactions, and comments than the top twenty stories from the major news outlets combined (including the New York Times, Washington Post, and Huffington Post). Seventy-five per cent of people who saw fake news headlines believed them to be true.

Unfortunately, our innate tendency toward group polarization means that members of a group who share the same views tend, over time, to become more extreme in those views. As Cass Sunstein puts it, “it is precisely the people most likely to filter out opposing views who most need to hear them.” If matters carry on as they are, we will have fewer and fewer common terms of reference and shared experiences. If that happens, rational deliberation will become increasingly difficult. How can we agree on anything when the information environment encourages us to disagree on everything? “I am a great believer in the people,” Abraham Lincoln is supposed to have said. “If given the truth, they can be depended upon to meet any national crisis. The great point is to bring them the real facts.”

Who will bring us the real facts?

Importantly, these problems aren’t inevitable. We can find solutions. Social network proprietors are slowly taking steps to regulate their discussion spaces. Software engineers, like those behind loomio.org, are trying to create ideal deliberation platforms using code. Taiwan’s vTaiwan platform has enabled consensus to be reached on several matters of public policy, including online alcohol sales, ride-sharing regulations, and laws concerning the sharing economy and Airbnb. Digital fact-checking and troll-spotting are rising in prominence, and the process of automating this work has begun, albeit imperfectly. These efforts are important. The survival of deliberation in the future will depend in large part on whether they succeed.

What’s clear is that a marketplace of ideas, attractive though the idea sounds, may not be what’s best. If content is framed and prioritized according to how many clicks it receives (and how much advertising revenue flows as a result) then truth will often be the casualty. If the debate chamber is dominated by whoever has the power to filter, or unleashes the most ferocious army of bots, then the conversation will be skewed in favour of those with the better technology, not necessarily the better ideas. Deliberative democracy needs a forum for civil discussion, not a marketplace of screaming merchants.

In the future, as we’ve seen, those who control digital platforms will increasingly police the speech of others. At present, tech firms are growing bolder about restricting obviously hateful speech. Few among us will have shed a tear, for instance, when Apple removed from its platform several apps that claimed to help ‘cure’ gay men of their sexuality. Nor when several content intermediaries stopped carrying content from right-wing hate groups after the white supremacist demonstrations in Charlottesville in mid-2017. (The delivery network Cloudflare terminated the account of the neo-Nazi Daily Stormer. The music streaming service Spotify stopped providing music from “hate bands.” The gaming chat app Discord shut down accounts associated with the Charlottesville fracas. Facebook banned a number of far-right groups with names like “Red Winged Knight,” “White Nationalists United,” “Right Wing Death Squad,” and “Vanguard America.”)

But what about when Facebook removed the page belonging to the mayor of a large Kurdish city, despite it having been “liked” by more than four hundred thousand people? According to Zeynep Tufekci, Facebook took this action because it was unable to distinguish “ordinary content that was merely about Kurds and their culture” from propaganda issued by the PKK, a group designated as a terrorist organization by the U.S. State Department. In Tufekci’s words, it “was like banning any Irish page featuring a shamrock or a leprechaun as an Irish Republican Army page.”

My purpose is not to critique these individual decisions, of which literally millions are made every year, many by automated systems. The bigger point is that the power to decide what is considered so annoying, disgusting, scary, hurtful, or offensive that it should not be uttered at all has a significant bearing on the overall quality of our deliberation. It’s not clear why so-called “community guidelines” would be the best way to manage this at a systemic level: the ultimate “community” affected is the political community as a whole. To pretend that these platforms are like private debating clubs is naïve: they’re the new agorae and their consequences affect us all.

The idea of unfettered freedom of speech on digital platforms is surely a non-starter. Some forms of extreme speech should not be tolerated. Even in the nineteenth century, the philosopher John Stuart Mill accepted that certain restrictions were necessary. In his example, it’s acceptable to tell a newspaper that “corn-dealers are starvers of the poor” but not acceptable to bellow the same words “to an excited mob assembled before the house of a corn-dealer.” Mill understood that context matters. We certainly shouldn’t be squeamish, then, about rules that focus on the form of speech as opposed to its content. Just as it’s not too burdensome to refrain from screaming in a residential area at midnight, we can surely accept that online discourse should be conducted according to rules that clearly and fairly define who can speak, when, for how long, and so forth. In the future this will be more important than ever: Mill’s “excited mob” is much easier to convene, whether physically or digitally, using the technologies now at our disposal.

It would be easy to blame post-truth politics on digital technology alone. But the truth (!) is that humans have a long and rich history of using deceit for political purposes. Richard Hofstadter’s 1963 description of the “paranoid style” in public life (“heated exaggeration, suspiciousness, and conspiratorial fantasy”) could just as well describe today. So too could George Orwell’s complaint, in his diary of 1942, that:

We are all drowning in filth. When I talk to anyone or read the writings of anyone who has any axe to grind, I feel that intellectual honesty and balanced judgment have simply disappeared from the face of the earth . . . everyone is simply putting a “case” with deliberate suppression of his opponent’s point of view, and, what is more, with complete insensitiveness to any sufferings except those of himself and his friends.

AI Democracy

Looking further into the future: one of the main purposes of democracy, as we have seen, is to unleash the information and knowledge contained in people’s minds and put it to political use. But if you think about it, elections and referendums do not yield a particularly rich trove of information. A vote on a small number of questions, usually which party or candidate to support, produces only a small number of data points. Put in the context of an increasingly quantified society, the amount of information generated by the democratic process, even when private polling is taken into account, is laughably small. Recall that by 2020 it’s expected that we’ll generate the same amount of information every couple of hours as we did from the dawn of civilization until 2003. This data will provide a log of human life that would have been unimaginable to our predecessors. This prompts the question: why would we govern on the basis of a tick in a box every few years?

Future Politics: Living Together in a World Transformed by Tech, by Jamie Susskind

By gathering together and synthesizing large amounts of the available data–giving equal consideration to everyone’s interests, preferences, and values–we could create the sharpest and fullest possible portrait of the common good. Under this model, policy could be based on an incomparably rich and accurate picture of our lives: what we do, what we need, what we think, what we say, how we feel. The data would be fresh and updated in real time rather than in a four- or five-year cycle. It would, in theory, ensure a greater measure of political equality–as it would be drawn from everyone equally, not just those who tend to get involved in the political process. And data, the argument runs, doesn’t lie: it shows us as we are, not as we think we are.

Machine-learning systems are increasingly able to infer our views from what we do and say, and the technology already exists to analyze public opinion by processing mass sentiment on social media. Digital systems can also predict our individual views with increasing accuracy. Facebook’s algorithm, for instance, needs only 10 “likes” before it can predict your opinions better than your colleagues can, 150 before it can beat your family members, and 300 before it can predict your opinions better than your spouse can. And that’s on the basis of a tiny amount of data compared to the amount that will be available in the future.
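To make the idea concrete, here is a minimal sketch in Python of the kind of inference described above: a simple classifier that learns to predict a binary opinion from a matrix of “likes.” Everything in it is synthetic and invented for illustration; real systems are vastly larger and more sophisticated, and this is not a description of Facebook’s actual method.

# A toy sketch: predicting a binary opinion from a matrix of "likes."
# The data is synthetic; real systems are far larger and richer.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_users, n_pages = 2000, 300
likes = rng.integers(0, 2, size=(n_users, n_pages))  # 1 = user liked the page

# Invented ground truth: the opinion is driven by a small subset of pages.
signal_pages = rng.choice(n_pages, size=20, replace=False)
opinion = (likes[:, signal_pages].sum(axis=1) > 10).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    likes, opinion, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

Even this toy model recovers the hidden pattern almost perfectly, because the boundary is simple and the data plentiful; the unsettling point is how little signal such systems need.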

The logical next question is this: what role will artificial intelligence come to play in governing human affairs?

We know that there are already hundreds, if not thousands, of tasks and activities formerly done only by humans that can now be done by AI systems, often better and on a much greater scale. These systems can now beat the most expert humans at a growing list of games, from chess and Go to poker. We have good reason to expect not only that these systems will grow more powerful, but that their rate of development will accelerate over time.

Increasingly, we entrust AI systems with tasks of the utmost significance and sensitivity. On our behalf they trade stocks and shares worth billions of dollars, report the news, and diagnose our fatal diseases. In the near future they will drive our cars for us, and we will trust them to get us there safely. We are already comfortable with AI systems taking our lives and livelihoods in their (metaphorical) hands. As they become explosively more capable, our comfort will be increasingly justified.

Given all this, it’s not unreasonable, let alone crazy, to ask under what circumstances we might allow AI systems to partake in some of the work of government. If Deep Knowledge Ventures, a Hong Kong-based investor, can appoint an algorithm to its board of directors, is it so fanciful to consider that in the future we might appoint an AI system to the local water board or energy authority? Now is the time for political theorists to take seriously the idea that politics, just like commerce and the professions, may have a place for artificial intelligence.

In the first place, we might use simple AI systems to help us make the choices democracy requires of us. Apps already exist to advise us who we ought to vote for, based on our answers to questions. One such app brands itself as ‘matchmaking for politics’, which sounds a bit like turning up to a blind date to find a creepy politician waiting at the bar. In the future such apps will be considerably more sophisticated, drawing not on questionnaires but on the data that reveals our actual lives and priorities.
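For a sense of how such ‘matchmaking’ might work under the hood, here is a hedged sketch in Python: score each party by its average agreement with a user’s stated positions on a handful of issues. The parties, issues, and numbers are all fictional, and real voting-advice applications use far richer models than this.

# Toy voting-advice matcher. Positions run from -1 (strongly against)
# to +1 (strongly for); every party and issue here is invented.
import numpy as np

ISSUES = ["carbon tax", "rent control", "defense spending", "open data"]

PARTY_POSITIONS = {
    "Party A": np.array([ 1.0,  0.5, -0.5,  1.0]),
    "Party B": np.array([-1.0, -0.5,  1.0,  0.0]),
    "Party C": np.array([ 0.5,  1.0,  0.0,  0.5]),
}

def match_scores(user: np.ndarray) -> dict:
    """Mean agreement per party: +1 = identical positions, -1 = opposite."""
    return {
        party: float(np.mean(1 - np.abs(user - positions)))
        for party, positions in PARTY_POSITIONS.items()
    }

user = np.array([0.8, 0.2, -1.0, 1.0])  # one citizen's questionnaire answers
for party, score in sorted(match_scores(user).items(),
                           key=lambda kv: kv[1], reverse=True):
    print(f"{party}: {score:+.2f}")

The future systems the author envisages would simply replace the hand-entered questionnaire vector with positions inferred from behavioral data.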

As time goes on, we might even let such systems vote on our behalf in the democratic process. This would involve delegating authority (in matters big or small, as we wish) to specialist systems that we believe are better placed to determine our interests than we are. Taxation, consumer welfare, environmental policy, financial regulation: these are all areas where complexity or ignorance may encourage us to let an AI system make a decision for us, based on what it knows of our lived experience and our moral preferences. In a frenetic Direct Democracy of the kind described earlier in this chapter, delegating your vote to a trusted AI system could save a lot of hours in the day.

A still more advanced model might involve the central government making inquiries of the population thousands of times each day, rather than once every few years, without having to disturb us at all. AI systems could respond to government nano-ballots on our behalf, at lightning speed, and their answers would not need to be confined to a binary yes or no. They could contain caveats (my citizen supports this aspect of the proposal but not that one) or expressions of intensity (my citizen mildly opposes this but strongly supports that). Such a model would have a far greater claim to taking the interests of the whole population into account than the model we live with today.
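As a thought experiment, a nano-ballot response richer than yes/no might be represented along the lines of the Python sketch below. The names and fields (ClausePosition, NanoBallotResponse, the 0-to-1 intensity scale) are entirely hypothetical, invented here only to show how caveats and intensities could be encoded.

# Hypothetical data shape for a "nano-ballot" response: one stance per
# clause of a proposal, each with an intensity and an optional caveat.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ClausePosition:
    clause_id: str
    stance: str                   # "support" | "oppose" | "abstain"
    intensity: float              # 0.0 (mild) .. 1.0 (strong)
    caveat: Optional[str] = None  # e.g. "only if means-tested"

@dataclass
class NanoBallotResponse:
    citizen_id: str
    proposal_id: str
    positions: List[ClausePosition] = field(default_factory=list)

# An AI delegate answering on one citizen's behalf (all values invented).
response = NanoBallotResponse(
    citizen_id="c-102",
    proposal_id="water-tariff",
    positions=[
        ClausePosition("metering", "support", 0.8),
        ClausePosition("flat-rate-rise", "oppose", 0.3,
                       caveat="acceptable if means-tested"),
    ],
)
print(response)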

In due course, AIs might also take part in the legislative process, helping to draft and amend legislation. And in the long run, we might even allow AIs, incorporated as legal persons, to ‘stand’ for election to administrative and technical positions in government.

AI systems could play a part in democracy while remaining subordinate to traditional democratic processes like human deliberation and human votes. And they could be made subject to the ethics of their human masters. It should not be necessary for citizens to surrender their moral judgment if they don’t wish to.

There are nevertheless serious objections to the idea of AI Democracy. Foremost among them is the transparency objection: can we call a system democratic if we don’t really understand the basis of the decisions made on our behalf? Although AI Democracy could make us freer or more prosperous in our day-to-day lives, it would also, in a sense, enslave us to the systems that decide on our behalf. One can see Pericles shaking his head in disgust.

In the past humans were prepared, in the right circumstances, to surrender their political affairs to powerful unseen intelligences. Before they had kings, the Hebrews of the Old Testament lived without earthly politics. They were subject only to the rule of God Himself, bound by the covenant that their forebears had sworn with Him. The ancient Greeks consulted omens and oracles. The Romans looked to the stars. These practices now seem quaint and faraway, inconsistent with what we know of rationality and the scientific method. But they prompt introspection. How far are we prepared to go–what are we prepared to sacrifice–to find a system of government that actually represents the people?

From FUTURE POLITICS: Living Together in a World Transformed by Tech by Jamie Susskind. Copyright © 2018 by Jamie Susskind and published by Oxford University Press. All rights reserved.
