Artificial Intelligence, Authoritarianism and the Future of Political Systems




Akın Ünver – EDAM, Oxford CTGA, Kadir Has University


“Enlightenment is man’s emergence from his self-imposed immaturity. Immaturity is the inability to use one’s understanding without guidance from another.”

— Immanuel Kant (Answering the Question: What is Enlightenment? 1784)


In February 2017, Scientific American featured a special issue revolving around the question: ‘Will democracy survive big data and artificial intelligence?’[1] According to the issue, humanity is undergoing a profound technological transformation, and the advent of large-scale social and behavioural automation will change how human societies are organized and managed. In August 2017, Vyacheslav Polonski, a researcher at Oxford University, asserted in a World Economic Forum article that artificial intelligence had ‘silently taken over democracy,’ citing the impact of A.I.-powered digital advertisements, social media platform power, and mass communication spoilers (bots and trolls) on political processes.[2] Societies online and offline, largely without their consent or awareness, are increasingly experiencing the effects of large-scale automation. Political systems, elections, decision-making, and citizenship, too, are increasingly driven by aspects or by-products of automation and algorithmic systems at different systemic levels. These systems range from targeted political advertisements to facial recognition, and from automated interactions that intensify polarization to Internet-based mass-participation opinion polls that can easily be skewed by automation.


Digital communication is at the epicentre of this critical historical interaction between politics and automated systems. Political communication professor Andrew Chadwick coined the term ‘hybrid media system’ to refer to the multitude of roles performed by social media platforms.[3] According to hybrid media system theory, platforms such as Twitter, Facebook, or Instagram are not just communication tools; they also perform news-media roles during emergencies, as well as political assembly and protest roles during contested events such as elections. The algorithmic structure of these platforms therefore increasingly shapes political messaging, information-seeking, and citizen engagement. In the words of José van Dijck, “Social media are inevitably automated systems that engineer and manipulate connections.”[4] In that regard, Facebook is not a passive platform that simply connects friends and relatives. It is a living and breathing political actor that actively harvests personal information from its users and sells it to third parties.[5] Such data can be used for targeted advertisement, micro-profiling, and political behavioral analysis, as the world most recently observed during the Cambridge Analytica scandal.[6] Twitter, Amazon, Netflix, and other algorithmic platforms are likewise structured upon the harvesting and exploitation of vast quantities of granular human data, which are in turn used to profile and catalog the behavioral patterns of societies.[7]


Just as media platforms are hybrid, so are data types. ‘Hybrid data’ refers to the multi-purpose nature of the human footprint online; namely, how people’s ‘like’s, retweets, and check-in decisions can be harvested and cross-fed into each other to generate a multi-dimensional snapshot of micro- and macro-level determinants of social behavior.[8] A user’s personal fitness tracker, check-in location information, and Google search history, combined, can yield a very granular set of information about that person’s health, purchasing behavior, and political preferences. This personal data hybridity, when cross-matched across people with similar search histories, tastes, and online order patterns, creates the information infrastructure of mass surveillance and becomes the largest-ever pool of social monitoring and tracking.[9] Such surveillance is no longer as labor-intensive as it used to be; mass profiling infrastructures, too, are largely algorithm-driven. The algorithms, programmers, and technology companies responsible for developing and maintaining these structures of automation thus form a new source of power that is partially independent of states as well as international political institutions.[10] As Internet connectivity and social platform membership explode globally, the share of the world’s population living under these new forms of automated power relations increases exponentially, rendering the impact of automated politics historic.
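The cross-feeding of separate data streams that ‘hybrid data’ describes can be sketched in a few lines. The data below (search queries, check-ins, step counts) and the merging logic are purely hypothetical, chosen only to illustrate how heterogeneous signals collapse into one multi-dimensional profile:

```python
from collections import Counter

def build_profile(searches, checkins, fitness):
    """Merge heterogeneous data streams into one behavioral profile."""
    # Interests inferred from search terms
    interests = Counter(term for query in searches for term in query.split())
    return {
        "top_interests": [t for t, _ in interests.most_common(3)],
        "frequent_places": Counter(checkins).most_common(2),
        "avg_daily_steps": sum(fitness) / len(fitness),
    }

# Hypothetical traces from one user, combined across data types
profile = build_profile(
    searches=["violin bow price", "violin strings", "election polls"],
    checkins=["concert_hall", "gym", "concert_hall"],
    fitness=[8000, 9500, 7200],
)
```

Each stream is innocuous alone; merged, they expose health, consumption, and political signals at once, which is precisely the surveillance risk the paragraph above identifies.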


Image 1 – Timeline of the evolution of Google Search algorithms (Source: Google)


Modern algorithmic structures inherit from cybernetic theory, introduced in the late 1940s by mathematician Norbert Wiener. Wiener argued that the behavior of all large systems (bureaucratic, organizational, and mechanical) could be adjusted and maintained through regular feedback.[11] By ‘learning’ through constant feedback, systems adapt and eventually perform better. It is on the basic premises of cybernetic theory that some of the most popular forms of automated structures (such as machine learning, blockchain, and decentralized data mapping) operate. Since algorithms are trained on live human data, they rapidly come to act in ways that emulate human behavior. A search algorithm, for example, is designed to provide the most relevant results for a given query string. When a violinist is searching for a new bow, it is algorithmically more meaningful to curate ads, news feed items, and connection suggestions based on violins or classical music for that user, instead of, say, golf.[12] This saves time and renders online interactions more meaningful and relevant. But the algorithm does more than that: it improves by studying the search histories of millions of people, their second and third follow-up search strings, and page-view duration statistics, making the search engine faster and better able to address users’ follow-up queries.
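The cybernetic feedback loop described above can be illustrated with a minimal re-ranking sketch. The scores, URLs, and click log here are invented for illustration (no real search engine works this simply): each recorded click on a result feeds back into its relevance score, so the system ‘learns’ over time which results satisfy a query.

```python
def rerank(results, click_log, query, alpha=0.1):
    """Boost each result's base score by accumulated click feedback."""
    def score(r):
        clicks = click_log.get((query, r["url"]), 0)
        return r["base_score"] + alpha * clicks  # feedback adjusts output
    return sorted(results, key=score, reverse=True)

results = [
    {"url": "violin-bows.example", "base_score": 0.6},
    {"url": "golf-clubs.example", "base_score": 0.7},
]
# Feedback: users searching 'buy bow' repeatedly clicked the violin page
clicks = {("buy bow", "violin-bows.example"): 5}
ranked = rerank(results, clicks, "buy bow")
```

After five clicks of feedback, the violin result overtakes the initially higher-scored golf result, which is the Wienerian point: output adjusts continuously to the system's own observed performance.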


How, then, do we contextualize the political implications of these ‘automated structures of relevance’? After all, none of these algorithms was initially designed to exert such vast political, economic, or social impact. The very code structures that enable a violinist to find more music-related content online are also polarizing societies, intensifying political echo chambers, and distorting meaningful political debate in digital space.[13] Whose fault is it? Are societies becoming less tolerant due to technological change? Are governments exploiting these technologies of scale to re-assert their authoritarian dominance? Is this an evolutionary curve that will settle in time, or are algorithmic structures permanently in place, influencing state-society relations for the long haul?


The current debate boils down to whether technology companies strategically deploy biased algorithms to reinforce their position of power in the ‘A.I. arms race’; namely, automated information retrieval, engagement maximization, and content curation.[14] The business model within which big tech companies operate depends on engagement metrics: likes, retweets, comments. To maximize profit, tech companies must maximize engagement, which inherently favors content that elicits as much response as possible from as many people as possible.[15] This automatically triggers algorithmic principles that generate extreme or emotional behavior through similarly extreme or emotional content. From an algorithm’s point of view, whether users respond to positive or negative content is irrelevant, since what ultimately matters is the maximization of quantified user statistics. As long as user response is optimized, an algorithm is doing its job, whether through bloody war videos or kitten photos. As quantified engagement is cashed in as advertisement revenue, the resultant media environment favors extreme content, leading to a toxic interaction culture across all social media platforms. During tense political and social episodes in particular, such as protests, elections, or diplomatic escalation, this social media environment exerts a disproportionate effect on misinformation through fake news and automated accounts known as bots.[16]
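The valence-blindness of engagement maximization can be made concrete with a toy feed ranker. The item names and engagement figures are hypothetical; the point is structural: the objective function never inspects sentiment, only predicted interactions.

```python
def rank_feed(items):
    """Order items purely by predicted engagement, ignoring sentiment."""
    return sorted(items, key=lambda i: i["predicted_engagement"], reverse=True)

feed = rank_feed([
    {"id": "calm_news",    "sentiment": "neutral",  "predicted_engagement": 120},
    {"id": "outrage_post", "sentiment": "negative", "predicted_engagement": 540},
    {"id": "kitten_photo", "sentiment": "positive", "predicted_engagement": 310},
])
# The negative and positive items both outrank the neutral one:
# the sort key never reads the 'sentiment' field.
```

War video or kitten photo, the ranker is indifferent; only the engagement number moves an item up the feed, which is exactly the incentive structure the paragraph above describes.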


Although popular, social media is not the only avenue for exploring the A.I.-politics nexus. Another prominent debate concerns automating decisions; namely, outsourcing the day-to-day machinations of the bureaucracy to machines. Most champions of the ‘A.I. bureaucracy’ argument favor outsourcing low-risk decision-making, rather than policy formulation, to optimize bureaucratic size and depth.[17] In the name of making governments and bureaucratic apparatuses more efficient, algorithmic systems are said to be gradually taking over the functions of the rank-and-file bureaucracy.[18] Modern bureaucracy, at least as defined by Max Weber, is the ideal candidate for an algorithm-based, automated habitus: “Bureaucracy is an organisational structure that is characterised by many rules, standardised processes, procedures and requirements, number of desks, meticulous division of labour and responsibility, clear hierarchies and professional, almost impersonal interactions between employees”.[19] A.I. can indeed solve some of the most chronic dysfunctions of the state, such as corruption, inefficiency, and ego politics. It can offer an efficient centralized response to a multitude of citizen requests, resolve resource-allocation problems, remain immune to human frailties such as fatigue or burnout, and perform non-decision tasks such as speech transcription, translation, document drafting, and archiving far better and faster than any human-based bureaucracy. However, the erosion of the bureaucratic apparatus and the transfer of tasks to algorithmic structures bereft of human judgment will nullify one of the most potent sources of authority for the modern state: a rational bureaucratic workforce. With such a significant power source automated and human influence minimized, some states might use A.I. as a guardian of reinforced totalitarianism. Furthermore, pre-existing problems with A.I. transparency and code accountability will be even more relevant in this case, as biases in programming will have a disproportionate effect on administration: mistakes are amplified through the sheer volume and scale of algorithmic decision-making.
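The low-risk/high-risk split that champions of ‘A.I. bureaucracy’ propose amounts to a triage rule. The request types, risk scores, and threshold below are invented for illustration; the sketch shows only the shape of the idea: routine, low-risk requests resolve automatically, while everything else escalates to a human official.

```python
def triage(request, risk_threshold=0.3):
    """Route a citizen request: auto-approve low-risk routine cases,
    escalate everything else to a human decision-maker."""
    routine_types = {"document_copy", "address_change"}
    if request["type"] in routine_types and request["risk_score"] < risk_threshold:
        return "auto_approved"
    return "escalate_to_human"

decisions = [triage(r) for r in [
    {"type": "document_copy",  "risk_score": 0.05},  # routine, low risk
    {"type": "zoning_appeal",  "risk_score": 0.05},  # low risk, not routine
    {"type": "address_change", "risk_score": 0.60},  # routine, high risk
]]
```

Note that the accountability problem raised above lives inside `routine_types` and `risk_threshold`: whoever sets those values quietly decides which citizens ever reach a human, and any bias in them is applied uniformly at scale.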

