Reconstructing the Social World For Profit: Platforms and Data’s Emerging Social Order

Nick Couldry | London School of Economics and Political Science

In an age characterized by powerful new infrastructures of connection, a fundamental question for humanity is whether the outcome of such infrastructures is to be welcomed. Few would disagree that, in principle, connection between human beings is good: indeed every form of friendship and civil solidarity is based on connection, and today even the organic relations of family are sustained, in part, through technologies for connecting family members while they are spatially separated. So, if connection between individual human beings is generally a good, does that mean that the infinite connection between humans and things enabled today online is, in turn, a massive good? Only if we follow a crude utilitarian calculus that simply adds up the benefits of each connecting link, without considering the unintended side effects of connection on such a massive scale. So let me try to ask a more precise question. Is the large-scale connectivity of human beings, enabled in the past three decades by computing-based technologies, a good thing for humanity considered overall, and in terms of its wider consequences? In other words, is the social order being constructed through such connections good, or could it alternatively be bad? What if it might be a massive bad, even if an unintended one? That is the possibility on which I want to reflect.

The institutions that we normally associate with possibilities of connection are based on media technologies. But I will not be talking here about traditional media (television, radio and the press). I will be talking instead about the new forms of data-driven media that have risen to prominence in the last decade or two. The case of news quickly illustrates the change in the media landscape. When I led a study of news consumption in the UK between 2003 and 2006, the main sources for news were the traditional ones: television, radio and the press, supplemented in a limited way by particular websites and some email-based discussion lists. Today, as the research of Ofcom and other regulators confirms, many people, especially young people, do not get their news from those traditional sources, but from social media such as Facebook’s main platform or those of its subsidiaries Instagram and WhatsApp, video-sharing platforms such as Google’s YouTube, or simply from various forms of search engine use, again usually Google-driven. Those large-scale digital platforms are increasingly positioning themselves as packagers of traditional news sources, generating disputes with traditional media that some governments, such as Australia’s, have tried to regulate. What will interest me here, however, are the processes of data extraction and consumer monitoring which underlie the functioning of digital platforms, and the role that platforms are playing not only in the circulation of media, but in the conduct of social life more generally.

I want to ask how we should respond ethically and morally to the implications for social life and institutional power that flow from the inexorable rise of digital media platforms. After a brief historical reflection, I will discuss two alternative models for understanding the rise of digital platforms and, more widely, the rise of Big Data discourses: surveillance capitalism and data colonialism. The latter is the theoretical model with which I personally have been associated, and I will focus upon it, not so much for that reason, but because there are important bridges to be built between the diagnosis that we are entering a new historical phase of data colonialism, distinct, that is, from historical colonialism, and wider questions raised by Pope Francis himself about the dangerous imbalance of humanity’s relationship to technology. That will enable me to bring into sharper focus the challenge announced in the 2015 Encyclical (Pope Francis 2015): the need to rethink our relations to technology and the instrumental view of the world that historically has been associated with our uses of technology, at least in the so-called ‘West’.

What is going on with data extraction today?

We do not have to look far in business and financial commentary to find positive readings of what is happening with data extraction today, even if some versions of it (for example, on increasingly controversial platforms such as Facebook) have recently attracted some criticism.

There are basically three advantages which mainstream business commentators find in the emergence of a world where the sorts of continuous data extraction that characterize social media platforms have become the norm, not the exception.

The first advantage, most obviously, is value extraction. As Microsoft CEO Satya Nadella put it half a decade ago: “the core capability of being able to create value . . . comes from being able to do machine learning and AI at scale. . . But in order to do that you need data and LinkedIn represents that when it comes to the professional network” (quoted in the Financial Times, 16 June 2016). Nadella leads a corporation that benefits hugely from ‘connection’ between human beings, because it is this, and only this, that stimulates the continuous online presence of human beings from which the continuous extraction of data is possible; and it is from such data that economic value can in turn be extracted.

The second advantage is, on the face of it, more subtle, since it involves reorganizing the actions that human beings would until now have performed themselves, using their own bodies or some simple tools, without any opportunity for data extraction. It involves reorganizing such simple data-free processes and actions into ‘smart’ processes that bring continuous possibilities of data extraction. Some leading thinkers in the marketing field have even gone so far as to see in new smart consumer objects the gateway to a new age of marketing that they call “The Age of Continuous Connection” (Siggelkow and Terwiesch 2019). What is announced is a form of delegating human activities that enables a new form of “continuous connection” whose highest stage they propose is “automatic execution”: “in an automatic-execution strategy, customers authorize a company to take care of execution, and from that point on the company handles everything else”. The authors’ example is a smart fridge which will automatically order more milk when it senses that you need it. But, the authors add, things are not quite so simple. For how will the smart fridge know that you actually need more milk today? “Naturally”, they add, “only after checking our calendar to make sure we’re not going on vacation” (ibid.). Delegating our everyday routines to smart devices thus also requires giving those devices access to considerable information about our social interactions and movements, data that might previously have been regarded as exclusive and personal to us.

The third advantage follows once the idea of delegation has been accepted: it is then just a small step to generalize this into a model where the whole natural and non-natural environment becomes embedded in sensors which extract, transmit and store data of all sorts for the benefit of a variety of corporations and other institutions. One of the leading evangelists for not just smart devices, but a wider “digital transformation”, as he calls it, is the US entrepreneur Thomas Siebel. He has written: “I expect that in the next few years virtually everything will have become a computer: from eyeglasses to pill bottles, heart monitors, refrigerators, fuel pumps, and automobiles . . . The basic idea of Internet of Things is to connect any device . . . to the internet, so that it can send and receive data” (Siebel 2019: 112, 45). What are we to make of this?

It should already be clear that these are not trivial changes that are proposed in the organization of social life, but potentially revolutionary ones – certainly in the eyes of these writers, but also from many other perspectives. Siebel, among others, even claims that the new ‘digital transformation’ – enabled by Big Data, Artificial Intelligence, massively expanded and more flexible cloud computing, and smart devices with the capacity to extract data from everywhere – will ‘extend the length and quality of human life’ (2019: 27). That is not a trivial goal.

But there are surely other ways of looking at these changes. I want to propose one in particular. We need, like Siebel, to interpret these changes in the round, and in terms of their broadest implications, but paying rather more attention than he does to the concentrations of power that result from such a massive increase in the extraction and processing of data. From this perspective, what we are seeing is not an augmentation of human life, but rather the reconfiguration of social life – and the things we interact with – for one end only: not the extension of human capacities, but the optimization of economic value through the extraction of data. From this power-sensitive perspective, what is under way is not an empowering of human life, but its increasing instrumentalization for external, corporate goals: it is those corporate goals that are thereby ‘empowered’, not ours, as human beings.

To help us see what is at stake here, let me quote from Pope Francis himself and the 2015 Encyclical Laudato Si’: On Care for Our Common Home which the leading environmentalist Bill McKibben has called “one of the most influential documents of recent times” (McKibben 2015: 40):

The basic problem goes even deeper: it is the way that humanity has taken up technology and its development according to an undifferentiated and one-dimensional paradigm [that] exalts the concept of a subject who, using logical and rational procedures, progressively approaches and gains control over an external object . . . attempting to extract everything possible from them while frequently ignoring or forgetting the reality in front of us (Pope Francis, 2015, 66-67, original emphasis).

This quotation is not specifically concerned with computers or data, but its thought fits very well with how we are currently using them, and with the implications for social power that I have just noted.

Indeed, Pope Francis’ recently expressed concern fits very well also with a prediction for the long-term consequences of our instrumentalized use of computing technology made at the very dawn of the computer age by one of its founders, the mathematician Norbert Wiener, in the original preface to his famous book Cybernetics, published in 1948. The clarity of Wiener’s vision of the potential social costs of a technology that he had done so much to help invent remains, to this day, breathtaking:

It has long been clear to me that the modern ultra-rapid computing machine was in principle an ideal central nervous system to an apparatus for automatic control . . . Long before . . . public awareness of the atomic bomb, it had occurred to me that we were here in the presence of another social potentiality of unheard-of importance for good and for evil (in Wiener 2013: 29).

He added: “there are those who hope that the good of a better understanding of man and society which is offered by this new field of work [cybernetics] may outweigh the incidental contribution we are making to the concentration of power . . . I write in 1947 and I am compelled to say that it is a very slight hope” (ibid.). In one sense, Wiener was right to be pessimistic, since almost no attention was paid to his warnings in the eight decades that followed, as computers were introduced into everyday life.

The fact that such a clear warning was ignored requires us to ask not only what is going on with data extraction today, but also how we got to this point. It is too easy to imagine a vast corporate conspiracy to rule the world for evil purposes, but there is no evidence of that (and indeed too much of what has happened has emerged through unintended side-effects of other things). It makes more sense to interpret this history in terms of many convergent developments, none of which would have been decisive on their own, but which, taken together, have come to change the very nature of the social order, the very possibilities of how social life can be ordered.

Seven stages in that gradual convergence can be distinguished. Let me sketch them, although there is no time to go into any detail.

At the start was a feature of how computers, as we know them, function: they regularly capture their changes of state in an auto-archive which provides the basis for the computer’s future operations. The first to note the deep social implications of this mundane aspect of computing was Philip Agre (Agre 1994), who also noted that not just anything can be recorded in a computer’s archive of what has happened to it, but only an action that, as he put it, fits with a computer’s “grammar of action”. Put simply, I may smile at my computer keyboard or screen, but the smile will leave no trace in its memory, since a smile on an unconnected human body is not readable by the computer as ‘anything’ (until, that is, the computer acquires a camera which can capture an image of that smile, read it as a smile, and then store a trace of that reading as something determined in advance to be significant for the computer’s operations). This basic first point, however, acquires completely new significance when, secondly, computers become connected in effective ways to each other, so that one computer is able to capture data archived on other computers. This happened in the early 1990s with the emergence of the internet as a general resource in daily life.

The next key step came when that abstract space of computer connection started to be re-organised in the mid-1990s around commercial goals (commercial web browsers for internet access; the growth of corporate intranets for logistics and the like; advertisers’ exploitation of the computer cookie to track consumers). Fourthly, this commercialization was profoundly accelerated in the early years of this century when online platforms emerged as interfaces that could reconfigure countless social and economic interactions by requiring them to be transacted on those platforms, making possible a massive increase in the capture and processing of data gathered from those platforms. It took a while, fifthly, for the business models of major platforms to fully adapt to the implications of this structure, but in time this happened, as Facebook refocussed its business model around the extraction of data from user behaviour as the fuel for more efficient targeting of ads sent to users while on the platform. But Google had already been moving for some time in the direction of monetizing the data gained from tracking its search engine users (Zuboff 2019).

Today we are accustomed to all this and to the consequences of two further steps that unfolded more recently. One is the normalization, throughout the last decade, of data extraction from user tracking as the very basis of social and economic life generally, and increasingly as the basis for government functioning too (as I write, the UK government has adopted the slogan that “data is the new air”: BBC Radio 4, Today programme, 22 June 2021). The other is the increasing extension of data tracking to inanimate objects (the Internet of Things).

The details of these seven overlapping changes are complex, but taken together, they amount, I suggest, to a turning-point in history, indeed a moment of choice when humanity must decide whether to continue down the same path (of reconfiguring social life for corporate, not human, gain) or whether we take a step back and consider humanity’s options for moving in a different direction.

Two alternative theories of what is going on with data

To help us assess our options in response to this huge challenge, theoretical frameworks are helpful. There are many options, but for reasons of space, let me here concentrate on just two: the concepts of surveillance capitalism and data colonialism.

Zuboff’s concept of surveillance capitalism is very well-known, and it is a powerful integrative framework. Zuboff (2019) sees at work, just beneath the surface of our contemporary uses of technology and our restless desire to extract data from everything, the emergence of a new form of capitalism focussed on exploiting what she calls “human surveillance assets” in a new mode of accumulation. A great strength of this model is to emphasise the ruthless targeting of personal data by particular corporations such as Google and Facebook, and their resulting grand ambition to influence human behaviour through the medium of the vast stores of data about past behaviour that they have gathered. By contrast, the concept of data colonialism which I have myself developed with the Mexican/US author Ulises Mejias is less well-known: it shares Zuboff’s sense of how problematic the unconstrained extraction of personal data is, but it reads that extraction not just as a continuation and extension of capitalism, but as a new stage in the development of colonialism. This new, data-focussed colonialism appropriates not land, minerals and bodies, but human life itself, making possible a new future capitalism that exploits human life without limit (Couldry and Mejias 2019). The theory of data colonialism, in other words, reads what is going on with data not just in terms of recently emergent techniques of capitalism, but in terms of the 500-year-old relations between colonialism and capitalism, which started with the appropriation of the vast mineral wealth of the Americas by Spain and Portugal but which also made possible the very emergence of capitalism.

Let me say a little more about how what is going on with data and digital platforms might be connected not just with capitalism (as it quite obviously is), but also with colonialism (here I am condensing hugely from the argument of my book on this topic: Couldry and Mejias 2019). Crucially, this does not mean claiming a one-to-one correspondence between everything that happened in historical colonialism and everything that might happen with ‘data colonialism’, assuming we are correct in our diagnosis and it continues to unfold over the centuries that historical colonialism took to unfold. Such an over-ambitious comparison would be absurd: ‘colonialism’ is too large a historical object to treat in that way. Rather, we are making a much more specific, point-to-point comparison, comparing today’s moment, when data colonialism is starting to emerge through a myriad of extractive data practices, to the initial act of historical colonialism, when a small number of powers began to appropriate the world’s assets for their exclusive benefit (its territory, minerals, agricultural produce, and the, normally non-white, bodies that were conveniently available to extract that value).

At the core of historical colonialism’s beginning was an act of appropriation. This basic fact has been seen most clearly perhaps by indigenous peoples, such as the North American First Nations writer Leanne Betasamosake Simpson, who has written that “Colonialism . . . didn’t seem complicated anymore . . . It seemed simple. Colonizers wanted the land. Everything else, whether it is legal or policy or economic or social, whether it was the Indian Act or residential schools or gender violence, was part of the machinery that was designed to create a perfect crime – a crime where the victims are unable to see or name the crime as a crime” (Simpson 2017: 15). This approach sees historical colonialism, even with its vast other histories of racism and violence, as most fundamentally a landgrab or Landnahme (Dörre, Lessenich and Rosa 2015). It follows that if data colonialism is a new version of this fundamental movement of colonialism, then what is going on with data is fundamentally a new form of landgrab. The target of that landgrab is us: human beings and the open-ended stream of human experience and activity that, through data, has become convertible into economic value for the first time. This claim might seem dramatic, but, seen from the perspective of the last 500 years of colonialism, and not just the last 40 years of the internet or even the two and a half centuries or so of capitalism, this claim makes clear sense, as part of an unfolding pattern.

At the start, colonialism relied on cheap land, or more specifically on claims that the land it seized in what were to become the colonies was just there for the taking, that is, ‘free’ to take because it belonged to no one (eventually this was codified in the legal concept of terra nullius: for the parallel here with data extraction, see Cohen 2019: 50). Underlying this claim that the land was just there to be seized there came also, from early on in colonialism’s history, a denigration of those human beings who were in fact already occupying that land and had done so for centuries: their rights to the land were completely ignored. The working of the land in historical colonialism also involved access to cheap labour, in particular slave labour: indeed the availability of that labour, which ensured that value was extractable from such difficult land, was crucial to colonialism’s economic success.

Today’s new stage of colonialism seizes a different asset, data, but this too must be cheap. To ensure that this is the case, many things converge: the discourse of countless business and multilateral organizations that our personal data is just the ‘exhaust’ ‘naturally’ given off by human activity as it occurs online; the availability of favourable legal environments that have not, until now, significantly challenged the continuous extraction of data from human beings; the tacit support of many governments around the world who see opportunities themselves to benefit from corporate data extraction as the fuel for their own strategies of intensified population governance; and finally the fact that, because data is anything but naturally occurring, it requires vast infrastructures of automated processing to generate value from it, infrastructures whose ownership and control lie almost entirely in the hands of large corporations.

How to confront data extraction and its consequences for humanity?

Whether one finds the theory of surveillance capitalism or that of data colonialism more useful will inevitably affect one’s interpretation of what actions are possible in response to data extraction. The appropriate response to surveillance capitalism would seem to be to rein in the ‘rogue’ digital platforms such as Facebook and Google, in the hope that, by doing so, a more modest and less ambitious form of capitalism can be restored. As Zuboff puts it, “raw surveillance capitalism is as much of a threat to society as it is to capitalism itself” (Zuboff 2019: 194).

But the framework of data colonialism affords no such easy exit, since it argues that the problem is the whole direction of capitalism in contemporary societies, both in “the West” and “the East”, including its instrumentalized uses of technology that treat human life in general as an object of extraction rather than as something to value in itself. And indeed problematic forms of data extraction can be found much further afield than social media platforms and search engines: in the massive expansion of data-driven surveillance of workers in the workplace (Levy 2015); relatedly, in the growth over four decades of logistics, which, by tracking things ever more closely, inevitably has indirect implications for the tracking of workers’ bodies (Cowen 2014); in the uses of data and algorithmic processing in personal finance markets and social welfare (Fourcade and Healy 2013; Eubanks 2018); and in many other sectors too, from agriculture to health and education to international development.

If we are to see clearly what is problematic in all those sectors, we must go further than lamenting the consequences for individual privacy of social media platforms, important though that is. We must also raise questions about the implications for human dignity, and indeed the very basis of human freedom, of treating human life as just an input to economic production (Couldry and Mejias 2019: chapter 5). This is where the theory of data colonialism connects clearly with the broader question of the instrumentalization of human life through technology about which Pope Francis has written, since both seek to confront that instrumentalization.

We can distinguish two dimensions of that instrumentalization operating in the forms of data extraction that underlie most contemporary media. First, there are harmful uses of data for commercial and state purposes; and second, there is the underlying threat to human freedom from the basic fact that human life is now being continuously tracked. Both problems are aspects of a wider instrumentalization of the world, and specifically the world of human meaning-making, for technology and power. And as such this instrumentalization fits very clearly into a longer history of the problematic relations between power and knowledge that is at the very heart of the project of colonialism from its beginnings. The Peruvian sociologist Aníbal Quijano was the first to clearly identify this through his concept of coloniality (in “Colonialidad y Modernidad/Racionalidad” 1992, translated as Quijano 2007).

Although developed for an era before the expansion of the internet as a tool of everyday life, and long before the massive expansion of data extraction with which we are all now familiar, Quijano’s concept of coloniality (colonialidad) has great relevance to the age of Big Data. Let me explore it in a little more detail. Coloniality for Quijano refers to the ways of thinking and practices of knowledge production which continued throughout colonialism and remain in existence even after the formal political structures of imperialism and colonies have ended. Coloniality as a way of thinking is inseparable from – indeed it helped to form – what we, separately, know as ‘European modernity/rationality’ (Quijano 2007: 171): what we know as modernity is the world conceived in coloniality’s self-image. But what exactly did, and does, coloniality involve? It was a form of domination working at many levels of knowledge production: the level of “specific beliefs, ideas, images, symbols or knowledge” which were regarded by colonizers as irrelevant to their mode of governing, and the expropriation “from the colonized” of knowledge that was regarded as useful. It was also, at another level, domination “over the modes of knowing, of producing knowledge”, in other words the imposition of new dominant models of what counted as knowledge. And finally, it involved the colonizers imposing their own “mystified image of their own patterns of producing knowledge and meaning”, of which the language of European modernity is part (Quijano 2007: 169). It is not hard to see such patterns repeating themselves in the discourses about Big Data and digital transformation that are so common today.

Equally interesting is Quijano’s vision of how to confront the centuries-old process of coloniality, as it shapes the domains of knowledge and science: not by rejecting rationality or knowledge itself (why give up on that ideal, why give up on the possibility of living our lives more in accordance with our reasoning?), but rather by offering positively another vision of “rationality”. If one vision of rationality is to organise society, functionally, in technology’s interests and in the interests of the extraction of profit through technology, it is only one of many possible visions. Another vision, according to Quijano, is to think of society as a different type of “totality”, adopting a concept of totality that “not only does not deny, but requires the idea of an ‘other’ – diverse, different” (Quijano 2007: 177). This is a vision of potential human uses of technology that give more respect to the diversity of human purposes for living on the earth and benefiting from its resources: adopting such a more respectful vision might indeed be a true “digital transformation”, in Thomas Siebel’s phrase. Such an empowering vision would however surely be in conflict with today’s Big Data vision of continuous data extraction from human bodies and minds – in fact from everything – to provide the fuel for ever greater corporate efficiency: a shallow digital transformation that ends up eroding the very basis of human freedom rather than augmenting the capacities of human beings.


It is Quijano’s vision, in some form, that we need today if we are to get the full measure of the challenges facing human societies, digital societies, through the data extraction processes that underlie our changing media, and so many other aspects of society and the economy. The problem is not that data-extracting processes are unfamiliar: in some form they are becoming ever more familiar, although many aspects of them remain opaque and hidden. The problem is that, through the myriad forms of data extraction which have become normal aspects of daily and business life over the past three to four decades, something much larger, and more dangerous, is being actualised: a corporatization of social life, and its reconfiguration for ends which are not social, but commercial. This is not, of course, to deny that commercial activities are a valid part of the social world and of human life, but rather to insist that humanity has not agreed – and may never agree – to convert the full, indeed inexhaustible, potential of human life into a mere matrix of possibilities for the generation of profit. Again, profit itself is not the issue. The issue is that reconstructing the social world for profit, for corporate ends, and for the specific end of optimally extracting data from it, whatever the costs to our independence and autonomy, is necessarily a reduction of human potential. This reconstruction must therefore be resisted with all our efforts. We need a vision that imagines not only rejecting data colonialism in all its many forms, but also, more positively, imagines ways of humanly connecting without the costs that today’s digital world appears destined to impose upon us – unless, that is, we start to resist.



This chapter draws heavily on my joint work with Ulises Mejias since 2016, and especially our book The Costs of Connection. That said, the responsibility for this particular version is mine.



Agre, P. “Surveillance and Capture: Two Models of Privacy”. The Information Society 10, no. 2 (1994): 101-127.
Cohen, J. Between Truth and Power. New York: Oxford University Press, 2019.
Couldry, N. and Mejias, U. The Costs of Connection: How Data is Colonizing Human Life and Appropriating it for Capitalism. Stanford, CA: Stanford University Press, 2019.
Cowen, D. The Deadly Life of Logistics. Minneapolis: University of Minnesota Press, 2014.
Dörre, K., Lessenich, S. and Rosa, H. Sociology, Capitalism, Critique. London: Verso, 2015.
Eubanks, V. Automating Inequality. New York, NY: St. Martin’s Press, 2018.
Fourcade, M., and Healy, K. “Classification Situations: Life-Chances in the Neoliberal Era”. Accounting, Organizations and Society 38, no. 8 (2013): 559-572.
Pope Francis. Encyclical on Climate Change & Inequality: On Care for Our Common Home. Brooklyn: Melville House, 2015.
Levy, K. “The Contexts of Control: Information, Power, and Truck-Driving Work”. The Information Society 31, no. 2 (2015): 160-174.
McKibben, B. “The Pope and the Planet”. New York Review of Books, 16 August 2015: 40-42.
Quijano, A. “Coloniality and Modernity/Rationality”. Cultural Studies 21, no. 2-3 (2007): 168-178.
Siebel, T. Digital Transformation. New York: Rosetta Books, 2019.
Siggelkow, N. and Terwiesch, C. “The Age of Continuous Connection”, Harvard Business Review, May-June, 2019.
Simpson, L. Betasamosake. As We Have Always Done. Minneapolis: University of Minnesota Press, 2017.
Wiener, N. Cybernetics [composite edition]. Mansfield Center, CT: Martino Publishing, 2013.
Zuboff, S. The Age of Surveillance Capitalism. London: Profile Books, 2019.