The future of work in the age of digital revolution

Stefano Zamagni[1]

1. Introduction

In this paper I endeavour to throw some light on the ethical and social consequences associated with the current rapid diffusion of the so-called convergent technologies. It is sloppily reductive to identify the Fourth Industrial Revolution simply with a new technological paradigm. To emphasise this dimension alone, as unfortunately most of the literature on the subject does, prevents us from grasping the elements of rupture on the social and cultural fronts which this radical phenomenon is revealing. A profound disruption is under way in the workplace and in the economy at large, as the relentless march of technology has brought us to a point where machines and software are not just outworking us but starting to outthink us in more and more realms. Such a reductive view does not allow us to devise strategies of intervention effective enough to face the contemporary challenges.

Right from the beginning, I would like to clarify the perspective from which I am going to investigate these realities. The present author sees himself neither in the position of the laudatores temporis acti, the so-called techno-pessimists, nor in that of the uncritical admirers of the “magnificent progressive destiny” of humanity. If those who praise the Fourth Industrial Revolution are wrong, those who disparage it are not right either. In fact, I regard the present techno-scientific trajectory as something positive in itself and, moreover, unstoppable. However, it is something that has to be governed with wisdom (that is, with reasonableness) and not only with competence (that is, with rationality).

It is known that networked AI (Artificial Intelligence) will amplify human effectiveness but also threaten human autonomy, agency and capabilities. Computers might match or even exceed human intelligence on tasks such as complex decision-making, reasoning and learning, speech recognition and language translation. Yet most experts, regardless of whether they are pessimistic or optimistic, express concern about the long-term impact of these new tools on the essential elements of being human. Automation and AI will affect tasks in virtually all occupational groups in the future, but the effects will be of varied intensity, and drastic only for some.[2]

Before proceeding, I deem it necessary to clarify the notion of development, a word very much overused today. In its etymological sense, development indicates the action of freeing somebody from the entanglements, traps and chains which inhibit his/her freedom of action.[3] It is above all to St. Paul VI that we owe the emphasis on the connection between development and freedom: development as a process of expansion of the real freedoms enjoyed by human beings (see his encyclical letter Populorum Progressio issued in 1967). In biology, development is synonymous with the growth of an organism. In the social sciences, on the other hand, the term indicates the passage from one condition to another and, therefore, refers to the notion of change (as when one says: that country has passed from the state of an agricultural society to an industrial one). In this sense, the concept of development can be associated with that of progress. Bear in mind, however, that the latter is not a merely descriptive concept since it involves an implicit yet indispensable value judgement. In fact, progress is not simply change but rather change towards the better, and so it implies an increase in value. It follows that the judgement of progress depends on the value which one intends to take into consideration. In other words, a valuation of progress and, therefore, of development requires the determination of what it is that has to proceed towards the better. Even if endowed with artificial intelligence, robots are not, and never will be, adequate to such a task.

The central point to be underlined is that development cannot be reduced solely to the dimension of economic growth – still measured today by that indicator well known to all, the GDP (Gross Domestic Product). In fact, growth is only one of the three dimensions of the notion of development. The other two dimensions are the socio-relational and the spiritual ones. Notice that the three dimensions stand in a multiplicative relation, not an additive one. This means that it is not acceptable to sacrifice, say, the socio-relational dimension in order to increase growth – as is happening today, unfortunately. In a multiplication, if even a single factor is zero, the whole product becomes zero. This is not the case in a sum, where the annulment of one addend does not cancel the total. This, by the way, is the central difference between the notion of total good (the sum of individual goods) and that of common good (the product of individual goods). Strictly speaking, it is impossible to speak of integral and inclusive growth, whereas one can and ought to speak of integral and inclusive development. Basically, integral human development is a transformational project which has to do with the change in people’s lives in the direction of their betterment. Growth, on the other hand, is not transformative in itself. It is for this reason that, as history teaches us, there have been examples of communities or nations which declined even though they were growing. Development belongs to the order of ends, whereas growth, which is an accumulative project, belongs to the order of means.
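The arithmetic behind this contrast can be made explicit. In a minimal formalisation – the symbols $g$, $r$ and $s$, for the growth, socio-relational and spiritual dimensions, are introduced here purely for illustration – the common good behaves like a product and the total good like a sum:

$$\text{common good} = g \times r \times s, \qquad \text{total good} = g + r + s.$$

If the socio-relational dimension collapses ($s = 0$) while growth continues ($g, r > 0$), the total good remains positive ($g + r$), but the common good is annihilated: no surplus in one dimension can compensate for the zeroing of another.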

2. On some relevant “res novae” of the digital revolution

The promise of an empowerment, and hence a transformation, of both man and society which is made today by the converging technologies of the NBIC (Nanotechnologies, Biotechnologies, Information, Cognitive Sciences) group explains the extraordinary attention that technoscience is receiving in a number of areas, from the cultural to the scientific, from the economic to the political. The goal being pursued is not only the empowerment of the mind, nor just the enlargement of diagnostic and therapeutic capacity across the whole spectrum of pathologies, nor, still less, merely the improvement of the means of controlling and manipulating information. What is intended is the artificialization of human beings and, at the same time, the anthropomorphization of the machine. It is to Julian Huxley that we owe the invention of the word “transhumanism” to describe a future world in which, instead of oppositions between human beings, we shall have a continuous hybridisation of the human.

As a global movement, transhumanism started in Silicon Valley – where Singularity University was established in the last decade – thanks to the efforts of Google and Apple, whose aim is to construct an “enhanced human being” with increased abilities. It is on this point that it is urgent today to lift the veil of silence, opening a high-profile debate. In fact, the question concerns the anthropological dimension. Two conceptions of the human confront each other: that of the human-person and that of the human-machine. The latter is gaining ground over the former. That explains, among other things, why the ideal of the human-machine is today causing a real emergency in the area of education: training/instruction is replacing education. The human-machine “seeks” instruction; education is of no use to it. The reference here is to the theory of equilibration, according to which the engine of the mental development of the child and of the young person is a process of cognitive adaptation to impulses coming from outside them.[4]

Nothing could be more mechanistic. This is a vision inspired by the principle of homoeostasis, the same one that is at the basis of cybernetic theories. A worrying signal of this reductionism is the gradual disappearance of the figure of the educator. The master-teacher is reduced to the role of facilitator or mediator who must not educate but only assist in the process of self-learning or self-training, since only what one does by oneself is held to be of value. This is one of the most devastating consequences of libertarian individualism, of which I shall speak in the final section.[5] Not only that, but the professed anti-authoritarianism – according to which it is not permitted to condition or guide the “free” choices of the subject – actually hides an authoritarian vision: only the expert in self-training may speak about education. But didactic methods do not lead by themselves to knowledge, as one might believe. It is of interest to note that we have abundant evidence today of the dependence of an increasing number of young people on the iPhone – depressive syndromes and decreased capacity for concentration, especially among those young people for whom the smartphone has become a sort of new “drug” causing the FOMO syndrome (“Fear of Missing Out”).[6] So much so that two of the largest American investment funds (CalSTRS, the pension fund of the Californian teachers, and the Jana Partners hedge fund) have publicly invited Apple to modify the intensity and mode of use of the new device and to prepare measures specifically directed at teachers.

Another important novelty associated with the phenomenon at stake is the new method of organising production known as Industry 4.0. This is an expression coined by the German firm Bosch and presented for the first time at the Hannover Trade Fair in 2011. Artificial intelligence, robotics, genomics and information technology are literally revolutionising both the means of production and the meaning of human work. The fusion between the real world of plants and the virtual world of information, between the physical world of people and the digital world of data, has given birth to a mixed cyber-physical system which aims at solving problems that the models of the past were unable to solve: reducing waste; gathering information from the working process and reprocessing it in real time; anticipating design errors by virtualising production; fully valuing the creativity of the worker; incorporating the specific requests of the client at all stages of the production process.[7]

Clearly, in order for the Cyber-Physical System (CPS) – the heart of the Industry 4.0 project – to produce the expected results, it is essential that the firm undertake a radical organisational innovation, abandoning the obsolete Ford-Taylorist model based on hierarchy and on an excessive specialisation of tasks. It is of little use to acquire the new machines and to activate their technological operating systems unless a change of management style is implemented, facilitating the development of a participatory culture among all those who work in the firm. That is why we are observing today the appearance of new professional figures such as the “Digital Innovation Officer”, responsible for digital innovation; the “Technology Innovation Manager”, whose role is to facilitate the spreading of innovation; the “Data Protection Officer”, in charge of the protection of data and privacy; the “Coding Expert”, who instructs the machine to carry out a specific task through a programming language (coding, to be precise); and others besides. These professions are becoming indispensable to manage the changes imposed by the use of “big data”, the “internet of things” – an expression coined by Kevin Ashton in 1999 – and, in the near future, the “internet of beings”, which will represent the third phase of the life of the net. It is the generalized lack of such figures which explains some paradoxes that would otherwise be inexplicable.

An empirical confirmation of this observation comes from a recent OECD enquiry (Productivity Trends, Paris, 2014) which serves to confirm the so-called “Solow paradox”: notwithstanding the enormous increase in the power of computers and digital technologies, overall productivity in the last forty years has not increased as much as expected. Using the data of the G7 countries, OECD researchers calculated that in the two decades 1970-1990 productivity in the seven most advanced countries increased by an average of 2.6% per year; in the 1991-2013 period, by contrast, marked by the general diffusion of the new technologies into the business world, productivity in the same countries rose by an average of 1.7% per year. A few explanations have been suggested to make sense of the paradox. The most plausible one refers to the so-called “Great War management problem”, based on the following analogy.

On the eve of the First World War, there was tremendous progress in military technology while military strategy remained basically the same as that which had prevailed at the time of the Franco-Prussian War of 1870. Mutatis mutandis, the analogy with the present situation is perfect. The new technologies have been adopted so rapidly as to render business strategies irremediably obsolete. Bear in mind that what has happened corresponds to the phenomenon of the “displacement of ends” which takes place in bureaucracies. Some time ago the American sociologist Robert Merton explained that rules and procedures initially meant to prevent administrative chaos become ends in themselves. People work as if sticking to the rules were an end in itself and not a tool for the goal which the business is supposed to be pursuing, i.e. the creation of shared value. In fairness, it should be added that the decrease in observed productivity is, at least in part, due to the statistical methods still in use to measure GDP. Such methods continue to reflect the transition from agriculture to industry and so are inadequate to capture the misalignment between output increases and the improvement in human well-being brought about by the new technologies.[8]
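To get a feel for what the gap between the two OECD averages quoted above implies over time, a back-of-the-envelope compounding exercise helps (a rough sketch only: the window lengths are approximations, and the code below is used merely as a calculator):

```python
# Cumulative productivity gains implied by the OECD averages cited above,
# treating the two windows as roughly 20 and 23 years long (illustrative only).
g_pre, years_pre = 0.026, 20    # 1970-1990: +2.6% per year
g_post, years_post = 0.017, 23  # 1991-2013: +1.7% per year

cum_pre = (1 + g_pre) ** years_pre - 1
cum_post = (1 + g_post) ** years_post - 1

print(f"1970-1990: about +{cum_pre:.0%} cumulative")   # ~ +67%
print(f"1991-2013: about +{cum_post:.0%} cumulative")  # ~ +47%
```

Despite the longer window, the digital-era period delivers roughly twenty percentage points less cumulative productivity growth: this is the magnitude the “Solow paradox” asks us to explain.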

The principal factor responsible today for the increase in productivity is digital fluency, consisting of the body of new skills made possible by the introduction of the new technologies. This is a metaskill which goes beyond mere digital literacy, that is, the simple knowledge of programmes and applications. When, in 2001, Marc Prensky drew the distinction between digital natives and digital immigrants, he could certainly not have imagined that it would become obsolete within the span of the following decade. Today, the important distinction is not between digital firms and non-digital ones but between firms which are digitally fluent and firms which are not – and which, therefore, merely survive. The former, in fact, have workers who are able to integrate marketing within design, making use of the feedback from sales, something which translates into lower costs and higher levels of competitiveness.

It follows that our societies are today facing a new division in the labour market, between the digital workforce and the non-digital workforce. I will deal with the anthropological and economic consequences in the next sections. Here I shall limit myself to reiterating the urgency of updating the managerial culture inherited from the recent past, a culture incapable of bridging the profound gap between the logic of participation demanded by digital fluency, which encourages horizontal collaboration indifferent to hierarchical relations, and the model still dominant in businesses, which privileges linear processes and hierarchical control of a bureaucratic nature. Thus it happens that firms which started off digital, and have not been subject to the limits of the organisational models of the past, end up enjoying an advantage over digitally immigrant ones in employing a digital workforce. Consider, for instance, what could happen with the large-scale employment of 3D printers, applied not only to the production side but extended also to the consumption side. As is well known, the best-known model of 3D printer – the so-called RepRap (Replicating Rapid Prototyper) – is self-replicating and free: it can reproduce its own parts (those made of plastic), and since the diagrams are available to all with a few clicks, anyone can in time contribute improvements and share them with others. Potentially, a 3D printer could enable firms to produce batches of a single unit.[9]

The printer’s way of producing is the opposite of the careful operations of the craftsman. This new way proceeds through “additive manufacture”: the three-dimensional model is broken down into very thin horizontal layers, so that the object is constructed from bottom to top, like a superimposition of plates. Work no longer encounters the resistance of wood or stone. The result is that work is no longer a place but a flow, an activity which can be carried out in different places. Creativity and innovation are required from all the workers and not just from those in managerial positions. The American company AT&T has introduced a system of “gathering ideas” within the company. It works as follows: arranged in groups, the workers present the managers with their own projects, as if they were before a “venture capitalist” who has to decide whether or not to finance them. The workers are thus encouraged to take on at least part of the risk – something that gives rise to new problems as far as governance and ownership arrangements are concerned. It is worth recalling that Adrian Bowyer – the inventor of the RepRap – intended to bring about with his machine what Marx believed he could achieve with a political revolution. “The Rep-Rap – he wrote – will enable a revolutionary appropriation of the means of production on the part of the proletariat”, because the consumer can also become the producer. This is the figure of the so-called prosumer. In every neighbourhood – wrote Jeremy Rifkin – 3D printers will be found with higher performance than those which a single individual could acquire and handle, and, in these places, neighbours will be able to help themselves manufacture everything useful for domestic life according to their own plans. Naturally, the near future will show whether this is realistic or merely utopian.

An interesting historical precedent helps us understand the importance of what is now happening. In Book V of the Wealth of Nations (1776) Adam Smith wrote: “The man who passes his whole life in carrying out a few simple tasks with results that are about the same or almost has no opportunity to exercise his intelligence or inventiveness in finding expedients which can overcome difficulties which he never encounters”. This is why Smith advocated a decisive intervention by the government to impose a system of compulsory education for all, as a way to counter the dulling of the workers’ faculties caused by that process. The coming of the First Industrial Revolution, however, saw another way of conceiving and making use of the division of labour. The great economist David Ricardo and, above all, the English engineer and mathematician Charles Babbage were its creators. The idea from which they started is that, since individuals vary in abilities and personal gifts, each is the bearer of a specific comparative advantage in the world of work. The division of labour and the ensuing specialisation then become the practical instruments enabling society to draw the maximum advantage from the existence of these different abilities among individuals. Whereas for Smith the division of labour is the “cause” of the differences in personal abilities, for Ricardo and Babbage the opposite is true: it is these differences which make the division of labour expedient. Dazzled by the ideas of G.W. Leibniz, who in the second half of the 17th century had already gone so far as to invent a calculating machine, Babbage claimed at the beginning of the 19th century that, in time, a superhuman calculating reason would be achieved. In relation to the argument above, an interesting analysis of the causes of declining productivity growth is the one given by R. Gordon in the paper “Why has economic growth slowed when innovation appears to be accelerating?” (NBER, April 2018). The author suggests that a major cause is the slowdown in the rate of increase of educational attainment, resulting from the interplay of demand and supply factors, including the flattening of the college wage premium and the rising relative price of college education.

It is easy to grasp the “pedagogic” implications of this reversal of the causal relation: whereas “Smith’s worker” has to invest in continual education so as not to lose his own abilities (and ultimately his own identity) – so that the division of labour is seen as an opportunity to favour and incentivise the acquisition of new knowledge – “Babbage’s worker” has no motive of this kind, since the division of labour serves precisely to minimise the need for learning on the part of the worker before he enters the productive process. The more rigid the division of labour, the more limited the content of knowledge needed for each job, and so the less there is to learn before beginning it. Hence, whereas Smith’s workers “grow” alongside their work, Babbage’s workers are subject to a terrible threat, that of seeing themselves replaced at any moment, since they have been rendered highly replaceable. It should be noted that, whereas during the Fordist period Babbage’s conception was the dominant one – and it could not be otherwise – the post-Fordist period “vindicates” Smith: both process and product innovations increasingly demand the inventive capacity of all workers. This is the basic idea underlying the proposal by B. Robertson[10] to accelerate the transition from the Tayloristic model of work organization to the Holacratic one.

A recent phenomenon seems to confirm, empirically, the argument above. It has to do with the observed decline of research productivity. A key assumption of many endogenous growth models is that a constant number of researchers can generate constant exponential growth. This would be true if the total factor productivity of the so-called idea-production function were constant. Instead, research productivity is falling sharply everywhere. Taking the US aggregate number as representative, research productivity falls by half every 13 years: ideas are getting harder and harder to find. In other words, to sustain constant growth in GDP per person, the US must double the amount of research effort devoted to searching for new ideas every 13 years to offset the increased difficulty of maintaining exponential growth. Several explanations of declining research productivity have been offered.[11]
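A stylized idea-production function makes this arithmetic explicit (the notation is introduced here purely for illustration). Let $A_t$ be the stock of ideas, $R_t$ the number of researchers, and $\theta_t$ research productivity, so that

$$\frac{\dot{A}_t}{A_t} = \theta_t R_t.$$

If $\theta_t$ halves every 13 years, as the US data suggest, then keeping the growth rate $\dot{A}_t/A_t$ constant requires $R_t$ to double every 13 years: $\theta_{t+13} = \tfrac{1}{2}\theta_t$ forces $R_{t+13} = 2R_t$. Exponential growth is sustained only by exponentially growing research effort.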

3. Implications of automation and AI for work

In the present section, I consider the implications of automation and AI for the demand for labour, wages and employment. First of all, it is proper to specify that the task-based approach emphasizes the displacement effect that automation creates as AI replaces labour in tasks it used to perform. It is certainly true that this “displacement effect” tends to reduce the demand for labour and wages, but it is counteracted by a “productivity effect”, resulting from the cost savings generated by automation, which increases the demand for labour in non-automated tasks. The productivity effect is complemented by additional capital accumulation and by improvements of existing machinery, both of which further increase the demand for labour. However, these countervailing effects are incomplete. Even when they are strong, automation increases output per worker more than wages and reduces the share of labour in national income. Whence the urgency to create new labour-intensive tasks, so as to raise the labour share and counterbalance the impact of automation.[12] Indeed, increasing inequalities might be a more problematic development than the pure destruction of jobs. Several studies report a decrease in middle-wage routine jobs and a polarization of employment structures. Needless to say, regulation should address such developments.
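One stylized rendering of the task-based approach may help fix ideas (the formalisation below is mine, a simplified sketch in the spirit of the models the text describes, not a reproduction of any particular one). Suppose output combines a unit continuum of tasks $i \in [0,1]$, with tasks below a threshold $I$ automated and performed by capital, and the rest performed by labour:

$$\ln Y = \int_0^1 \ln y(i)\,di, \qquad y(i) = \begin{cases} k(i), & i \le I \quad \text{(automated)} \\ \gamma(i)\,l(i), & i > I \quad \text{(labour)} \end{cases}$$

With this Cobb-Douglas aggregation, the labour share of income is simply $1 - I$. An advance in automation (a rise in $I$) then has the two effects named above: it evicts labour from the newly automated tasks and mechanically lowers the labour share (the displacement effect), while the cost savings raise $Y$ and hence the demand for labour in the remaining tasks $i > I$ (the productivity effect). The creation of new labour-intensive tasks amounts to extending the range of tasks in which labour has the advantage, pushing the labour share back up.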

The slow growth of high-paying jobs in Western countries since 2000 and rapid advances in computer technology have sparked fears that human labour will eventually be rendered obsolete. Yet while computers perform cognitive tasks of rapidly increasing complexity, simple human interaction has proven difficult to automate. It is a fact that the labour market increasingly rewards social skills. Since 1980, jobs with high social skill requirements have experienced greater relative growth throughout the wage distribution. Moreover, employment and wage growth has been strongest in jobs that require high levels of both cognitive skills and social skills.

D. Deming[13] shows that high-skilled, difficult-to-automate jobs increasingly require social skills. The reason is that skill in human interaction is largely based on tacit knowledge and, as argued by D. Autor,[14] computers are (still) very poor substitutes for tasks where programmers don’t know the rules. Autor refers to this as “Polanyi’s paradox”, after the philosopher Michael Polanyi who observed that “we can know more than we can tell”. Human interaction requires a capacity that psychologists call “theory of mind” – the ability to put oneself into another’s shoes – a capacity that the new machines do not possess.

Three important facts about western labour markets deserve attention. First, employment growth in social skill-intensive occupations has occurred throughout the wage distribution, not just in management and other top-paying jobs. Second, there exists a growing complementarity between cognitive skills and social skills. Since 1980, employment and wage growth has been particularly strong in occupations with high cognitive and social skill requirements. In contrast, employment has fallen in occupations with high math but low social skill requirements, suggesting that cognitive skills are increasingly a necessary but not sufficient condition for obtaining a high-paying job. Third, measures of an occupation’s social skill intensity and of its routineness are strongly negatively correlated. In his revelatory and outspoken book, David Blanchflower[15] explains why today’s post-recession economy is vastly different from what existed before. He calls out leaders and policy makers for failing to see the Great Recession coming, and for their continued failure to address one of the most unacknowledged social problems of our times. The author shows how many workers are underpaid or have simply given up trying to find a decent-paying job, how growth has not returned to pre-recession levels despite rosy employment indicators, and how general prosperity has not returned since the crash of 2008.

Abundant empirical evidence suggests that computerization and AI actually increase the return to the complementarity between cognitive and social skills. Why are social skills so important in the modern labour market? The reason is that computers are still very poor at simulating human interaction. Reading the minds of others and reacting to them is an unconscious process, and this skill in social settings has evolved in humans over thousands of years. Human interaction in the workplace involves team production, with workers playing off each other’s strengths and adapting flexibly to changing circumstances. Such non-routine interaction is at the heart of the human advantage over machines.

Having recognized that, the question arises: where do social skills come from, and can they be affected by education or public policy? J.J. Heckman et al.[16] find that placing special emphasis in curricula on developing children’s skills in cooperation, resolution of interpersonal conflicts and self-control is an important factor in generating social skills. Three complementary approaches to countering the developments described above can be envisioned. First, since the new technologies generate net gains for society as a whole, the winners could in principle compensate the losers and still be better off. Second, technical progress should be steered so as to minimize workers’ losses. Third, governments should intervene to thwart the rise of monopolies that extract rents from society. It should be noted that measures of this type are relevant examples of what defines a transformational strategy, quite different from a merely reformist strategy.[17]

Today it is accepted that automation will bring neither apocalypse nor utopia, but instead both benefits and stresses alike. Such is the ambiguous and sometimes disembodied nature of the “future of work” discussion. So far the focus has been on backward-looking analyses of the impact of AI over the years 1980 to 2017. The time has come to consider a forward-looking approach. This is the suggestion coming from the recently established Stanford Institute for Human-Centered AI, a new interdisciplinary think tank whose stated mission is to advance AI research, education, policy and practice to improve the human condition – not to subvert it. If it is true that almost no occupation will be unaffected by AI, it is also true that the effects will be of varied intensity and drastic for only some. Routine, predictable physical and cognitive tasks will be the most vulnerable to AI in the coming years; the effects will likewise vary across regions. Whence the urgency to promote a constant-learning mindset: investing in reskilling incumbent workers; expanding accelerated learning; making skill development financially accessible; fostering uniquely human qualities.[18]

4. Shaping the new world of work: ethical aspects and policy measures

It is in the area of public ethics that the consequences of the rapid diffusion of the new technologies are posing the most serious challenges, first of all that of understanding how the digitalisation of our lives is succeeding in changing the way in which we perceive them. Yet it is precisely on this front that we observe a kind of fin de non-recevoir on the part of high culture, both scientific and philosophical. I wish to say something about two aspects only of this problem. The first concerns the question of trust: can artificial intelligence create the trust which is necessary for the correct functioning of our market economies? The second concerns the problem of responsibility, of what it means to be responsible in the era of digitalisation. Are “smart machines” moral agents and therefore responsible agents? Will it be algorithms that rule us in all those cases where people are not able to have a full understanding of the questions on which they have to pass judgement? I shall begin with the first aspect.

It is generally agreed that trust is one of the decisive factors in securing the advantages of collective action and so in sustaining the process of development. This is easily explained. All the exchanges which take place in the market are embodied in contracts: explicit or implicit; spot or forward; complete or incomplete; contestable or not. Except for spot contracts, all the other types need some kind of mechanism for them to be enforced. We know that the enforcement of contracts – what to do so that the terms and obligations of the contract are honoured – depends, in different ways and to various degrees, on legal norms, on the social norms of behaviour prevalent in a particular community, and on mutual trust. So, when the first two factors are not sufficient to ensure the enforcement of contracts, trust becomes necessary for the market to function. This is especially true in our days, given that globalisation and the Fourth Industrial Revolution have cut off the traditional bonds (of blood, religion and tradition) which functioned in the past as more or less perfect surrogates for trust.[19] The author cited there stresses why, to preserve the dignity of work in the time of morphogenetic hybridization of work, it is necessary for labour contracts and the organization of work to comply with a relational paradigm of work-dignity.

Notice the typical paradox of the present epoch. While trust in institutions, both political and economic, is declining for a variety of reasons, among them the endemic increase in corruption, the global market is increasingly dominated by firms and organisations which demand unprecedented signs of trust from their clients and users. It is as if individuals had learnt the lesson of the well-known story of Puccini’s Tosca: mutual distrust always produces suboptimal results.[20] As an example, consider granting permission to total strangers to use one’s own home (Airbnb) or sharing car journeys with people one does not know (Uber, BlaBlaCar). Indeed, what is happening is that the decline in institutional trust of the vertical type is being matched by an increase in personal trust, that is, horizontal trust between people. For Tim Wu, the well-known jurist at Columbia University, what we are facing is a massive transfer of social trust: after the abandonment of trust in institutions, there is a move toward technology.

“Trust – writes R. Botsman[21] – is the new currency of the world economy. It is a real multiplier of the opportunity for gain because it enables the development of underutilised goods”. Consider the phenomenon of cryptocurrencies – the best known of which, but certainly not the only one, is bitcoin – digital currencies exchanged among peers. The transactions are not guaranteed by any central authority but are validated by the participants themselves by means of an algorithm. At the same time, the strength of these cryptocurrencies lies in enabling anonymous transactions which are not subject to taxation and are protected from confiscation by the State. Their basic infrastructure is the blockchain, a distributed ledger on which all transactions are recorded without possibility of modification. Today, blockchain technology – until now used almost exclusively in the financial sphere – already allows a vast gamut of applications, from those in the social sphere to those of a politico-administrative type. One example is the handling of administrative processes, where the blockchain can certify a particular act, securely and forever, without the need for third-party certification. Consider also that the United Nations is planning to make use of the same technology for the handling of aid of various kinds to refugees and migrants. And so on.
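The core mechanism can be conveyed in a few lines of code. What follows is a deliberately minimal sketch of the hash-chain idea only – it omits the consensus algorithm, digital signatures and network distribution that a real blockchain requires, and all names and data are invented for illustration:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents (sorted keys keep the encoding stable)."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, data: str) -> None:
    """Append a block that commits to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def verify(chain: list) -> bool:
    """Any retroactive edit breaks the prev_hash link of every later block."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
print(verify(chain))                     # True
chain[0]["data"] = "Alice pays Bob 500"  # attempted tampering
print(verify(chain))                     # False: the alteration is detected
```

Because each block commits to the hash of its predecessor, rewriting any past entry invalidates every subsequent link; in a distributed setting this is what allows the participants themselves, rather than a central authority, to detect and reject modification.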

The heart of the contemporary paradox lies in the fact that today’s market economy has more need than ever of mutual trust in order to be able to properly function. At the same time, however, the extraordinary levels of efficiency reached by our economic systems so far make us forget that it is necessary to strengthen the fiduciary links among people. This is because, while the market “consumes” trust increasingly, it does not succeed, given the present institutional set-up, in producing enough of it. Hence the disturbing social dilemma: we are seeking ever greater efficiency in order to increase material well-being, wealth and security, but, in order to pursue this objective, we squander irresponsibly the patrimony of trust which we have inherited from previous generations.[22] Bear in mind that a command economy can fare well without trust to ensure its proper functioning; not so a market economy, as we said above. At the time of the USSR (“Trust is good, control is better”, Lenin was accustomed to say) people felt no need to invest in interpersonal trust; institutional trust was enough.

How do we solve this dilemma? David Hume’s proposal is well known. For the founder of philosophical empiricism (and initiator of ethical non-cognitivism), the disposition to award trust, and to repay received trust, finds its real foundation in the personal advantages which spring from a good reputation. “We can satisfy our appetites better in an indirect and artificial way … It is thus that I learn to give a service to someone else without feeling a real benevolence for him. In fact, I foresee that he will render me a service, expecting another of the same type in order to preserve the same reciprocity of good offices with me or with others”.[23] It is almost unbelievable that a great philosopher like Hume could fall into such a conceptual oversight as to confuse the notion of reciprocity with a sequence of self-interested exchanges. Contrary to the principle of the exchange of equivalents, reciprocity is a set of interrelated gift relations. It is even more strange that subsequent philosophical thought never detected the pragmatic contradiction into which Hume fell when, a few lines above the passage just quoted, he gave the example of two cultivators of grain who end up suffering losses on account of the absence of reciprocal guarantees. Again in the Treatise, we read: “Your grain is ripe today; mine will be tomorrow. It will be useful for both of us if I work for you today and tomorrow you give me a hand. But I am not displaying any particular sentiment of benevolence towards you and I know that neither are you doing the same for me. Therefore, I shall not work for you today because I do not have any guarantee that tomorrow you will show me any gratitude. Thus, I shall leave you to work on your own today and you will do the same tomorrow. In the meantime, however, the bad weather will intervene and so we shall both end up losing our crops through lack of reciprocal trust and guarantee”.[24]

Nor is the Kantian categorical imperative of great help for present purposes. “Follow the rule which, if everyone followed it, would yield a result you could wish for”. This is a principle of equality of duty. However, Kant’s theory suffers from an evident aporia when one seeks to put it into practice. The Kantian individual chooses the rule he is going to apply by assuming that everyone else is also going to apply it. However, since, in general, different people have different preferences over final results, the Kantian rules preferred by different people will also differ a priori. The result is that each will follow his own preferred rule (whence his action) while assuming that others will act in a way in which, in reality, they will not act at all. This means that the Kantian principle cannot be applied to itself; it cannot validate itself. This is a serious logical inconsistency for a moral doctrine which is intended to be universal. Only if all individuals entertained the same preference structure would the aporia in question disappear. But it is clear that, if this were the case, the Kantian principle would lose all its practical relevance.

Today, on the basis of laboratory experiments and results obtained from the neurosciences, behavioural economics suggests the following way to escape the dilemma indicated above. In a collective work published in the prestigious American journal Science (2006), it is reported that if one deactivates a particular zone of the cerebral cortex by transcranial magnetic stimulation, the subjects increase their pro-social behaviour, which leads to a substantial increase in their degree of trust. In particular, it has been discovered that the nasal administration of a certain amount of oxytocin (a hormone produced by many mammals) dampens the activity of a specific region of the brain (the amygdala) which serves to control the behaviour of individuals in their relations of trust.[25] Consider also the procedures aimed at cognitive enhancement which act on capacities such as attention, memory and the tendency to mental fatigue. There are already techniques, such as deep brain stimulation, which envisage the implanting of a microchip in the brain.

Several policy measures stem from the argument developed above. Let me suggest a few of them. Regarding the future of work, responsible dialogue between employers’ and workers’ organizations needs to start anew, far from the usual public debate and a somewhat sclerotic relationship; trust-building meeting places are needed for this purpose. New ways of cooperation should be explored between the public sector and private agents to design transition projects in accordance with traverse theory. The ongoing debate on educational priorities must be revised in the light of findings on the future of work. The uncertainties of technological development should lead to a revaluation of humanistic studies in order to improve the soft skills of workers. Indeed, the digital revolution urgently requires that the focus shift from protecting jobs to protecting workers, providing them with the necessary flexible social benefits and learning possibilities in a changing world. Regulation is too slow to keep up with the pace of innovation, so society and the economy must rely on culture to regulate company behaviour.

The new economy produces new kinds of contracts, cooperation and conflicts. Policy measures are needed to protect the weaker party in these new situations. More generally, policies that redistribute income across generations can ensure that a rise in robotic productivity benefits all generations. For a very useful critical review of the literature in this area, see T. Balliester and A. Elsheikhi, “The future of work: a literature review” (ILO, Geneva, March 2018). The authors analyse the future of work along five dimensions: the future of jobs; their quality; wage and income inequality; social protection systems; social dialogue and industrial relations. In his brilliantly written book (2019), C.B. Frey devotes considerable attention to the question of what should be done to prevent the divide between the winners and losers of automation from growing further. Education, retraining, wage insurance, tax credits, regulation and relocation are among the most relevant policy measures that can, and therefore should, be adopted. While there are good reasons to be optimistic about the long run, such optimism is only possible if we successfully manage the short-term dynamics. People who lose out to automation will quite rationally oppose it, and if they do, the short-term effects cannot be seen in isolation from the long run. Taking a long-term perspective, governments chose to overlook the costs of globalization and the Fourth Industrial Revolution and focused only on the benefits. Those benefits were indeed significant, but the failure to deal with individual and social costs ended up costing mainstream politics its credibility. The general problem to be aware of is that an unconditional belief that “AI can do everything better”, to take one example, creates a power imbalance between those developing AI technology and those whose lives will be transformed by it. The latter essentially have no say in how these applications will be designed and deployed.

5. Trans-humanism versus neo-humanism: the ethical urgency to make a decision

I pass now to consider that grand project, at once political and philosophical, which is transhumanism, whose aim is both to fuse man with machine so as to extend his potential indefinitely and (above all) to demonstrate that consciousness is not an exclusively human feature. The ultimate objective of the transhumanist project is not so much commercial or financial; rather, it is political and, in a certain way, religious, since it aims at transforming – not merely improving – our way of life as well as our basic values. Transhumanism is the apologia for a human body and brain “augmented”, that is, enriched by artificial intelligence, whose use would enable the separation of the mind from the body and so support the claim that our brain, in order to function, would not need a body. This opens up far-reaching arguments regarding the meaning of the person and his or her unity. It might be interesting to recall that the word “robot” derives from the Czech robota, which signifies, literally, forced labour. It appears for the first time in Karel Čapek’s 1920 science-fiction play R.U.R. – Rossum’s Universal Robots. The play describes the boss of the RUR firm dreaming of a coming time in which the prices of goods would fall to zero thanks to the increase in productivity ensured by robots, and in which toil and poverty would be defeated. But the dream vanishes when the robots “decide” to eliminate their creators, killing all human beings. In the present era, the mechanical makeup of robots – which rendered them not very versatile and therefore of limited benefit – has been replaced by an electronic-computational one. In this way we have arrived at cognitive manufacturing, where robots are placed alongside humans and understand the context in which they operate.[26]

The strategy pursued by Ray Kurzweil, the scientist in charge of an ongoing Google project, aims at producing cyborgs endowed with physical features and capabilities similar to those of homo sapiens.[27] Hidden behind the desire to take hold of the reins of evolution is the objective of playing God.[28] The physicalist approach (according to which there exists only one reality – the physical – which the cognitive sciences seek to understand by explaining how consciousness is generated), welcomed by the neurosciences, raises a serious question about the connection between responsibility and freedom. We have emerged from a long period during which it was accepted that freedom, as an expression of responsibility, was matched by responsibility, as consent to the exercise of freedom itself. What does it mean for a worker to work all day long with a collaborative robot? We know already how the advent of social networks and the use of smartphones are changing our habits and our lifestyles. But can we imagine a future in which people pass their whole working day in “dialogue” – so to speak – with a robot without falling into new and more serious forms of alienation? It may be interesting to recall Gramsci’s thought on a question of this kind. Referring to F. Taylor’s famous phrase about the “tamed gorilla”, Gramsci writes: “With brutal cynicism, Taylor expresses the goal of American society: to develop in the worker mechanical and automatic behaviour to the highest degree, to break the old psycho-physical connection of qualified professional work which demanded a certain active participation of intelligence, imagination and worker initiative, and to reduce productive operations solely to their physical, mechanical aspect. But, in fact, this is nothing new: it is only the most recent phase of a long process which began with the rise of industrialism itself, a phase which is only more intense than the preceding ones and manifests itself in more brutal forms, but which too will be superseded with the creation of a new psycho-physical connection of a different type from the preceding ones and undoubtedly superior”.[29]

The question just posed introduces us to the intriguing theme of responsibility.[30] As we know, responsibility has different meanings. One can speak of responsibility to mean a freedom which has the sense of responsibility. But one can speak of responsibility in a very different sense, when it carries an obligation to which one has to respond (this is the American concept of “accountability”). Finally, one can speak of responsibility to indicate that one is guilty of an action that has been performed: in this sense, “I am responsible” means that I am guilty of something. Thus responsibility and freedom turn out to be closely related, even if, in recent times, on the wave of advances in the neurosciences, the tendency has been to loosen the connection between them. Take the enhancing operations which I have mentioned above. The subject who has been empowered would be taking her decisions not on the basis of the reasons for and against but as the result of a causal influence exercised on her brain by means of biotechnological manipulation. This is tantamount to saying that, in order to improve the performance of human beings, they are deprived of their moral autonomy, which is their greatest good.[31]

While it seems relatively easy to identify the direct responsibility of agents – as when the owner of a sweatshop exploits child labour for gain – what can we say about economic activity which is undertaken with the intention of disadvantaging no one and which yet causes negative effects for others? For example, who is responsible for unemployment, poverty, inequality, etc.? In economics, the traditional response consists in maintaining that these are “unintended consequences of intentional actions” (to use the famous expression of the Scottish moralists of the 18th century). Therefore, the only thing to do is to assign to society the task of remedying (or alleviating) the negative consequences. Indeed, the welfare state arose and was developed precisely to make the responsibility of each individual collective and impersonal. But is it really so? Are we sure that the mechanisms of the free market are inevitable and that their results are as unintended as we are led to believe?[32]

It is worth giving a single example. In his essay “Is business bluffing ethical?”[33] – the most cited essay in the financial literature – Albert Carr writes: “Where it seeks to do to others what it does not want others to do to us (sic!), finance must be guided by a body of ethical standards different from those of common morals or of religion: the ethical standards of the game. If an action is not strictly illegal, and can yield a profit, then to carry it out is an obligation for the businessman”. It is this way of thinking – founded on the thesis of a double morality – that has favoured the outbreak of major financial scandals. As Z. Bauman was one of the first to note, the social organisation of second modernity has been thought up and designed to neutralise the direct and indirect responsibility of the agents. The strategy adopted – one of great intellectual subtlety – was twofold: on the one hand, to lengthen the distance (spatial and temporal) between the action and its consequences; on the other hand, to achieve a huge concentration of economic activity without a centralisation of power. This is the specific character of the adiaphoric company, a form of company unknown before the Second World War, which aims at eliminating the question of the moral responsibility of group action. Adiaphoric action carries a merely “technical” responsibility which cannot be judged in moral terms of good and evil; it is assessed in solely functional terms, on the basis of the principle that everything that is possible for the agents is also ethically licit, without it being necessary to make an ethical judgement of the system, as Luhmann has taught.

In recent times, adiaphoric responsibility has received a new impulse precisely from the fact that the new technologies are producing means in search of “questions”, or of problems to be solved – exactly the opposite of what happened in the past. So the question becomes: in what does the principle of responsibility consist in the society of algorithms? We rely on complex procedures to which we delegate the success of operations which human beings do not know how to perform on their own. Yet algorithms are irresponsible, though neither neutral nor objective, as is erroneously believed. When a programme makes an error, it does not suffer the consequences, because the argument is that mathematics is exterior to morality. But this is not the case: algorithms are not pure mathematics; they are human opinions enshrined in mathematical language. They therefore discriminate, like any human decision-maker. For example, the hiring process is becoming increasingly automated because this is thought to render recruitment more objective, eliminating prejudices. But, rather than diminishing, discriminatory dynamics are increasing in our society.

Generalising for a moment, the real problem of smart machines begins the moment they perform actions which involve a choice or a decision. The soldier-robot, the car-robot, the cleaner-robot can all make choices that are fatal for non-robotic lives. Whose is the responsibility in such cases? As Günther Anders clearly explained,[34] the 21st century has inaugurated the era of human irresponsibility, immunising subjects from their relations. “Smart machines” (those endowed with artificial intelligence) are able to make autonomous decisions which have both social and moral implications.[35] How then do we ensure that the decisions made by these objects are ethically acceptable? Given that these machines can cause all kinds of damage, how do we make sure that they are able to differentiate between decisions that are “right” and “wrong”? And where some kind of harm cannot be avoided – think of the case of the driverless vehicle which has to choose between colliding with another vehicle, killing all its passengers, and running over some children who are crossing the road – how do we instruct (in the sense of programming) these machines to choose the lesser evil? Examples in the literature are already abundant. All agree on the need to endow AI with some kind of ethical canon for solving ethical dilemmas of the “autonomous driver” kind.[36]

The differences arise the moment one has to choose the way forward (that is, the approach): top-down (ethical principles are programmed into the intelligent machine: the human being transfers his own ethical view of the world to artificial intelligence); or bottom-up (the machine learns to make ethically sensitive decisions from observing human behaviour in real situations). Both approaches pose serious problems. These are not so much of a technical nature; rather, they concern the larger question as to whether intelligent machines can or cannot be considered moral agents (that is, moral machines). We are just at the beginning of a cultural and scientific debate which already promises to be simultaneously fascinating and worrying. Look, for example, at the recent positions of A. Etzioni and O. Etzioni[37] who deny the possibility of attributing the status of moral agent to Artificial Intelligence and so deny any foundation to the research programme of Internet Ethics which studies the ethical aspects of Internet Communication in its various forms.

The conclusion the two Etzionis draw is that there will be no need to teach machines ethics, even if it could be done. This opinion is not shared, for example, by the research group operating within the Neuralink Corporation in California, which has for some time been developing digital technologies to achieve connections between computers and the human mind and is planning a cyber-man with a microchip in his brain.[38]
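To make the contrast between the two approaches distinguished above concrete, here is a deliberately toy sketch (all rules, labels and data are invented for illustration; nothing here implements any actual system): the top-down agent applies a principle its programmer hard-coded, while the bottom-up agent merely reproduces the statistics of observed human choices.

```python
# Toy contrast between top-down and bottom-up machine ethics.
# All rules and data are invented for illustration only.
from collections import Counter

def top_down_decision(options):
    """Top-down: apply a hand-coded principle (minimise expected harm)."""
    return min(options, key=lambda o: o["expected_harm"])

def bottom_up_decision(options, observed_choices):
    """Bottom-up: imitate whichever option humans chose most often."""
    preference = Counter(observed_choices)
    return max(options, key=lambda o: preference[o["label"]])

options = [
    {"label": "swerve", "expected_harm": 2},
    {"label": "brake",  "expected_harm": 1},
]
human_history = ["swerve"] * 7 + ["brake"] * 3  # invented observations

print(top_down_decision(options)["label"])                  # 'brake'
print(bottom_up_decision(options, human_history)["label"])  # 'swerve'
```

That the two agents can disagree is precisely the point: the first answers to its programmer’s principles, the second to the statistics of observed behaviour, and in neither case does the machine itself thereby become a moral agent.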

It might be useful to consider that, in the current debate, two different ways of conceptualising AI are coming into conflict. The first concerns software which seeks to reason and make cognitive decisions in the same way as human beings; for such a conception, AI aspires to replace man.[39] The second aims rather at providing smart assistance to human actors: an AI which is a partner of human beings, often referred to as “Intelligence Augmentation” or “Cognitive Augmentation”. In practice, Google is moving in the former direction, the declared objective being that of fusing the human with the machine in order to increase the capacities of both without limit; IBM, with its cognitive computing, is moving in the latter direction. In 2013, IBM launched an Artificial Intelligence system called “Watson” in homage to its first president, Thomas Watson. Watson replies to questions on any theme, put to it in ordinary language. On the first page of the website devoted to Watson, one reads: “Watson is a cognitive technology which can think like a human being”. It remains to be seen whether the machine can become more intelligent than human beings. In any case, it remains true that the standardised responses which Watson (or another machine) can give will never be more effective than those which can be given by people who are able to understand the problems of other people. Indeed, although intelligent, machines will never be capable of empathy, because they are not endowed with moral sentiments. More open than ever today, in any case, is the problem of knowing how and how much the development of enhancement technologies will affect values such as social justice. To the extent that these technologies, being very costly, will never be able to be enjoyed by all, there arises the question of an increase in social inequalities, already intolerably high today. The serious risk is that of forming a parallel society made up of the richest part of the population, growing gradually further away from the poorer, un-enhanced part.[40]

6.             Instead of a conclusion: beyond libertarian individualism

It is when one is confronted with problems such as those I have discussed in the previous pages that one comes to understand the serious limitations of libertarian individualism as the anthropological foundation of the currently prevalent cultural matrix, especially in the West. As is well known, individualism is the philosophical position according to which it is the individual who attributes value to things and to interpersonal relations. Moreover, it is always the individual alone who decides what is good and what is evil, what is licit and what is illicit. In other words, everything to which the individual attributes value is good. For axiological individualism, there are no objective values but only subjective values or legitimate preferences. In his essay Individualmente insieme,[41] Z. Bauman explains that “the fact of conceiving its own members as individuals [and not as people] is the distinctive mark of modern society” (p. 29). Individualisation, Bauman continues, “consists in the transformation of the human identity from something given to a task, and in the attribution to the actors of the responsibility for the performance of this task and the consequences of their actions” (p. 31). Bauman’s thesis, therefore, is that “individualisation guarantees an ever-increasing number of men and women an unprecedented freedom of experimentation, but also carries with it the unprecedented task of facing its consequences”. Therefore, the continually increasing gap between the “right of self-affirmation”, on the one hand, and the “ability to control the social contexts” in which this self-realisation has to take place, on the other, “appears to be the principal contradiction of second modernity” (p. 39).

On the other hand, libertarianism is the thesis advanced by more than a few philosophers in recent times according to whom, in order to establish freedom and responsibility, it is necessary to have recourse to the idea of self-causation. For example, G. Strawson, among many, maintains in his essay “Free Agents”[42] that only the agent who is self-caused, self-created – in his own words, causa sui (as if he were God) – is fully free. It is now possible to understand why the watchword of this period has sprung from the marriage between individualism and libertarianism, that is, from libertarian individualism: “volo ergo sum”, “I will, therefore I am”. The radicalisation of individualism in libertarian terms has led to the conclusion that every individual has the “right” to extend himself as far as his power allows. Freedom as the loosening of bonds is the dominant idea in cultural circles: since it is they which limit liberty, the bonds are what must be loosened. By wrongly equating bonds with chains, the limitations on freedom – the chains – are confused with the conditions of freedom – the bonds. Consider the following: whereas the freedom of the moderns was basically political – the possibility of being masters of the material and social conditions of one’s own existence – the individualistic freedom of the post-moderns is the claim of the individual to the right to do everything that is technically possible. We enjoy an abundance of these freedoms, but no longer the freedom to affect in practice what Marx called the social and material conditions of the society in which we live. Consider what is happening today with the new technologies. On the one hand, our space of freedom is expanding, thanks to the opportunities offered by the technological enhancement of our capacities for communication. On the other hand, in order to enjoy such opportunities to the full, our freedom is exercised in uncritical subordination to the structure of the net. It seems as if today’s society is returning to what, in the 16th century, Étienne de La Boétie prophetically labelled “voluntary servitude”.

This is an aspect which Michel Foucault grasped with rare perspicacity when, tackling the problem of access to the truth, he asked whether it is true that today we live in a time when the market has become a “place of truth”, that is, where the entire life of the subjects is subsumed under economic efficiency and where it is again the market which sees to it that the government, “to be a good government”, must function according to that place of veridiction: “the market must speak the truth, and must do so in relation to the practice of government. It is its role of veridiction that, from now on, and in a clearly indirect way, will bring it to command, dictate and prescribe the legal mechanisms according to the presence or absence of which the market will have to organise itself”. It is interesting to note that even a leading advocate of the digital revolution like J. Taplin has written:[43] “The libertarians who control some of the main Internet companies do not actually believe in democracy. The people who run these monopolies believe in an oligarchy in which only the most brilliant and rich succeed in determining our future” (p. 3). It is to be borne in mind that the various transhumanist positions are all of an individualistic stamp. But this cultural arrangement has a further serious consequence for recent developments in economic theory concerning the Fourth Industrial Revolution. If one consults the extended and well-documented critical survey by A. Goldfarb and C. Tucker[44] – a survey which considers more than four hundred works on the subject – it will be noted that only one aspect is made the object of attention and analysis: how much the new technologies reduce the costs of production, increase productivity, enable share prices to rise, and so on. Nothing is said of the impact of digitalisation on the structure of people’s preferences, on their lifestyles, on their cognitive maps – as if people did not care about the fact that “numerocracy”, associated with today’s rampant “omnimetrics” (the tendency to measure every dimension of human life), holds no promise of good.

The question arises spontaneously: how do we trace the origin of the spread of the individualistic-libertarian culture? To answer, it is worth recalling that the term individuum arose in the context of medieval scholastic philosophy and is the translation of the Greek atomos. (It was Severinus Boethius who defined the person as naturae rationalis individua substantia, the individual substance of a rational nature.) But it is from the end of the 18th century that individualism began to be coupled with libertarianism. The reasons for this marriage are several and different; I limit myself to indicating the two most relevant. On the one hand, there was the spread in the spheres of European high culture of the utilitarian philosophy of Jeremy Bentham, whose principal work, An Introduction to the Principles of Morals and Legislation, dating from 1789, would take a few decades to become a leading part of economic discourse. It is with utilitarian morals, and not with the Protestant ethic – as some still maintain – that the hyper-minimalist anthropology of homo oeconomicus takes root within economic science, and with it the methodology of social atomism. The following passage from Bentham is notable for its clarity: “The community is a fictitious body, composed of the individual persons who are considered as constituting as it were its members. The interest of the community then is, what? – the sum of the interests of the several members who compose it”.

On the other hand, there is a serious misconception of the way to preserve liberty. As argued by D. Acemoglu and J. Robinson,[45] liberty is hardly the “natural” order of things. In most places and at most times, the strong have dominated the weak, and human freedom has been quashed by force or by customs and norms. Either states have been too weak to protect individuals from these threats, or states have been too strong for people to protect themselves from despotism. Liberty emerges only when a delicate and precarious balance is struck between state and society. There is a Western myth that political liberty is a durable construct, arrived at by a process of “enlightenment”. This static view is a fantasy, the authors argue. In reality, the corridor to liberty is narrow and stays open only through a fundamental and incessant struggle between state and society. Today we are in the midst of a time of wrenching destabilisation. We need liberty more than ever, and yet the corridor to liberty is becoming narrower and more treacherous. The danger on the horizon is not “just” the loss of our political freedom, however grim that is in itself; it is also the disintegration of the prosperity and safety that critically depend on liberty. The opposite of the corridor of liberty is the road to ruin.

Very different from each other in their philosophical assumptions and their political consequences, these two lines of thought ended up generating on the economic level a result that was perhaps unexpected: the affirmation of an idea of the market antithetical to that of the tradition of civil economy – an idea which sees the market as an institution founded on a double norm: the impersonality of the relations of exchange (the less I know my counterparts, the greater will be my advantage, because it is better to do business with strangers!); and the exclusively self-interested motivation of those who take part, so that “moral feelings” such as sympathy, reciprocity and fraternity play no significant role in the market arena. Thus it has happened that the gradual and majestic expansion of market relations over the last century and a half has ended up reinforcing that pessimistic conception of human beings already theorised by Hobbes and Mandeville, according to whom only the harsh laws of the market would succeed in taming their perverse impulses and anarchic forces. The caricature of human nature which thus emerges has contributed to endorsing a twofold error: that the sphere of the market coincides with egoism, the place where each one pursues, as best he can, his own individual interests; and, symmetrically, that the sphere of the State coincides with solidarity, the pursuit of collective interests. This is the foundation of the well-known dichotomous model of social order based on State and market: a model in which the State is identified with the public sphere and the market with the private one. In line with the civil economy paradigm, Rajan, in his recent book,[46] brings to the reader’s attention relevant reasons in support of the triadic model of social order – State, Market, Community – to replace the obsolete dyadic model – State, Market – typical of modernity.

Which component of our conceptual infrastructure has to change if we are to be able to go beyond the individualistic-libertarian thought which is rampant today? Firstly, we must abandon that anthropological pessimism which goes back to Guicciardini and Machiavelli, passes through Hobbes and Mandeville and ends up in the modern systematisation of the economic mainstream. It is the assumption that human beings are individuals too opportunistic and self-interested for us to think that, in their actions, they can take into account categories such as moral feelings, reciprocity or the common good. In his famous The Fable of the Bees: or, Private Vices, Public Benefits (1714), B. Mandeville wrote: “neither the friendly qualities and kind affections that are natural to man, nor the real virtues he is capable of acquiring by reason and self-denial, are the foundation of society; but that what we call evil in this world … is the grand principle that makes us sociable creatures, the solid basis, the life and support of all trades and employments without exception”.

It is on this anthropological cynicism – founded, bear in mind, on an assumption and not on evidence drawn from the real world – that the imposing edifice of self-interest, still the dominant paradigm in economics, has been built. It is clear, or ought to be on careful reflection, that within the horizon of homo oeconomicus there cannot be any room to solve the ethical dilemmas stemming from convergent technologies. From this perspective, humans are one-dimensional beings, capable of acting only to accomplish a single aim. The other dimensions – political, social, emotional, religious – must be held strictly apart or, at most, can contribute to making up the system of constraints within which the objective function of the agents is to be maximised. The category of the “common” has two dimensions: being in common and what is owned in common. No one can fail to see that, in order to solve the problem of what is owned in common, the subjects involved must recognise their being-in-common. This is hammered home, with a wealth of detail, in Pope Francis’s Laudato Si’.[47]

Clearly, a conception of this kind would make sense if it were true that all (or the majority of) individuals were self-interested and asocial subjects. But the factual evidence, now quite abundant and based both on laboratory experiments and on empirical research, tells us that this is not the case: the majority in fact exhibit pro-social behaviour (for example, sacrificing themselves for collective aims) and are not purely self-interested (for example, they practise charitable giving systematically). This is why Lynn Stout[48] forcefully advances the proposal to take seriously, in the theory of law, the idea of conscience, that interior force which inspires pro-social and non-egoistical behaviour. Conceptualising the law as a system of prices to be paid as compensation for various kinds of negligence and disregard of contractual terms has the certainly negative result of increasing the “cost of conscience”. Teaching egoism is a self-fulfilling prophecy.

We know that the behavioural traits observed in reality (pro-social, asocial, antisocial) are present everywhere in society. What changes from one society to another is their combination: in some historical periods antisocial and/or asocial behaviour prevails; in others, prosocial behaviour, with effects on the economic plane and on civil progress that are easy to imagine. This raises the question: what is the main factor determining that, in a given society and in a particular historical period, the composition of behavioural traits sees the prevalence of one type or the other? Well, a decisive, if not the only, factor is the way in which a society structures its legislative system. If, adopting a Hobbesian anthropology, the legislator makes laws which lay on the shoulders of all citizens heavy sanctions and punishments intended to prevent illegal acts on the part of the antisocial, it is evident that prosocial citizens, who would certainly not need those deterrents, will nonetheless have to bear their cost and so, albeit obtorto collo, will end up modifying their own motivational system from within. As Stout writes (2011), if you want to increase the number of good people, you must not tempt them to be wicked. Here the thought and warning of Giacinto Dragonetti, the great Neapolitan figure of the Italian Enlightenment, is more relevant than ever. Publishing in 1766 his Delle virtù e dei premi (On Virtues and Rewards), in respectful but firm criticism of Cesare Beccaria’s celebrated Dei delitti e delle pene (On Crimes and Punishments) (1764), Dragonetti took seriously the claim of the Scholastics according to whom virtue is more contagious than vice, on condition that it is made known. That is why the legal apparatus must, in primis, provide rewards (not incentives) to the virtuous and, in secundis, threaten wrongdoers with punishments. (Dragonetti’s work, translated at the time into four foreign languages, was quoted by Thomas Paine in Common Sense in 1776.)

This is the so-called crowding-out mechanism: Hobbesian-type laws tend to increase the proportion of extrinsic motivations and, therefore, to augment the spread of antisocial behaviour, precisely because antisocial types are little troubled by the cost of law enforcement, since they are always seeking ways to evade the law. (See what happens with tax evasion and tax avoidance.) In light of the above, we are now able to understand how and where to intervene if we wish to increase the opportunities to advance practices which hinder the spread of individualistic behaviour. As long as we think of economic activity as a type of agency whose logic can only be that of homo oeconomicus, it is clear that we shall never arrive at the admission that there can be a civil way of managing the economy. But that depends on the paradigm – the spectacles through which we observe reality – and not directly on reality itself. We must turn to the paradigm of civil economy and its categories of thought if we wish the present, second great transformation – the first being the one investigated by Karl Polanyi (1944) – to constitute real progress for peoples, that is, to aim at integral human development.[49]
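The logic of the crowding-out mechanism can be made vivid with a back-of-the-envelope simulation. The toy model below is purely illustrative: the initial share and the yearly crowding rates are invented numbers, not estimates drawn from data. It merely shows how, under a sanction-heavy regime, the share of intrinsically motivated citizens erodes year by year, whereas a reward-oriented, Dragonetti-style regime lets it recover.

```python
# Toy simulation of motivational crowding out. All parameters are
# invented for illustration; nothing here is calibrated to data.

def simulate(years: int, sanction_heavy: bool) -> float:
    """Return the final share of intrinsically motivated (prosocial) citizens."""
    prosocial = 0.60          # initial share acting from intrinsic motivation
    # Under a Hobbesian, sanction-heavy regime a small fraction of the
    # prosocial switch each year to extrinsic, price-of-conscience reasoning;
    # under a reward-oriented (Dragonetti-style) regime some extrinsically
    # motivated citizens are drawn back toward intrinsic motivation.
    crowd_out_rate = 0.04 if sanction_heavy else 0.0
    crowd_in_rate = 0.0 if sanction_heavy else 0.02
    for _ in range(years):
        prosocial -= crowd_out_rate * prosocial
        prosocial += crowd_in_rate * (1.0 - prosocial)
    return prosocial

for regime, heavy in [("sanction-heavy", True), ("reward-oriented", False)]:
    share = simulate(years=20, sanction_heavy=heavy)
    print(f"{regime:>15}: {share:.0%} intrinsically motivated after 20 years")
```

Under these arbitrary parameters, twenty years of the sanction-heavy regime leave little more than a quarter of the population intrinsically motivated, while the reward-oriented regime raises the share to almost three quarters – a cartoon, of course, but one that captures the direction of the effect to which Stout and Dragonetti point.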

The second strategy to contain individualism within reasonable limits is that of once again putting at the centre of public discourse the principle of fraternity – a word which already appeared on the flag of the French Revolution of 1789. It is the great merit of European culture to have known how to shape the principle of fraternity in both institutional and economic terms, making it a load-bearing pillar of the social order. It was the Franciscan school of thought that gave the term the meaning it has preserved over time. There are pages of the Rule of Francis which are very helpful for understanding the real meaning of the principle of fraternity, which is that of simultaneously fulfilling and exceeding the principle of solidarity. In fact, whereas solidarity is the principle of social organisation which allows the unequal to become equal, fraternity is the principle of social organisation which allows the equal to be diverse, not different: it allows people who are equal in their dignity and fundamental rights to give a diverse expression to their life-plan, or their charisma. The times we have left behind, the 1800s and, above all, the 1900s, were characterised by great battles, both cultural and political, in the name of solidarity, and this was a good thing; think of the history of the trade union movement and the struggle to acquire civil rights. The point is that the good society cannot be content with the horizon of solidarity, because a society which is only solidaristic, without being fraternal, would be a society from which each one would seek to distance himself. The fact is that, whereas a fraternal society is also a solidaristic society, the opposite is not true.

Not only that, but where there is no gratuitousness, there cannot be any hope. In fact, gratuitousness is not an ethical virtue like justice. It concerns the super-ethical dimension of human action; its logic is that of superabundance, whereas the logic of justice is that of equivalence, as indeed Aristotle taught. Thus we understand why hope cannot be anchored in justice. In a society that was only, hypothetically, perfectly just, there would be no room for hope. What could its citizens ever hope for? This is not the case in a society where the principle of fraternity has succeeded in putting down deep roots, precisely because hope is nourished on superabundance.

We have forgotten that a human society in which the sense of fraternity is extinguished is unsustainable – a society in which everything is reduced, on the one hand, to improving transactions based on the exchange of equivalents and, on the other, to increasing the transfers effected by the organisations of the welfare state. This forgetting explains why, despite the quality of the intellectual powers in play, a solution to the problems described above has not yet been reached. The society in which the principle of fraternity is dissolved has no future; that is, the society in which there exists only “giving in order to have” or “giving through duty” is not able to advance. That is why neither the liberal-individualist vision of the world, in which everything (or almost everything) is exchange, nor the statist vision of society, in which everything (or almost everything) is obligation, is a safe guide to bring us out of the shallows in which the Fourth Industrial Revolution is putting our model of civilisation to a harsh test – as Pope Francis does not cease to emphasise.

END NOTES

[1] University of Bologna and Pontifical Academy of Social Sciences.
[2] For a useful review of the evidence of a large increase in AI-related activity, see J. Furman and R. Seamans, “AI and the Economy”, NBER, June 2018.
[3] The prefix “s” in the Italian word “sviluppo” (development) stands for “dis” and confers an opposite sense on the word to which it is joined.
[4] For a stimulating critical treatment, I refer to L. Palazzani, Il potenziamento umano. Tecnoscienza, Etica e Diritto, Torino, 2015, where the author also tackles the problem of the editing of the genome, that is, the deliberate restructuring of one or more parts of the genetic inheritance contained in cells.
[5] In this respect, the thought of H. Arendt comes to mind, when she said that the master is the one who takes responsibility for the world in which the pupil lives.
[6] J. Twenge, iGen, New York 2017.
[7] For an effective description, I refer to C.B. Frey, The Technology Trap, Princeton, Princeton University Press, 2019.
[8] For the details, see K. De Backer et al., “Industrial Robotics and the Global Organization of Production”, OECD, Science, Technology and Industrial Policy Papers, 2018/03, Paris, 2018.
[9] J. Sachs et al., “Robots: curse or blessing? A basic framework”, NBER, April, 2015.
[10] Holacracy, Cambridge, Mass., Harvard University Press, 2007.
[11] For these, I refer to N. Bloom, C. Jones et al., “Are ideas getting harder to find?”, NBER, March 2018.
[12] D. Acemoglu and P. Restrepo, “AI, automation and Work”, NBER, Jan 2018.
[13] “The growing importance of social skill in the labour market”, NBER, August 2015.
[14] “Polanyi’s paradox and the shape of employment growth”, in Re-evaluating Labor Market Dynamics, Kansas City, Federal Reserve Bank of Kansas City, 2015.
[15] Not Working: Where Have All the Good Jobs Gone?, Princeton, Princeton University Press, 2019.
[16] “The rate of return to the high-scope Perry pre-school program”, Journal of Public Economics, 94, 2010.
[17] A. Korinek, “Labor in the age of automation and AI”, Research Brief, Jan. 2019, Econfip.
[18] Concerning the effects of AI and other digital technologies on development pathways, particularly of low-income countries, see J. Sachs, “Reflections on Digital Technologies and Economic Development”, Ethics and International Affairs, 32, 2019.
[19] For a general and original framing of the issue see P. Donati, “How to promote the dignity of work in the face of its hybridization in the digital economy”, in this volume.
[20] Cfr. H. Leibenstein, “On some economic aspects of a fragile input: trust”, in R. Feiwel, ed., Essays in honour of Kenneth Arrow, MIT Press, 1987.
[21] Who can you trust?, New York, Public Affairs, 2017.
[22] Patrimony comes from the Latin patrum munus: the gift of the fathers.
[23] Treatise of Human Nature [1740] 1971, pp. 552-3.
[24] Ibid.
[25] D. Narvaez, Neurobiology and the Development of Human Morality, Norton, New York, 2014.
[26] YuMi is one of the first collaborative robots on the market, which operates not only in the manufacturing but also in the service sector.
[27] A cyborg is a kind of synthesis between the human being and technological components: a bionic man.
[28] R. Kurzweil, How to Create a Mind, New York, Viking, 2012. For a general treatment, see C.F. Camerer, “The potential of neuro-economics”, Economics and Philosophy, 24, 2008.
[29] A. Gramsci, Quaderni del carcere, Q 22, “Americanismo e fordismo”. But already in Pope Leo XIII’s Rerum Novarum (1891) these considerations had been formulated, although in a very different context.
[30] See S. Zamagni, Responsabili. Come civilizzare il mercato, Bologna, Il Mulino, 2019.
[31] On this theme, see the reflection of P. Donati, “Globalization of markets, distant harms and the need for a relational ethics”, Rivista Internazionale di Scienze Sociali, 1, 2017.
[32] See Caritas in Veritate of Benedict XVI, chaps. 3 and 4.
[33] Harvard Business Review, 2, 1968.
[34] G. Anders, L’uomo è antiquato (The Outdatedness of Human Beings), originally in German, 1956.
[35] See the case of the Tesla car whose semi-autonomous Autopilot system was involved in the death of its driver in May 2016.
[36] See the work by L. Palazzani, Dalla bioetica alla tecnoetica: nuove sfide al diritto, Torino, Giappichelli, 2017.
[37] “Incorporating Ethics into Artificial Intelligence”, The Journal of Ethics, March 2017.
[38] On the intricate and delicate question of the possibility of attributing “electronic personality” to intelligent robots and, more generally, the opportunity of going from Darwinian natural selection to the deliberate choice of the selection process via the shortcut of biotechnology, see N. Bostrom, “Welcome to a world of exponential change”, in P. Miller, J. Wilsdon, eds., Better Humans? The Politics of Human Enhancement and Life Extension, Demos, London, 2006 and S.M. Kampowski, D. Moltisanti, eds., Migliorare l’uomo? La sfida dell’enhancement, Cantagalli, Siena, 2011.
[39] Turing’s famous thesis has to do with this type of AI.
[40] A. Agrawal et al., “Economic Policy for Artificial Intelligence”, Rotman School of Management, University of Toronto, May 2018.
[41] Diabasis, Rome, 2008.
[42] Philosophical Topics, 32, 2004.
[43] Move Fast and Break Things, London, Macmillan, 2017.
[44] “Digital Economics”, NBER Working Paper 23684, August 2017.
[45] The Narrow Corridor: States, Societies and the Fate of Liberty, London, Penguin Random House, 2019.
[46] The Third Pillar, London, HarperCollins, 2019.
[47] See, in particular, sections 106-114, where the Pope refers to “resistance to the technocratic paradigm” that is so often dominant.
[48] Cultivating Conscience. How Good Laws Make Good People, Princeton, Princeton University Press, 2011.
[49] L. Bruni, S. Zamagni, Civil Economy, London, Agenda, 2016.
