How the Digital Technological Matrix Redefines Human Identities and Relations

Pierpaolo Donati | PASS Academician



The enhancement of human beings through digital technologies raises the question of whether and how the latter promote the flourishing (or, vice versa, the alienation) of what is genuinely human. The Author argues that the human/non-human distinction is revealed in the qualities and causal properties of the social relationality in which interactions between humans and artefacts (such as AI/robots) occur. What must be evaluated is whether the technological mediation of relations between human persons, both interpersonal and organizational, fosters or inhibits those relational goods that realize human fulfillment. In the Digital Matrix Land, being human means learning how to manage the relational imperative, that is, how to face the concrete demands that a non-virtual relationship with the Other presents to us. The divide between social relationships that have intrinsically human qualities and powers and those that are human only in appearance is made difficult to trace by the emergence of the Humanted (the augmented human) and the hybridization of social relationships. But it does not disappear, which means that interhuman relations and relations between humans and technologies are not comparable, despite the claims of the supporters of the posthuman. The relational criterion thus becomes a discriminating factor in the making of new social forms.


The digital revolution brings with it great risks. As Pope Francis wrote (Fratelli Tutti, 43):

“Digital media can also expose people to the risk of addiction, isolation and a gradual loss of contact with concrete reality, blocking the development of authentic interpersonal relationships. They lack the physical gestures, facial expressions, moments of silence, body language and even the smells, the trembling of hands, the blushes and perspiration that speak to us and are a part of human communication. Digital relationships, which do not demand the slow and gradual cultivation of friendships, stable interaction or the building of a consensus that matures over time, have the appearance of sociability. Yet they do not really build community; instead, they tend to disguise and expand the very individualism that finds expression in xenophobia and in contempt for the vulnerable. Digital connectivity is not enough to build bridges. It is not capable of uniting humanity”.

On the other hand, digital media are destined to spread more and more, and will have an exponential development, combining the internet, artificial intelligence and robotics. What can be done to avoid the evils feared by Pope Francis?

The thesis of my contribution is that the hybridization between the human and the digital is inevitable. If we want to avoid relational evils, we must understand the processes of hybridization and how to govern them by giving them human guidance.

When does digital-based communication really enhance human identities and relations? The challenge of hybridization

The transition from legacy media to platform systems means an increasing intermediation of digital and virtual reality in the relationships between human people and their communications. What is communicated in terms of representations, images, knowledge and actions is mediated by what I will call the ‘digital technological matrix’.

Communication – understood as making something common or shared – is the key to understanding how the self-relation emerges out of relations to others (Knudsen 2019). Since communication is a tension between social relationality and self-relationality, the means we use to communicate are decisive for the formation of personal identity (who I am for my Self) and social identity (who I am for others). Human persons are unavoidably ‘relational subjects’, though they can of course be so in a more or less conscious, more or less reflective, more or less passive way, and so on (Donati and Archer 2015).

In a previous paper (Donati 2019: section 2.2.), I introduced the concept of ‘Matrix Land’ as the pervasive environment of digital (virtual) reality in which humanity is destined to live ever further from its natural origin. The Digital Technological Matrix (DTM) can be defined as the globalized symbolic code that governs the creation of digital technologies designed to enhance or replace human action, radically changing social identities and relationships. By modifying human action, digital technology conditions the human persons who use it, to the point that the DTM changes their identities together with the social relations that constitute them (given that identities and social relations are co-constitutive).

The historical phase we are going through is characterized by the fact that existing, for people, means renouncing a stable identity in order to enter the only possible dimension: that of liquidity, that is, of a changing, dissimilar, dissociated and continuously ambiguous identity (Cantelmi 2013). The digital transformation of reality achieved by the DTM intercepts, enhances and shapes certain characteristics of the liquid man: narcissism, speed, ambiguity, the search for emotions and the need for an infinity of light relationships. The fundamental characteristic of techno-liquid sociality is the techno-mediation of the relationship. Making the social relationship virtual leads us to consider the digital connection an imaginary equivalent of the inter-human relationship. This form of relationship is pervaded by an increased perception of loneliness, especially among the people most active on social networks. The “frictionless connection” could allow social networks to send user status updates without their permission: every time we watch a video on YouTube, read the news in an online newspaper, or download an image, a song or other content, the social network in use will automatically communicate it to other users. Social networks, abolishing any distinction between private and public, have already transformed friendship into the sharing of digital content. Here, then, is the new form of relationship: mainly techno-mediated, entrusted to the connection, very fast, exciting and full of online sharing. Into these scenarios burst new forms of artificial intelligence, capable not only of performing an almost infinite number of tasks better than humans, but also of socializing with humans, letting them experience and feel emotions, consoling them and helping them in their existential needs. These new forms of artificial intelligence question us about the new and increasingly blurred boundaries between the human and the non-human.

The techno-liquid mind differs fundamentally from the analogical mind on the cognitive, emotional-affective and socio-relational levels.

From a material point of view, the DTM is made up of very complex communication networks that operate through platforms managed by artificial intelligence (AI platforms) and by smart robotics (AI robots). What makes a difference compared to legacy media is the use of AI, that is, algorithms that influence, direct and manipulate communication in what has been called the infosphere. The infosphere was conceived and created as an environment whose function is to increase human abilities.

If we define digital-based enhancement as the use of technological tools (such as ICTs, AI platforms and AI robotics) to increase the capacities of human persons, groups and social organizations to overcome certain limitations, internal or external to them, the problem that opens up is understanding how and to what extent ‘the human’ and its dignity are modified.

The challenge is great due to two complex sets of reasons: First, because the human is difficult to define, as its boundaries are always historically open; second, because digital devices are not mere tools but rather social forces that are increasingly affecting our self-conception (who we are), our mutual interactions (how we socialize), our conception of reality (our metaphysics), our interactions with reality (our agency), and much more (Floridi 2015).

In a previous contribution (Donati 2019), I supported the thesis that enhancement through digital technologies is more human the more it allows those intersubjective and social relationships that realize the humanization of the person. This argument is not found in most of the current literature, where enhancement is assessed with reference to the body and/or the mind of the individual and, in some way, to their relations, but not to social relations as such. The topic of ‘relational enhancement’, as I understand it, is underdeveloped, if not virtually unexplored.

The aforementioned thesis is motivated by how digital technologies increasingly change social and human relations. That is why, in a subsequent paper (Donati 2020), I proposed to analyze the processes of hybridization of social identities, relations, and social organizations in order to understand under which conditions the enhancement brought about by the digital revolution can shape organizational forms that are capable of promoting, rather than alienating, humanity.

The challenge posed by Matrix Land is that of a future society, however uncertain, in which the cognition of historical time will be lost and, with it, the classical (Euclidean) notion of space. The expansion of information and communication technologies changes the balance between the three social registers of time (Donati 2021: 203): the symbolic register (time without history), the relational register (the time a relationship lasts, the historical durée) and the interactive register (the evenemential time that lasts as long as the communication and then disappears). It is a question of evaluating the consequences of the passage from symbolic time to relational-historical time and then to interactive time, due to the speed and acceleration impressed by the digital media, which become the independent variable that redefines social time and space. As the speed and acceleration of life increase, time expands (people have the impression of living in a sort of eternal present that implies the idea of the end of history, the absence of past and future, and the cancellation of historical memory) while social space (social distance) is reduced. In this way, beyond a certain threshold of speed and acceleration of digital communication, social time and space are practically canceled.

Time and space become illusions. Virtual reality will prevail over human nature so that human beings will think that what previously appeared real to them was on the contrary pure illusion.[1] From the point of view of the radical supporters of the DTM, reality exists only in the mind. Virtual logic will supersede analogical thought. What then will be left of the human?

For those who are fully immersed in Matrix Land, human reality is not something to understand or explain in order to remedy some of its defects but only a set of images hidden in the back of the human brain, formed on the basis of electrical stimulations aroused by the perceptions of the five bodily senses. The senses capture all kinds of stimulations, which come from both human beings and from every other non-human entity, mixed in such a way that the human reality conceived in the brain takes on unprecedented characteristics. Which ones?

According to the developments in quantum physics and biogenetics, our processes of imagination will tomorrow allow us to create something that today seems impossible or purely imaginary. In Matrix Land, the Mind creates what future society will concretely make possible. For example, thinking that human beings can fly will lead society to allow them, in the near or distant future, actually to fly; obviously only once it has the right tools to make this happen.

In this contribution I would like to evaluate this perspective to understand what it implies from the point of view of what ‘being human’ could mean in Matrix Land.

The rationale of my argument is that, in order to achieve a truly human enhancement through digital media, it is not enough to improve the abilities and performances of an individual (body and/or mind), a social group, or an organization; it is necessary to verify that enhancement operations have positive repercussions on persons’ social – that is, ‘relational’ – life. I ask what kinds of social relations between humans are favored (or impeded) by digital technologies, and how the tools of digital enhancement affect human persons from the point of view of their intersubjective and social relations. Applying a digital device – no matter how intelligent – in order to improve the performances of an individual or a group of people is completely insufficient to affirm that this action of enhancement has properly human consequences. Under what conditions, then, can we say that enhancement based on digital tools respects or favors human dignity rather than putting it at risk or damaging it?

Enhancement, Digital Revolution, and Social Relations

During the first industrial revolution, in the cultural climate of the Enlightenment, the human being was often conceived as a machine (see L’Homme Machine by J.O. de La Mettrie, published in 1747). Yet, until the beginning of the 21st century, human relations were still regarded as distinct from mechanical relationships. The digital revolution threatens to erase this distinction. It is as if a new Enlightenment[2] were reformulating the idea of the machine and the idea of the human being within a single, conforming digital code. In this way, relationships between humans and those between humans and machines (or animals, or whatever) become assimilable.

Accordingly, one wonders: what difference is there between the relationality that connects human beings with mindless machines and the relationality between humans and machines equipped with an autonomous artificial mind?

The crucial point concerns the possibility that the distinction between the personhood of humans and that of smart machines might disappear (Warwick 2015), so as to decree the death of the old humanism focused on that distinction (Breslau 2000). No wonder that even the distinction between interhuman relations and other kinds of relations (e.g. with non-human living beings or material things) disappears. This is the putative miracle of the DTM. The I-Thou relationship theorized by Martin Buber can now be applied to the relations that people have with their super-computer, a bat, or extra-terrestrials, provided that they have a first-person perspective, since “thou-ness is not distinct to humans”.[3]


In order to understand the specific identity of the human mind, it is useful to assume that a mind, in the abstract, is an effect (a relational entity) emerging from the interactions among its constitutive elements working together. It is the product of three components: the brain + the stimulating factors (internal and external) + the autonomous contribution of the relations between the brain and the stimulating factors, this last being ‘the third’ component of the emergent effect that is the operating mind.

Does the AI platform’s mind have the same third component (the autonomous role of the connecting relations) as the human mind? My answer is negative: the human and artificial minds are two incommensurable orders of reality because of their structurally different relationality, both internally and externally.

Identity is formed in relationships and, vice versa, relationships are formed through identities, which means that the process of interactions can have different outcomes, depending on whether the process conflates identity and relations or instead distinguishes them analytically over time as realities of different orders. Not every kind of interaction leads to the fulfillment of the human person. Between an arrangement in which interactions are of a reproductive type (morphostatic) and an arrangement in which they are of a chaotic type (turbulent morphogenesis), there are innumerable configurations whose more or less humanizing character is difficult to appreciate. Consider, for example, the self-description of SoulMachines, a ground-breaking high-tech company of AI researchers, neuroscientists, psychologists, artists and innovative thinkers. The company aims at re-imagining what is possible in Human Computing with the following declaration on its website:

“We bring technology to life by creating incredibly life-like, emotionally responsive Digital Humans with personality and character that allow machines to talk to us literally face-to-face! Our vision is to humanize computing to better humanity. We use Neural Networks that combine biologically inspired models of the human brain and key sensory networks to create a virtual central nervous system that we call our Human Computing Engine. When you ‘plug’ our engaging and interactive Digital Humans into our cloud-based Human Computing Engine, we can transform modern life for the better by revolutionizing the way AI, robots and machines interact with people”.

It is then a matter of analyzing what kind of hybridization between the human being and the machine is produced by the different forms of enhancement, and what consequences are produced in social relations, and therefore in the whole organization of society.

I would like to analyze this topic by looking at how the historical evolution of technologies is changing both the natural order and the social order through the practical order of reality.

Confronting the Digital Matrix: The Emergence of the Humanted

The Transition to the Humanted

Human identity, and its humanization, passes through the relationality of the mind in connection with its internal and external environments. It becomes essential to understand how these relationships change in different technological environments.

In Table 1, I summarize the transition from the pre-DTM historical phase to the phase of the DTM’s advent and on to its further development.

(I) In the pre-Matrix phase, machines can be more or less sophisticated, but they are not ‘thinking’. Therefore, human beings use them as instruments that can be mastered, even if the users are also affected by the instruments they use. In any case, human relationships remain clearly distinct from machinic (automatic) relations. Knowledge and communication are of an analogical type. Society is still seen as the exclusive domain of human beings, who are supposed to be its architects and its ‘center’ (anthropocentrism).

(II) In the transformation phase, the traditional sectors of society that operate in analog mode (including analog machines) are increasingly replaced by the smart machines brought about by the digital revolution. Behind these innovations lies the visionary idea of a ‘society of mind’ that is at once cultural, scientific, and practical: to think of and configure society as one would build a Mind, one that works on the basis of innumerable elements that are in themselves ‘stupid’ but that, all working together, make ‘the whole’ – i.e. society itself as a mind – intelligent. According to Marvin Minsky, this is the idea behind the construction of both AI and the society it will create. In his words, the Society of Mind is a “scheme in which each mind is made of many smaller processes. These we’ll call agents. Each mental agent by itself can only do some simple thing that needs no mind or thought at all. Yet when we join these agents in societies – in certain very special ways – this leads to true intelligence” (Minsky 1988: 17). From my point of view, the DTM is the practical realization of this vision of society, in which agents are mere processes, neither reflexive persons nor social subjects capable of expressing and putting into practice intelligent and meaningful projects. Such a DTM imposes itself as an impersonal and anonymous force. The tendency to replace the analogical code with the digital one erodes the distinctions between human-human relations and human-machine relations, because human relationships are replaced by the operations of smart machines and assimilated to their logic and their characteristics.

Current technologically advanced societies represent a middle-step between a society where there is no artificial intelligence and a society in which smart machines are endowed with minds (i.e. autonomous cognitive processes), so that new kinds of ‘persons’ (like ‘electronic persons’ and virtual networked organizations) become ‘agents’ on their own.

In this transitional phase, human rights are increasingly at stake due to what Teubner (2006: 240-41) calls ‘the anonymous Matrix of communication’:

The human-rights question in the strictest sense must today be seen as endangerment of individuals’ body/mind integrity by a multiplicity of anonymous and today globalized communicative processes (...) Failing a supreme court for meaning, all that can happen is that mental experience endures the infringement and then fades away unheard. Or else it gets ‘translated’ into communication, but then the paradoxical and highly unlikely demand will be for the infringer of the right (society, communication) to punish its own crime! That means turning poachers into gamekeepers.

(III) A society driven by DTM seems to be the point of arrival of what Karl Marx called the administration of things by things. Political institutions and civil actors try to dominate the new technologies, but more and more they realize that the power of intelligent machines changes their way of thinking and relating to each other.

These changes are marked by the passage from (I) the analogical symbolic code of the early modern society, to the (II) binary code of the post-modern society, to the (III) quantum code (qubit) of the trans-modern society.[4]

What I want to emphasize is the transformation of social relations. (I) The analogical code is that of classical ontology and epistemology, in which symbols or models are applied to a constructed or artificial reality on the basis of analogies with a reality conceived as natural. What is thus achieved is a correspondence between two different phenomena governed by the same laws, which can therefore be subsumed under a single model: social relations are seen as natural. (II) The binary code refers to a dialectical ontology and epistemology in which 0 and 1 are used alternately to produce dynamic and dialectical states that can nevertheless generate a certain stability at the macro level under very particular conditions. Social relations become a constructed reality of a procedural and transactional character. (III) The quantum code (qubit) refers to a relational ontology and epistemology in which 0 and 1 overlap and intertwine (the phenomenon called entanglement) in procedural states generally lacking stability at both the micro and macro levels. The social relationship now becomes purely virtual. As Malo (2019) reminds us, social relationships have their own energy, what Aristotle called energeia. From my point of view, the energy of the social relation occupies, in the social sciences, a position analogous to that of the quantum in the physical sciences. Just as quantum mechanics provides only a discrete set of multiple values for a fundamental variable that cannot be further broken down, so does my relational sociology for social relations.
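The contrast between the binary and the quantum code invoked above can be glossed with the standard notation of quantum information theory (offered here only as an illustration of the metaphor, not as part of the sociological argument): a classical bit is either 0 or 1, whereas a qubit exists in a superposition of the two basis states, which is what allows them to ‘overlap and intertwine’:

```latex
% A classical bit takes one of two values: b \in \{0, 1\}.
% A qubit is a normalized superposition of the two basis states:
\[
  \lvert \psi \rangle \;=\; \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1 ,
\]
% and entanglement arises when the state of two qubits cannot be
% factored into separate states of each qubit taken alone.
```

Measurement yields 0 or 1 only with probabilities |α|² and |β|²; the superposed state itself is never directly observed, which is why the qubit serves the author as a figure for relations that are real yet irreducible to their terms.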


| (I) Before the Digital Matrix | (II) Transition to the Digital Matrix | (III) A society driven by the Digital Matrix |
|---|---|---|
| ‘Man architect’ (Homo faber) | ‘Self-constructed man’ (Homo sui-construens) | ‘Virtual (digital) man’ (Homo digitalis) |
| Analogical code (classic ontology & epistemology) | Binary code (dialectic ontology & epistemology) | Quantum code, qubit (relationalist ontology & epistemology) |
| The human being can design and master the machine, which is an instrumental and passive tool for practical activities | Technologies become more intelligent and autonomous, so that their relations ‘redefine’ the human being | Human beings become accustomed to digital relations and take digital features from them (generation of the humanted) |
| Identities and social relations are supposed to reflect taken-for-granted human features, since knowledge & communication are analogical | Identities and social relations become mentalized and hybridized, because knowledge & communication become digital (algorithms) | Identities and relations depend on the type and degree of reflexivity exercised on the processes of mentalization & hybridization of knowledge and communication (reflexive hybridization) |
| Society represents itself as immediately ‘human’ | Society represents itself as a ‘collective (digital) mind’ (separation between human and social) | Society represents itself as a multiplicity of social worlds differentiated according to the guiding distinction human/non-human |

Table 1 – How the Digital Technological Matrix progressively transforms humanness and society

One wonders where society is going. Certainly, as far as the human person is concerned, the result of this dynamic will be the emergence of an ‘augmented human’, which I call a Humanted (i.e., the augmented human): a human person modified by technologies who is both the product and the producer of the hybridization of society. The augmented human identity will enjoy a strengthening of natural abilities but will also experience new problems of relationship with herself, with others, and with the world.

What will its future configuration be when the DTM is further developed, to the point of acquiring autonomy with respect to human subjects? Obviously, a series of scenarios opens up here for a society led by the DTM. In short, these scenarios depend on two main processes.

The first process favors the mentalization of social relations and therefore of both personal identity and the representation of society. It makes Mind the cultural model for the whole society, replacing the old metaphor of society as industrialized labour, the one that the twentieth century called ‘machine civilization’ (Miller 1979).

The second process is the hybridization of social relations, which is closely linked to the first. It derives from the fact that social relations between humans, instead of being distinct from digital ones, tend to incorporate certain characteristics of the latter, and therefore hybridize. People are induced to think and act ‘digitally’ instead of analogically.

In the current use of AI platforms, there is something that binds the human person and the technological artifact while still differentiating them. They differ, yet they must ‘adapt’ to each other if they want to work together. This adaptation takes place precisely through the interactions and transactions they establish between themselves. Their feedbacks are interactive and transactional, but not strictly relational (Donati 2013), which means that each incorporates certain modes of operation of the other, but the relationship remains problematic.

The problem can be understood in the words of Melanie Mitchell (2019), who states that machine learning algorithms do not yet understand things the way humans do – with sometimes disastrous consequences. Current progress in AI is stymied by a barrier of meaning. Anyone who works with AI systems knows that, behind the facade of humanlike visual abilities, linguistic fluency and game-playing prowess, these programs do not – in any humanlike way – understand the inputs they process or the outputs they produce. The lack of such understanding renders these programs susceptible to unexpected errors and undetectable attacks.

If one argues that personhood is not in principle confined to entities that have a human body (or are traceable to human bodies, as in moral persons), or that it is compatible with changing any part of the human body because personhood consists in possessing the first-person perspective, as Baker claims, the consequence is that personhood is mentalized. Mentalization consists in the fact that the intersubjective production of meanings (semiosis) is made virtual (Arnold 2002). The mentalization and hybridization of identities and social relationships promote the anthropomorphic attribution of human characteristics to realities that are not human. The personification of robots, for instance, is precisely a strategy for dealing with uncertainty about their identity, one that shifts the pattern of attribution of identity from the causality induced by humans to that of the double Ego-Alter contingency, which presupposes the robot’s self-referentiality. The question is: does this self-referentiality produce the same emergent effects as inter-human relations?

The relationship between the human person and the AI/robot becomes a mind-to-mind relationship. Deprived of a correspondence between two human bodies, the emotional, sentimental, and psychological dimensions become an enigma. By losing the relationship with their specific bodily support, dialogue, conversation, and communication assume the character of a simulated, emulated, faked, or fantasized reality. If I think of relating myself to a star in a galaxy billions of light-years away, I imagine a relationship that is purely mental, but one that has an effect on me, because it redefines my identity.

As Henri Atlan (1985: 96) wrote:

“What in fact drives us to place the barrier arbitrarily between human beings and everything else is the immediate experience of a skin, a body or words, which we have of another system, external to ourselves. This experience is pre-scientific or post-scientific, and it is a concern for an ethics of behaviour, more than for objective knowledge, that makes us place intentions, projects and creativity, and at the same time responsibility and freedom, inside a skin that envelops a body which, as it happens, more or less closely resembles my own”.

For example, by attributing personality to a robot or AI, sexual identity is mentalized, since it no longer corresponds to a defined body but to an indefinitely hybridized medium. Entrusting family, friendship or business communications to an AI/robot instead of face-to-face relationships leads to mentalizing relationships rather than considering their concreteness, their materiality.     

Supplementing the first-person perspective by adding reflexivity and concerns in order to delineate personal and social identities can help avoid these outcomes to some extent, but it is not enough to make social relationships adequate to meet the human needs related to physicality. As many empirical investigations reveal, relationships between family members who frequently prefer to communicate through the internet rather than face-to-face gradually take on the virtual logic of social networks: interpersonal relationships are decomposed and recomposed (unglued and re-glued) and become more fragile, while communications are privatized on an individual basis. In sum, family relationships become mental rather than analogical (Cisf 2017). If a person does the daily shopping in a supermarket through the internet rather than going in person to the shops and meeting other people, she ends up impoverishing her human relationships and unwittingly absorbs a relational logic that is hybridized with the way the supermarket app operates. The private lifestyle, at least in consumption, is made accessible to the knowledge of strangers, and the boundaries between the private and public spheres collapse. The strength of the DTM is nourished by the diffusion of a mentalized environment of reference common to all those who communicate, which, moreover, is retained and manipulated through big data. People who communicate outside the DTM become socially irrelevant.

The Process of Hybridization

A society driven by DTM can evolve in various directions. In my opinion, the scenarios for a ‘digital society’ will be different: (i) depending on the type and degree of control and mastery that humans will have over DTM; (ii) according to the type and degree of reflexivity that people exercise on the processes of mentalization and hybridization of relations; and (iii) according to the forms of governance of the organizations and economies that use DTM.

Society will be less and less interpretable as human in a direct and spontaneous way, because human relations will be increasingly mediated by DTM. For all that, the human does not disappear, but what was once called ‘human society’ must be intentionally re-generated as a ‘society of the human’, characterized by being produced through relational reflexivity on the human, through new distinctions between the various forms of social relations that generate different types of society. So-called human society has been swept away by functional differentiation (Luhmann 1990), and the ‘society of the human’ can emerge only through a supra-functional relational differentiation able to challenge the cyber-society.

If an organization or social network wants to maintain the basic characteristics of the human, it will have to develop a culture and practices that give people the ability to reflect on the hybridization of social relations in order not to become the slaves of machines.

The problem is that of maintaining and empowering human agency, which is threatened by a social structure (the hardware of DTM) that has become the engine of change: it bypasses agency by continuously adapting the cultural system to itself, and that cultural system in turn overrides human agency without giving it the ability to exercise its personal and relational reflexivity.

I summarize this process in Figure 1, which formulates, within the framework of relational sociology, the SAC (structure-agency-culture) scheme of morphogenesis suggested by Archer (2013: 1-21), so as to meet the demands for greater clarification (Knio 2018) concerning the key role of relations in the double and triple morphogenesis of agency.

In short, the core idea is that, when human agency is unable to influence the structure, the latter determines the morphogenesis of agency in such a way as to reduce or prevent its reflexive capacities. In this case, the structure directly modifies the cultural system without agency reacting, and in this way the dominance of DTM is reinforced, hybridizing identities and social relations. Hybridization proceeds to the extent that reflexive agency is blocked, so that the structural changes of DTM can alter cultural processes without resistance and continuously reshape the identities and relationships of human persons. The human person thus becomes a passive Humanted.

The case of the young people called hikikomori is a good example in this regard. Hikikomori refers to reclusive adolescents or adults who withdraw from society and seek extreme degrees of isolation and confinement. Estimates suggest that half a million Japanese youths have become social recluses, as well as more than half a million middle-aged individuals. Although these people present personal psychological disorders, empirical research has shown that the hikikomori syndrome is powerfully exacerbated by digital communication technologies, such as the Internet, social media and video games.[5] Many of them show signs of Internet addiction. Video games and social media tend to reduce the amount of time that people spend outside and in social environments that require direct face-to-face interaction. The emergence of mobile phones and then smartphones has deepened the problem, given that people can continue their addiction to gaming and online surfing anywhere, even in bed.

Many examples of humanted people can be traced to the influence of ICTs (e.g. the internet and social networks) on phenomena such as the change of one’s sexual identity, cybersex relations, the ways women and men use online role-play to explore and change their gender, identity, and sexuality, and the ways people modify their couple and family relationships by absorbing into them the characteristics of non-human entities, becoming actants (in the sense of Bruno Latour’s ANT theory).

Digital technology allows the men and women of the third millennium to be without constraints, to technomediate the relationship without being in relationship, to connect and to build liquid, changing, and at any moment fragile bonds, devoid of substance and verification, ready to be interrupted. The DTM combines Musil’s man-without-qualities with today’s man-without-bonds, in a sort of continuous overlap between analog and virtual reality that defines the new horizon of the theme of human identity. The crisis of male and female identity is its most obvious expression. Identity, that is, the idea that each of us has of him/herself and the feeling that each of us has of him/herself, is therefore in deep crisis, and the new paradigm is the ambiguity proper to the identity of the humanted (the human being enhanced by technology), often seen as a transition to the cyborg. The fundamental characteristic of technoliquid sociality is the pervasive technomediation of the relationship, which changes identities.

Leaving the field of the phenomena just mentioned, we can find other types of hybridization in the field of organizations and work. Take the case of the Boeing 737 Max-8 aircraft that crashed in recent years (for example, the Ethiopian Airlines flight that crashed near Addis Ababa airport on March 10, 2019). True or not, one of the explanations for the accident was that the aircraft’s software – that is, the AI that had to monitor it – forced the pilot to perform certain operations, not left to his discretion, in order to avoid possible terrorist hijacking. In the presence of an unexpected event (probably fire on board), the AI did not allow the pilot to perform the maneuvers that could have prevented the crash. In a sense, the pilot’s identity (humanted) and his relationships in maneuvering the plane were hybridized by the AI. This example is emblematic of all those cases in which an AI, although created to ensure the achievement of positive ends, prevents the use of relational reflexivity by those who drive the machine (passive humanted) and leads to a negative outcome for the action system or organization. The remedy is then sought not in strengthening the human agent (the pilot as proactive humanted), but in designing a more sophisticated AI that can replace him. A well-known case is that of managers who entrust an AI with the task of establishing the duties and shifts of company employees. Since the AI does not allow the manager to use adequate personal and relational reflexivity (weakened or impeded agency in Figure 1), a great deal of employee dissatisfaction and an overall negative business climate are generated (cultural domain), which leads to seeking a remedy by replacing employees with robots, AI or other artificial instruments (structural domain).

We can say that being human in Matrix Land means having the chance to exercise the qualities and causal powers of human agency in such a way as to react to structural and cultural conditioning by reflexively redirecting social relations towards human persons. In Figure 1, this means empowering the weak relations (dotted lines) and making them stronger and proactive (solid lines). To exert effective reflexivity, agency needs a favorable social environment in which to configure itself. In other words, to put the reflexive imperative into practice, it is necessary to satisfy the relational imperative, that is, to face the concrete needs that a non-virtual relationship with the Other presents to us. This implies control and regulation of the conditioning social structure in order to prevent it from colonizing the cultural system in such a way as to bypass human agency.

When human agency, although influenced by DTM, can react to the latter with an adequate relational reflexivity (strong relationality), we see the emergence of a proactive augmented human (Humanted), as in Figure 2.

The future of the world depends on the types and degrees of mastery over smart digital machines (ICTs, AI, platforms, robots). I cannot discuss here the various political, economic, organizational, and legal instruments that can serve this purpose at the macro and meso levels.[6] Al-Amoudi (2019: 182) has made clear how “managerial practices have contributed to dehumanising contemporary societies, and that management studies bear an important share of the blame”.

To counter this drift, we should understand the importance of the ontic necessity of relations in organizational studies.[7] Studies on human-robot interaction (HRI) are very important for assessing the relational implications. At the micro level, it is necessary to develop a cybernetic literacy that does not limit itself to educating the individual as such, as proposed by Pierre Lévy (1997), but also addresses the way the networks in which individuals are embedded operate. Only in this way will we be able to prevent DTM from producing new segmentations and inequalities between social groups, due to the new divides created by the differentiation of the networks of social relations and the differentials in cyber literacy between people. It is at this point that we need to address the discourse on human dignity and human rights in the face of a society led by DTM.

Many wonder whether the human-AI robot relationship is different from the human-human relationship. Some believe there are no differences (e.g. M.S. Archer). In my opinion, however, human-AI robot interaction will never be able to create a relationship similar to the interhuman Ego-Alter one. This impossibility is due to two reasons: first, human action and the behavior of AI robots are radically dissimilar, because the mind-body relationship in the human person is not replicable in the robot’s mind (AI)-machine relationship; secondly, since the relationship is an emergent phenomenon, the relationship that emerges from human-human interaction necessarily has qualities and causal properties different from those that can emerge from human-AI robot interaction.

When and How Can an Organization Using Digital Technologies Achieve Human Enhancement?

When Is the Enhancement Pursued Through Hybridization Human?

How do we distinguish when the enhancement practiced by an organization that works with digital technologies is humanizing rather than dehumanizing or even non-human?[8]

To make these distinctions, it is necessary to clarify what is meant by ‘human’ applied to the effects that enhancement technologies have on people and their relationships in a hybridized organization.

I do not want to enter the debate about the potential comparability of AI platforms or robots and humans. I limit myself to observing that human ontology is incommensurable with the ontology of artefacts. Even if AIs can be made ‘sentient’, their subjectivity can never be human. I say this because I believe that personhood exists only in the relationships both between mind and body and between the person and the surrounding world.[9] Human dignity exists, and is to be protected and promoted, in its social relationality. Baker’s argument according to which “artefacts have as strong a claim to ontological status as natural objects” (Baker 2004: 112) must be subjected to critical examination, because the ontological status of the human body cannot be equated to that of an artefact like an AI.

In a sense, even social organizations are artifacts, to which we attribute a legal personhood.[10] Organizations equipped with smart machines increase their intelligence and creativity to the extent that they are hybridized, that is, to the extent that human subjects increase the ‘awareness of their consciousness’ precisely because they use AI platforms or robots. Using intelligent artefacts allows workers to be more available for non-routine and more creative action. The question is: in what sense and in what way do these organizations ensure the human qualities of the relationships between their members, and of the relationships that those who benefit from the organization’s activities will have?

To answer this question, it is not enough to refer to the ability of the individual members of the organization or its customers. It is necessary to consider the quality of the relationships created between the members of the organization and the quality of the relationships that its customers can put in place following the use of AI platforms and robots.

To understand hybridized organizations in the sense intended here, personhood must not be defined by its individual and self-referential abilities but in a relational way, so as to distinguish between the different types of relationships that are created in the organization with the introduction of intelligent machines.

In short, while individual human personhood requires possession of the first-person perspective, when we refer to a social organization as a relational subject we must reason in terms of relational personhood. To manage the hybridized relationships of an organization in a human way, individual personhood must be the expression of a mind that works in connection with a human body (O’Connor 2017), able to reflect not only on itself and on its context, but on the relationship to the Other as such.

In hybridized social relations, characteristics of the interhuman relationship – which is structured according to the Ego-Alter double contingency – coexist with characteristics of the Ego-It digital relationship. In the latter, the contingency expected on the part of It is drastically reduced compared to the complexity of the Ego-Alter double contingency.[11] If an organization hybridized by new technologies wants to avoid the reduction of Ego-Alter relationships to I-It relationships, it must maintain the high level of contingency of the Ego-Alter relationship. This requires the adoption of a second-person perspective beyond that of the first person, necessary to communicate sensibly with the Alter, and in particular to recognize Alter’s differences and rights (Darwall 2007). In my opinion, even granting (without conceding) that an AI can act according to the first-person perspective, in order to play the role of Alter (and vice versa of Ego) in the relationship with a human person, the AI should be able to assume the second-person perspective. The second-person perspective implies that the agent (in this case the AI) should be able to act as a “Self like an Other” (Soi-même comme un autre),[12] which means that the AI should act like a human being and, as such, evaluate the good of the relationship (the relational good between Ego and Alter). This is impossible as long as the AI does not have the same constitution as a human being.

In opposition to this statement, some scholars think that sentient AI can be (or become) capable of ‘reflecting’ on the Other and/or the relational context as if it were an Alter in the Ego-Alter relationship. At this point we face the problem of clarifying whether or not there are differences, and if so what they are, between humans and AI in social life and, consequently, between the dignity of the one and the other.

The Talk About Personhood and Human Dignity

Charles Taylor (1985: 102) observes: “what is crucial about agents is that things matter to them. We thus cannot simply identify agents by a performance criterion, nor assimilate animals to machines …. [likewise] there are matters of significance for human beings which are peculiarly human and have no analogue with animals”. I think the same is true when it comes to AI, and not just animals.

One of the things that matter is social relations. They have a significance for human beings which is peculiarly human and has no analogue with animals or AI. To clarify this point, I suggest drawing a parallel between the distinction between first- and second-order desires made by Harry Frankfurt (1971), as essential to the demarcation of human agents from other kinds of agent, and the distinction between first- and second-order relations.

Human beings are not alone in having desires and motives, or in making choices. They share these things with members of certain other species, some of which even appear to engage in deliberation and to make decisions based on prior thought. This is possible also for AI. What is distinctive of a human person is the capacity to have a second-order relation when she has a desire or makes a choice whose object is her having a certain first-order relation. The first-order relation is an expression of inner reflexivity that can be present at certain times also in some animals and perhaps in some future AI, but only the human person can have second-order relationships that are an expression of relational reflexivity (Donati 2013). After all, the higher morality of human agency does not lie in the first-order relationship, but in the second-order relationship.

This point of view is particularly important because AIs can be actors in new types and forms of relationships that differ greatly from the relationships that animals can have with human persons. Human-animal relationships belong to the natural order, while human-digital artefact relationships belong to the orders of social and practical reality of applied technologies. Actor-network theory is flawed precisely because it conflates all these orders of reality.

The ‘relationality criterion’ should not be understood as a ‘performance criterion’ or another behaviourist criterion. G.H. Mead’s view, taken up by R. Harré and others (see Jones 1997: 453), that selves exist solely in lived discourse and derive their dynamics and intentionality from speech acts is fallacious precisely because social relations also exist without linguistic acts and they reflexively influence the self also in an indirect or unintentional way. In my view, the ‘relationality criterion’ becomes more and more important and significant precisely because the DTM dramatically amplifies the phenomena of hybridization of social relations and, more generally, it is the causal factor of a huge ‘relational revolution’ in the globalized world (Donati 2012).

As far as I know, no scholar has dealt with the issue of distinguishing human-AI relations from human-human relations on the basis of a general theory of the qualities and causal properties of social relations in themselves, both in terms of dyads and of complex networks.

Some suggestions can be found in the thesis advanced by David Kirchhoffer (2017) who rightly argues that the problem of dignity talk arises because proponents of various positions tend to ground human dignity in different features of the human individual. These features include species‐membership, possession of a particular capacity, a sense of self‐worth, and moral behaviour. He proposes a solution to this problem by appealing to another feature of human beings, namely their being‐in‐relationship‐over‐time.

This perspective can enable us to understand dignity as a concept that affirms the worth of the human person as a complex, multidimensional whole, rather than as an isolated undersocialized entity (rational choice theory), or a juxtaposition of ‘dividual’ features (Deleuze), or the product of functional differentiation (Luhmann) (see Lindemann 2016). Kirchhoffer elaborates his argument by observing that the concept of human dignity can serve both a descriptive and a normative function in the enhancement debates. At a descriptive level, asking what advocates of a position mean when they refer to human dignity will reveal what aspects of being human they deem to be most valuable. The debate can then focus on these values. The normative function, although it cannot proscribe or prescribe all enhancement, approves only those enhancements that contribute to the flourishing of human individuals as multidimensional wholes.

One can agree with the idea that a person’s ontological status rests on being a centre of value, ‘integrally and adequately considered’, but the foundation of such worth remains obscure. What is missing in Kirchhoffer’s argument is a clarification of which values are distinctive of the human and which characteristics the relationships that make them flourish must have. The argument that human dignity stems from the fact that the human person is a multidimensional whole is necessary but not sufficient. We need to enter into the analysis and evaluation of the vital relationality that characterizes that ‘whole’ and makes it exist as a living being that has a structure and boundaries, however dynamic and morphogenetic.

Generally speaking, in the so-called ‘relational turn’ of the last two decades mentioned by Raya Jones,[13] social relations have almost always been understood as interactions and transactions, rather than as ‘social molecules’ to which human qualities and properties can, or cannot, be attributed. When social relationships have been observed as more substantial, stable and lasting phenomena, their characteristics have been treated in terms of the psychological (mainly cognitive) qualities deriving from the related terms, that is, human persons and AIs. The attributions of qualities and properties to human/AI relationships as such are, mostly, psychological projections by human persons onto entities to which an ontological reality is attributed that is the result of subjective feelings and mental abstractions.

In short, social relations have been treated as psychological entities, instead of being considered emergent social facts in which we can objectively distinguish human characters from those that are not. It is instructive, for example, that, in speaking of the relational turn, Jones refers to authors such as G.H. Mead and Lev Vygotsky. She quotes Charles Cooley’s saying “Each to each a looking-glass / Reflects the other that doth pass”, considering it the premise of interactionist relationalism, and she then appreciates Turkle’s perspective according to which “in the move from traditional transitional objects to contemporary relational artefacts, the psychology of projection gives way to a relational psychology, a psychology of engagement”.

Jones wholly ignores all those perspectives according to which social relations cannot be reduced to social-psychological traits. The studies she cites only show that a growing number of scholars treat human/AI relationships the way they study relationships between humans and domestic animals, thinking that AI will do better than animals. Of course, those who love dogs or cats treat them as human beings: they grow fond of them, talk to them every day, adapt their own relational life to the animals’ needs, and so on. But dogs and cats are not human beings. Of course, AIs can have many more human characteristics than dogs and cats: they can take their turn in talking, they can reciprocate smiles and gestures of sympathy, they can carry out orders much better than any other pet. But they cannot have that ‘complex’ of qualities and causal properties that makes up the human and generates other kinds of relationships, which are ontologically – not just psychologically – distinct from those with animals. The problem is that the studies cited by Jones lack a generalized paradigm defining precisely and substantively what is meant by a social relation.

Jones (2013: 412) suggests that, perhaps, in the future, “robots may enter as relational partners” as if they were human. She does not distinguish between social relations between humans and relations between humans and artefacts from the point of view of social ontology. She seems to share the idea that “it is the human’s perception of the relationship that humanizes the machine” (Jones 2013: 415), thus demonstrating that she treats social relations as psychological projections, even when she criticizes individualist interactionism to affirm what she calls relational interactionism and ecological relationalism.

Social relationships are not human just because we think of them as human, even if, according to the Thomas theorem, considering them human leads to certain practical consequences. It is precisely these consequences that allow us to distinguish when social relationships are human and when they are not. Two examples can be mentioned: one in educational AI robotics, where the use of AI robots can cause harm (e.g. psychological and relational disorders) to children (Sharkey 2010); the other in assistive AI robotics, where elderly people refuse robots, saying that they do not respect their human dignity precisely because they cannot replace human relations (Sharkey and Sharkey 2010; Sharkey 2014).

A litmus test will be the case in the future of sexual relations between humans and AI robots. We will have to check whether the sexual relations between humans and AI robots are as satisfying as those between humans, even if the latter are not always humanizing.

If we split personhood, defined in the moral sense I have just indicated, from humanness, by attributing moral personhood to non-human entities, the boundaries between human and non-human are lost (Donati 2021). No humanism is then sustainable any longer. This is what those who attribute moral qualities to non-human animals and, potentially, to post- or trans-human beings do. The conflation of the human, the infra-human and the super-human must then be legitimized on the basis of some evolutionist theory (be it materialistic, like the Darwinian one, or spiritualistic, like that of Teilhard de Chardin) according to which a novel species or genus of hominid will be born beyond homo sapiens (the theory of the singularity). This means adhering to some utopia of the mutation of human nature. For critical realism, this mutation is not possible, because the utopia on which it rests is not concrete. If posthuman beings are created, even if they have a superior intelligence, their personhood will no longer have any properly human character. They will be beings alien to the human; that is, to be more explicit, they will no longer have the ‘relational complex’ that characterizes the human.

I think the opposition between transhumanists and bioconservatives is misleading. Bostrom’s (2005) proposal to elaborate a concept of dignity that is inclusive enough to apply also to many possible posthuman beings (“future posthumans, or, for that matter, some of the higher primates or human-animal chimaeras”) is confusing, because it makes no sense to attribute a single concept of dignity to human and non-human beings. Certainly, dignity implies respect and the recognition of a certain worth, but the kinds of respect and worth are not the same for humans and non-humans. Every existing species of beings (living and non-living) has its own dignity (Collier 1999), but it is different for each of them. A unique concept would lead to indifferentism and relativism in moral choices. Rather, it is necessary to use a concept of dignity that is differentiated for each order and layer of reality. The relational proposal is, in fact, to define the concept of dignity relationally, depending on the qualities and causal properties of the relationships that each being realizes or can realize. Thus, social organizations like hospitals should adopt a relational perspective if they want to be humanizing. A person X can receive a new heart (or another organ) by transplant, if she needs it, but her relationship to the transplanted body will not be the same as it had been with the original.

Is it the same person? Sure, but the person X must recompose her identity with the new body. Undoubtedly this requires the activation of her mental abilities (the exercise of the first-person perspective, reflexivity, and endorsement of concerns), but her mental abilities that allow for self-consciousness are not enough. She has to elaborate a certain virtual relationship with the figure of the donor, which implies affective and symbolic elements of relationship with this Other that has become part of her bodily identity. That person finds herself still ipse (in her capacity to be still the same person), but not idem (she is not equal to what she was before), because the transplant, by changing the body, has changed her relational identity (with herself and the others): “I am still the same, but different”. It is this relational ability to maintain the same identity while changing it that characterizes the personhood of the human subject, beyond her cognitive abilities. This is what distinguishes the human from the artificial personhood: the human actualizes in the same subject ‘being for oneself’ and ‘being for others’ at the same time. As I have already said, in principle the AI can perform the first operation (being for oneself), but not the second (being for others), because, to be able to implement second-order relational reflexivity, it should have the same relational nature of humans.

If we admitted – hypothetically – that a super AI can have a cognitive sense of the Self, it would still not be able to manage the double contingency inherent in this relationality, which is beyond its reach (see, for instance, Eva’s behavior in the movie Ex Machina).

Something similar happens when the interpersonal relationships between the members of an organization are mediated by AI in such a way as to change the identity of people in their social roles.

In short, from my point of view, in order to evaluate whether an organization providing enhancement is more or less humanizing, or not humanizing at all, it is necessary to adopt a relational optic, i.e. to assess the effects of the organization’s intervention both on the relations between the body and the mind of the person and on the specificity of her interhuman relationships with respect to other types of relationships.

This perspective is essential when we analyze the use of digital technologies for the enhancement of people working in complex networks or organizations. In that case, we need to see how technologies – such as AI – influence the most important resource of a social organization, i.e., the production of social capital and relational goods rather than the consumption of social capital and the feeding of relational evils.

Redefining the Human in Hybridized Organizations

Usually, ‘hybrid organizations’ are understood as networks based upon partnerships, open coordination, co-production, community networking, and the like between different sorts of organizations. They are social configurations intertwining system and social integration. In this context, I define ‘hybridized organization’ as a social form comprising multiple people linked together by a collective endeavour and connected by digital technologies both internally and with the external environment. Digital technologies are included in the system integration side, while human relations are ascribed to the lifeworlds of social integration.

We can observe what happens in organizations like a family, a school, a corporation, a hospital, a civil association when they are hybridized by digital technologies.

First of all, AIs are changing the relational context by adding relations that can complement or replace interpersonal relations. Reporting the results of empirical research on what happens in families, schools, hospitals, corporations, retirement homes for the elderly, and so on, would take too long.

Technology is now able to recognize our emotions and our tastes. It studies our behavior through algorithms and big data, thus directing the choices of individuals. To counter the constraints imposed by the technological market, it is necessary to relate to DTM with meta-reflexivity and resort to relational steering (which I will mention momentarily).

My argument is that the performances of digital technologies introduced for enhancement purposes should be considered as factors that always operate in a defined relational context, and that work in a more or less human way depending on whether they generate a relational good or a relational evil.

If we assume that society “is relationship” (and not that it “has relations”), the qualities and properties of a concrete society and its organizational forms will be those of its social relations. The transformations of the forms of social organization in which the relationships are mediated by technologies (AI platforms or robots) must be evaluated by how they help the production of those social relations that establish a virtuous circle between social capital and relational goods (Donati 2014).

The decisive level for this evaluation is that of meso contexts, intermediate between the micro and macro levels. Biologists tell us today that cancer is a tissue problem, that is, a problem of the network, not of the single cell (a node of the network). If a cancer cell is placed in an egg, the cell returns to normal. The meso relational context is also decisive in human behavior: both the pathology and the good of human behavior lie not in the single node (the individual), but in the relational network.

The type of organization or social network and its dynamics depend on the agents’ ability to make sustainable over time those innovations that include new technologies, that is, on whether or not the agents are able to exercise a reliable relational reflexivity on their hybridized relationships, so as to produce the social capital necessary to generate relational goods.

This is my proposal to counteract the trend, rightly denounced by Ismael Al-Amoudi (2019: 182), of “managerial practices contributing to dehumanising contemporary societies”, for which “management studies bear an important share of the blame”. Relational goods are common goods which can be produced only in networks that are organized in such a way as to share decisions and responsibilities according to styles of collegiality (Lazega 2017).

If a social or political movement entrusts decisions to an algorithm that limits itself to gathering the voting preferences of individual members and decides on that basis, how will the behavior of individual members (primary agents) and that of the movement as a corporate agent change? Experiments of this kind are still rare. One of them is the Five Star Movement in Italy, which apparently has a democratic organization but is in reality governed by those who master the algorithm.

The fact is that using the web to build democratic social movements is problematic. For example, we have research on how social networks worked in the case of the various Arab Springs. Apparently, these were democratic movements, but their results were very different from building a democracy. The reason is that such networks were not organized so as to produce relational goods, but were simply aggregations of masses of individuals sympathetic towards a collective protest action. In my opinion, and contrary to what Carole Uhlaner (2014) claims, the Arab Springs fed by the web were not an expression of the creation of relational goods, because these social networks did not realize the emergent effect they were hoping for, so much so that non-democratic systems arose from them.

What is certain is that AI platforms and robots cannot create social capital per se. They cannot define our well-being, and they cannot create relational goods, such as trust or friendship. There can be no “we believe” between humans and AIs. They can certainly adapt the content of their information and messages of various kinds to individuals (as Graber 2016 claims), but only on the basis of the algorithmic identity of the recipient.

The risk of a society or social organization driven by a DTM environment is to become a ‘mental relation’ populated by disembodied minds. This gives rise to opposing feelings. On one side, for instance, the Dalai Lama is quite happy to “contemplate the karma of digital technology while leaving geeky details to the younger crowd”,[14] while on the other side, people like Chamath Palihapitiya,[15] a venture capitalist born in Sri Lanka, raised in Canada, and a Facebook employee for a significant span of his life in Silicon Valley, claims that “social networks are destroying how society works” and that he feels “tremendous guilt” about his work. “It (Facebook) literally is at a point now where I think we have created tools that are ripping apart the social fabric of how society works” (…) “We are in a really bad state of affairs right now in my opinion, it is eroding the core foundations of how people behave by and between each other”.

The assessment of the human character of people’s enhancement in hybrid organizations should be done in the light of the criterion that the empowerment to act is viewed as arising from interaction within mutually empathic and mutually empowering relationships. The importance of technologies in human enhancement lies in creating and sustaining relationships and relational contexts that empower people in all life activities. The benefits of hybridization are to be assessed based on how much the technologies favour cooperative strategies and are sources of interorganizational competitive advantage (Dyer and Singh 1998).

It is important to place these phenomena in the frame of cultural processes. At the moment, the hybridization of identities, relationships and organizations takes place in different ways in the so-called Eastern and so-called Western cultures, which are apparently opposed. In the East (Asia), cultures are inspired by a hierarchical relational matrix on which all transactions depend. In this case, relationships drive functional performance (Yeh 2010; Liu 2015). In the West, on the contrary, relationships are reduced to performances within an individualistic cultural matrix. The prevailing culture treats relationships as instrumental entities to be used to improve management efficiency. The result is the commodification of social relations (Pawlak 2017).

Today we are witnessing a confrontation between the different ways in which these two cultures develop and use technologies. In the long run, however, it is likely that the cultural environment of the DTM will proceed towards forms of hybridization between Eastern and Western cultures. The Western individualistic and private model of Silicon Valley is already taking on the characteristics of an unscrupulous managerial and financial model like that of China (Morozov 2011).

Conclusions: Being Human Before and After the Matrix

All cultures and societies must now confront the alternative between considering humanism dead or redefining the human in the new digital environment. The first solution makes what is properly human residual and places it within the environment of the DTM. The second solution challenges the DTM as the main driver of society and puts technologies back at the ontological level of means, rather than of first drivers. This turn can only be accomplished by managing the hybrids (hybridized identities, relations, and organizations) through distinctions that are defined by and within a social relational matrix based on critical realism, rather than as an expression of a constructivist digital matrix.

The AI used for technological enhancement can only simulate the human and cannot be substantially human. The reason lies in the fact that AI cannot understand (Verstehen), that is, attribute a meaning to what it thinks or does, because it does not have a relationship with the real thing (existing in itself). If the AI could recognize the Other (the non-Ego), that is, put itself in the Other’s shoes, it would have an Ego able to relate to an Other distinct from itself. But AI cannot have this capability, because the AI relationship is just a communication of information according to a symbolic code in which the Ego is split from the non-Ego. This code reads the ‘inter’ (that is, the relationship between the subjects) as a factor added to the two terms of the relationship, i.e. as one more thing, and not as the emergent effect of their actions, on which they should be reflexive.

Traditional personalism (I do not like this word, but I use it because it is part of a historical debate), as a cultural model developed before the advent of the digital matrix, had a non-relational, substantialist character. It cannot be supported any further. The person must now be conceived in relational terms. However, a new confrontation arises here between those who reduce the person to relationships, and relationships only to communications, and those who maintain that the person cannot be dissolved in communications because, if it is true that communications form the person, they cannot replace her nature. We can grant the status of legal persons to artificial beings, but we cannot inject human nature into them.

In this chapter, I have put forward the thesis according to which the human/non-human distinction is revealed in the kind (qualities and causal properties) of the social relationality that digital technologies and their use favour or not. In short, it is about evaluating whether the technological mediation between human persons and their social organizations promotes or inhibits those relational goods that realize human fulfillment. The challenge of existing as human beings in the future Digital Matrix Land will be to face the relational imperative: how to distinguish between social relations that are human and those that are not.

AI platforms and robots will certainly become ‘social beings’, but not human beings. The historical process is destined to differentiate more and more human social relations from non-human social relations. Lawrence, Palacios-González and Harris (2016: 250) rightly warn that “our possible relations to AI persons could be more complicated than they first might appear, given that they might possess a radically different nature to us, to the point that civilized or peaceful coexistence in a determinate geographical space could be impossible to achieve”.

In conclusion, why is the human person-AI relationship different from the relationship between human persons? Why is there no ‘we-believe’, no ‘we-ness’, no ‘we-relation’, no relational goods between humans and AIs? I justified my negative answer with the argument that, even if it were possible to have new artificial beings capable of some reflexivity and of behaviours suitable to a first-person ethics, these two criteria would not be sufficient to distinguish the person-person relationship from the person-AI relationship. To see the distinction between the various types of relationships, we need to resort to relational reflexivity, which differs in nature from individual reflexivity because it is based on a second-person ethics. This distinction between forms of reflexivity corresponds to the distinction between two types of personalism: classical personalism, for which the person transcends herself in her own action, and relational personalism, for which the person transcends herself in the relationship with the Other. After the Digital Matrix has covered the globe, perhaps we all will become humanted, but the relational criterion will be even more discriminating than in the past.


Al-Amoudi, I. (2019) ‘Management and de-humanization in late modernity’. In I. Al-Amoudi and J. Morgan (eds.) Realist Responses to Post-Human Society: Ex Machina, pp. 182-194. Abingdon: Routledge.

Archer, M.S. (2013) ‘Social Morphogenesis and the Prospects of Morphogenic Society’. In M.S. Archer ed. (2013) Social Morphogenesis, pp. 1-21. Dordrecht: Springer.

Arnold, M. (2002) ‘The Glass Screen’. Information, Communication & Society 5 (2): 225-236.

Atlan, H. (1985) ‘Intelligence Artificielle et organisation biologique’. Les Cahiers du MURS 3: 67-96.

Baecker, D. (1999) ‘Gypsy reason: Niklas Luhmann’s sociological enlightenment’. Cybernetics & Human Knowing 6 (3): 5-19.

Baker, L.R. (2004) ‘The Ontology of Artefacts’. Philosophical Explorations 7: 99-112.

Bostrom, N. (2005) ‘In defence of Posthuman Dignity’. Bioethics 19 (3): 202-214.

Breslau, D. (2000) ‘Sociology after Humanism: A Lesson from Contemporary Science Studies’. Sociological Theory 18 (2): 289-307.

Cantelmi, T. (2013) Tecnoliquidità. La psicologia ai tempi di Internet: la mente tecnoliquida. Cinisello Balsamo: San Paolo Edizioni.

Cisf ed. (2017) Le relazioni familiari nell’era delle reti digitali. Cinisello Balsamo: San Paolo.

Collier, A. (1999) Being and Worth. Abingdon: Routledge.

Darwall, S. (2007) ‘Law and the Second-Person Standpoint’. Loyola of Los Angeles Law Review 40: 891-910.

Donati, P. (2012) ‘Doing Sociology in the Age of Globalization’. World Futures, 68 (4-5): 225-247.

Donati, P. (2013) ‘Morphogenesis and Social Networks: Relational Steering not Mechanical Feedback’. In M.S. Archer (ed.) Social Morphogenesis, pp. 205-231. Dordrecht: Springer.

Donati, P. (2014) ‘Social Capital and the Added Value of Social Relations’. International Review of Sociology – Revue Internationale de Sociologie 24 (2): 291-308.

Donati, P. (2019) ‘Transcending the Human: Why, Where, and How?’ In Ismael Al-Amoudi and Jamie Morgan (eds.) Realist Responses to Post-Human Society: Ex Machina, pp. 53-81. Abingdon: Routledge.

Donati, P. (2020) ‘The Digital Matrix and the Hybridization of Society’. In I. Al-Amoudi and E. Lazega (eds.) Before and Beyond the Matrix: Artificial Intelligence’s Organisations and Institutions. Abingdon: Routledge

Donati, P. (2021) Transcending Modernity with Relational Thinking. London: Routledge.

Donati, P. and Archer, M.S. (2015). The Relational Subject. Cambridge: Cambridge University Press.

Dyer, J.H. and Singh, H. (1998) ‘The relational view: Cooperative strategy and sources of interorganizational competitive advantage’. Academy of Management Review 23: 660-679.

Floridi, L. ed. (2015) The Onlife Manifesto. Being Human in a Hyperconnected Era. Dordrecht: Springer.

Frankfurt, H.G. (1971) ‘Freedom of the will and the concept of a person’. The Journal of Philosophy 68 (1): 5-20

Graber, C.B. (2016) ‘The Future of Online Content Personalisation: Technology, Law and Digital Freedoms’. Zurich, Switzerland: University of Zurich, i-call Working Paper No. 01.

House of Lords (2018) ‘AI in the UK: ready, willing and able?’ London: Select Committee on Artificial Intelligence, HL Paper 100, April 16.

Jones, R.A. (1997) ‘The Presence of Self in the Person: Reflexive Positioning and Personal Construct Psychology’. Journal for the Theory of Social Behaviour 27: 453-471.

Jones, R.A. (2013) ‘Relationalism through Social Robotics’. Journal for the Theory of Social Behaviour 43 (4): 405-424.

Kirchhoffer, D.G. (2017) ‘Human Dignity and Human Enhancement: A Multidimensional Approach’. Bioethics 31 (5).

Knio, Karim (2018). ‘The morphogenetic approach and immanent causality: A Spinozian perspective’. Journal for the Theory of Social Behaviour 48 (4): 398-415.

Knudsen, N.K. (2019). ‘Relationality and Commitment: Ethics and Ontology in Heidegger's Aristotle’. Journal of the British Society for Phenomenology, 50(4): 337-357.

Lawrence, D., Palacios-González, C. and Harris, J. (2016) ‘Artificial Intelligence. The Shylock Syndrome’. Cambridge Quarterly of Healthcare Ethics 25: 250-261.

Lazega, E. (2017) ‘Networks and Commons: Bureaucracy, Collegiality, and Organizational Morphogenesis in the Struggles to Shape Collective Responsibility in New Shared Institutions’. In M.S. Archer (ed.) Morphogenesis and Human Flourishing, pp. 211-238. Dordrecht: Springer.

Lévy, P. (1997) L’intelligence collective: pour une anthropologie du cyberespace. Paris: La Découverte.

Lindemann, G. (2016) ‘Human dignity as a structural feature of functional differentiation – a precondition for modern responsibilization’, Soziale Systeme 19 (2): 235-258.

Liu, J. (2015) ‘Globalizing Indigenous Psychology: An East Asian Form of Hierarchical Relationalism with Worldwide Implications’. Journal for the Theory of Social Behaviour 45 (1): 82-94.

Luhmann, N. (1990) Paradigm Lost: Über die ethische Reflexion der Moral. Frankfurt a.M.: Suhrkamp.

Luhmann, N. (1995) Social Systems. Stanford: Stanford University Press.

Luhmann, N. and Schorr, K.E. (1982) Zwischen Technologie und Selbstreferenz. Frankfurt a.M.: Suhrkamp Verlag.

Malo, A. (2019) ‘Subjectivity, Reflexivity, and the Relational Paradigm’. In P. Donati, A. Malo, and G. Maspero (eds.). Social Science, Philosophy and Theology in Dialogue: A Relational Perspective. Abingdon: Routledge.

Miller, P. (1979) The responsibility of mind in a civilization of machines. Boston: University of Massachusetts Press.

Minsky, M. (1988) The Society of Mind. New York: Simon & Schuster.

Mitchell, M. (2019) Artificial Intelligence: A Guide for Thinking Humans. New York: Farrar, Straus, and Giroux.

Morozov, E. (2011) The Net Delusion. The Dark Side of Internet Freedom. New York: Public Affairs.

O’Connor, C. (2017) ‘Embodiment and the Construction of Social Knowledge: Towards an Integration of Embodiment and Social Representations Theory’. Journal for the Theory of Social Behaviour 47 (1): 2-24.

Pawlak, M. (2017) ‘How to See and Use Relations’. Stan Rzeczy [State of Affairs] 1 (12): 443-448.

Porpora, D. (2019) ‘Vulcans, Klingons, and humans: What does humanism encompass?’ In I. Al-Amoudi and J. Morgan (eds.) Realist Responses to Post-Human Society: Ex Machina, pp. 33-52. Abingdon: Routledge.

Ricœur, P. (1990) Soi-même comme un autre. Paris: Éditions du Seuil.

Sharkey, A.J. (2010) ‘The crying shame of robot nannies: An ethical appraisal’. Interaction Studies 11 (2): 161-190.

Sharkey, A.J. (2014) ‘Robots and human dignity: A consideration of the effects of robot care on the dignity of older people’. Ethics and Information Technology 16 (1): 63-75.

Sharkey, A.J. and Sharkey, N. (2010) ‘Granny and the robots: Ethical issues in robot care for the elderly’. Ethics and Information Technology 14 (1): 27-40.

Simmons, H. (2018) ‘Enabling the marketing systems orientation: Re-establishing the ontic necessity of relations’. Kybernetes 4 (3).

Smith, C. (2010) What is a Person? Rethinking Humanity, Social Life, and the Moral Good from the Person Up. Chicago: The University of Chicago Press.

Taylor, Ch. (1985) ‘The Concept of a Person’. Philosophical Papers. Volume 1. Cambridge: Cambridge University Press: 97-114.

Teubner, G. (2006) ‘The Anonymous Matrix: Human Rights Violations by “Private” Transnational Actors’. Modern Law Review 69 (3): 327-346.

Uhlaner, C.J. (2014) ‘Relational Goods and Resolving the Paradox of Political Participation’. Recerca. Journal of Thought and Analysis 14: 47-72.

Warwick, K. (2015) ‘The Disappearing Human-Machine Divide’. In J. Romport, E. Zackova, and J. Kelemen (eds.) Beyond Artificial Intelligence. The Disappearing Human-Machine Divide, pp. 1-10. Dordrecht: Springer.



[1] See ‘The illusion of reality’ and many other similar websites.
[2] Let us think of Luhmann’s sociological neo-Enlightenment (see Baecker 1999).
[3] As suggested by Porpora (2019: 37): “a thou is what bears the character of an I (or at least per Buber what is addressed as such). But then what is an I? An I is anything to which it is appropriate to attach what Thomas Nagel calls a first person perspective. (…) Put otherwise, an I or what is properly addressed as such, i.e., a Thou, is an experiencing subject, where an experience is not just a matter of thought but also of feeling. (…) If in Nagel’s sense there is something it is like to be a bat, meaning it has what Nagel calls a first-person perspective, then, per my own argument, a bat is a thou. Which means that thou-ness is not distinct to humans. (…) Care, I would say, is the proper attitude to adopt toward a thou, human or not. To be clear, it is a non-instrumental care of which I speak. I care for my car but mostly because I do not want it to break down on me. The care I am suggesting that is properly extended to a thou, human or not, is concern for them as ends in themselves”. It seems to me that, according to Buber (and I would like to add Ricœur and Lévinas), a Thou should be another entity like the I (Ego). If it were an entity that is neither a Thou nor an It, but a personification of something (for example a tree for a Taoist, the sky for a Confucian, or the deities of woods and animals according to religions such as Hinduism and Buddhism), a question arises: do these entities speak to the Ego or, on the contrary, is the Ego talking to them? Or, again, is it the Ego who tells them what they have stimulated in himself?
[4] The term ‘trans-modern’ indicates a caesura or profound discontinuity with modernity, while the terms ‘late’ or ‘post’ modern indicate the developments that derive from bringing modernity to its extreme consequences on the basis of its own premises.
[5] Masaru Tateno et al. (2019). ‘Internet Addiction, Smartphone Addiction, and Hikikomori Trait in Japanese Young Adult: Social Isolation and Social Network’. Frontiers in Psychiatry, 10 July 2019 (online); Yang Yu et al. (2019). ‘Susceptibility of Shy Students to Internet Addiction: A Multiple Mediation Model Involving Chinese Middle-School Students’. Frontiers in Psychiatry, 29 May 2019 (online).
[6] See for example House of Lords (2018).
[7] For instance, in marketing systems: see Simmons (2018).
[8] I mean ‘dis-humanizing’ enhancement as that which degrades the human (for example, using big data to condition consumer behavior), therefore distorting the human which, however, maintains its own potentiality, while ‘non-human’ means an action or intervention that reduces the human person to a simple animal, thing, or machine (for example, grafting a nanobot into the human brain to reduce the person to a slave).
[9] See the emergence of the human person from the links between body and mind in Smith (2010).
[10] We already attribute a subjectivity to fictitious (legal) persons, who are ontologically artefacts, such as corporations, civil associations, schools, hospitals, banks, and even governments.
[11] On the issue of contingency reduction respectively by technology and human beings: see Luhmann and Schorr (1982), and Luhmann (1995).
[12] Ricoeur (1990: 380): “l’Autre n’est pas seulement la contrepartie du Même, mais appartient à la constitution intime de son sens”. It may be useful here to clarify the meaning of the terms used by Ricoeur: “Soi” means “le soi (selbst, self) se distinguant de l’ego (je, Ich, I) non réfléchi”. “Même” means “l’ipséité (← ipse identité réflexive) s’oppose à la mêmeté (← idem ressemblance, permanence)”, and “Autre” means “l’ipséité ne se définit pas contre l’altérité, mais par elle”.
[13] Raya A. Jones (2013: 405) writes: “Relationalism refers primarily to a standpoint in social psychology. This standpoint is premised on the threefold claim that persons exist by virtue of individuals’ relations to others; that, cognately, ‘selves’ are an emergent property of semiotic I-You-Me systems; and that therefore the task for social psychology is to identify ‘regularities’ of interrelations between specific cultural practices and particular experiences of self”. This relational turn is derived mainly from authors such as Gergen and Harré.
[14] Melinda Liu, ‘Dalai Lama, Twitter Rock Star: The Virtual Influence of His Holiness’, August 6, 2012 (online).
[15] Interview at Stanford University, November 2017 (online).