
AIRES 2021 Research Conference: Society, Culture, and Ethics

Escaping the Western Cosm-Ethical Hegemony: The Importance of Cultural Diversity in the Ethical Assessment of Artificial Intelligence

By Emmanuel R. Goffi [1]   

[1]  Global AI Ethics Institute in Paris 

AI Ethics Journal 2021, 2(2)-1, https://doi.org/10.47289/AIEJ20210716-1

Received 31 March 2021 || Accepted 15 July 2021 || Published 16 July 2021

 

Keywords: ethics, culture, universalism, particularism, artificial intelligence

0.1 Abstract

The world of artificial intelligence (AI) is struggling to set standards that would be applied globally. In this struggle, ethics is extensively summoned to regulate the development and use of AI systems, but also to promote vested interests. 


The potential benefits associated with AI are such that many actors, public and private, have entered a race for AI dominance. Competing at the international level, racers are far less preoccupied with ethical considerations than with the strategic outcomes of AI. 


Led by the United States and China, the race does not leave much room for outsiders such as the European Union. Yet, through norms, some actors are slowly taking over AI regulation, and consequently shaping the whole market by setting the limits of what is ethically acceptable and what is not. 


Thus, norms have become a tool for dominance and, in the absence of legal ones, ethics is slowly imposing itself as the only regulatory option. Aware of the power of norms, the West has steadily spread its normative influence around the world, releasing hundreds of documents pertaining to ethical principles. In doing so, the Western world is denying the reality of humankind and its diversity of ethical stances.


By trying to impose its own views on ethics applied to AI through ethical narratives, the West is shaping perceptions and influencing behaviors without consideration for the wide range of ethical traditions. Thus, just as cosmetics helps adorn faces, cosm-ethics has taken over ethics to make the crude reality more beautiful. Borrowing the words of ethics, cosm-ethics is widely used by communication specialists to artificially build trust and promote specific interests. It legitimates and justifies the development and use of AI systems.


This paper aims at opening a debate on the reality of ethics applied to AI. It contextualizes the subject within the wider setting of the race for AI dominance (1), stressing the Western ethical hegemony over AI (2) established through a pseudo-ethical narrative (3). To illustrate these points, it focuses on the case of the European Union (4), to eventually stress the urgent need for cultural pluralism in the field of ethics applied to AI (5).

 

1.0 The international race for AI dominance

Artificial intelligence has undoubtedly become a new tool of power [1] [2], not only for big tech companies as private actors, but also for States that are trying either to take a leading position in the market or simply to benefit from the financial godsend AI represents. Yet, AI must not be reduced to its economic dimension, which is only one part of the classic struggle for power at the international level. AI is thus one more tool for States to impose their power and influence on the rest of the world. 


According to the Global AI Index 2020, some 62 countries have joined the race for AI dominance behind the United States and China. In this race, the competition is tough, and US leadership is highly contested by China, which, in its New Generation of Artificial Intelligence Development Plan, openly displayed its ambition to “occupy the commanding heights of artificial intelligence technology” by 2030 [3], and by the Russian Federation, following President Vladimir Putin’s 2017 declaration that “[w]hoever becomes the leader in this sphere will become the ruler of the world”. Yet, Beijing and Moscow are not the only capitals with great ambitions in the field of AI. Canada has shown a strong will to establish itself as a normative actor, Saudi Arabia has invested massively in AI to reach a leading position, the European Union is slowly imposing itself as a normative power, and India has set out a strong strategy and is working to create a huge AI ecosystem. Many other countries, such as France, Bahrain, Israel, Japan, Germany, Morocco, and the UK, have been highly active in developing their own strategies and claiming their share. 


In this struggle for AI dominance, international structures such as the United Nations (UN), the UN Educational, Scientific and Cultural Organization (UNESCO), the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), the G7, the Council of Europe and the G20 have been extensively used by major actors to promote their own interests, notably in setting principles and establishing standards regulating the development and use of AI systems. 
 

Big tech companies are also used as tools of States’ power [4]. They promote not only private interests but also national strategies. Eventually, by working hand in hand with public authorities, private companies participate in the setting of norms, keeping them under the umbrella of ethics to avoid being constrained by legal instruments [5] [6]. 


Thus, AI stakeholders, private and public, are competing for AI dominance, and in this competition norms have taken on an important role. Unable to compete with China or the United States at the technical level, some actors, with the EU at the forefront, have initiated or entered a race for standardization, working to set ethical normative instruments that would eventually help them secure a stronger position in the market [7] [8].


AI is expected to impact all sectors of human activity, with benefits but also potential risks that ethical regulations are supposed to limit. Relying on several sources, a briefing released by the European Parliament indicated that “global GDP may increase by up to 14 % (the equivalent of US$15.7 trillion) by 2030 as a result of the accelerating development and take-up of AI”, “that around 70 % of companies would adopt at least one type of AI technology by 2030”, and “that AI may deliver an additional economic output of around US$13 trillion by 2030, increasing global GDP by about 1.2 % annually” [9]. In 2017, a PricewaterhouseCoopers (PwC) report already stressed that “AI could contribute up to $15.7 trillion to the global economy in 2030”, with the “greatest gains” being in China and North America [10]. 


Stakes are high in many sectors: diplomacy, education, transport and logistics, defense and security, health, and agriculture, to mention but a few. They concern data, patents, applications, research and development, academic production, blockchain, digital currencies, software, and hardware [12]. Inevitably, they arouse the covetousness of many actors [1], and for those that do not have enough strategic advantages in these fields, norms have become a niche and a tool of choice in some national and transnational strategies [11] [5] [13]. 


The potential of AI has been clearly understood by many public and private actors, and some States “have released multi-million-dollar (or in some cases billion-dollar) strategies related to the future of AI” [11]. Thus, States have entered the race using different strategies focusing on specific priority areas, such as R&D in Canada, the United States, and the European Union; AI adoption in Finland, Germany, and Korea; or AI skills in Australia, Finland, the United Kingdom, and the United States [12]. According to Tim Dutton, “[e]ach AI strategy is unique and focuses on different aspects of AI policy” [11].


In some of these strategies, norms have become central to advancing specific interests. A team at ETH Zurich [5] identified no fewer than 1,080 documents pertaining to ethical principles. 
 

Each of these documents is set, released, and applied by different actors, and corresponds to specific interests. Interestingly, one of the main findings of the study is that, among the eleven principles identified, “[n]o single ethical principle appeared to be common to the entire corpus of documents, although there is an emerging convergence” around five of them, which can be found in more than half of the corpus. Nonetheless, as Jobin et al. indicate, “further thematic analysis reveals significant semantic and conceptual divergences in both how the 11 ethical principles are interpreted and the specific recommendations or areas of concern derived from each” [5]. In other words, each actor releasing a code of ethics applied to AI has set its own standards in regard to its own interests. Eventually, with such a number of codes, it is impossible to have a coherent set of norms to be implemented by all stakeholders. Consumers would then have to read all ethical codes before deciding which public or private actor fits their own principles and expectations, which is absolutely unthinkable. This normative logorrhea inevitably leads to the blurring of ethics applied to AI and, instead of offering a beneficial framework, deregulates the development and use of AI.
 

Among the documents studied, 50% are produced by private companies, governmental agencies, and akin actors [5]. In a study released by the Berkman Klein Center for Internet & Society, 22.9% of the 36 examined documents were produced by the private sector and 37% by governments [8]. Eventually, out of the 133 codes pertaining to ethics listed by the Council of Europe, 50 come from private actors.  
At the end of the day, most of the initiatives regarding ethical norms applicable to AI have been launched by a limited number of actors, mostly in the Western world. 

 

2.0 A Western-centric approach

During the last decade, normative codes dealing with AI have multiplied at a fast pace, particularly since 2017 [8] [17], reaching more than one thousand documents according to ETH Zurich [5]. Fifty-seven of the 84 codes studied by the ETH team were released by Western countries, mainly the United States, the United Kingdom (together accounting for more than half of the Western documents), the European Union, Australia, and Canada, not to mention codes written by wider international organizations that include these main normative and influential actors. According to the AI Ethics Lab, no less than 82% of the codes come from the West, almost the same figure as for the European Council Digital Policies Framework. At the end of the day, 70% of identified documents are Western productions, while the West at large represents barely more than 15% of Humanity. 


It is concerning to note that Africa, which represents 16% of Humanity, is not represented outside of international organisations, and that countries like China (19% of Humanity) and India (17%) are almost absent from the normative production. How can a global governance be established without considering the viewpoints on ethics applied to AI of 52% of the world population? How can a fair global governance be the doing of such a small number of people in such a small number of countries? 


These questions are all but neutral. Multilateralism, which should preside over any project of global governance, rests on the consideration of all stakeholders, not on the imposition of a specific perspective coming from a minority of actors.


Is this hegemony of the West over the AI normative framework ethically acceptable? Is it not a form of bias? Could it lead to a “Western cultural hegemony” [14], if not tyranny? Is it the best solution for a sustainable and fair governance? These are questions that should be asked and addressed thoroughly, but they are absent from current discussions.


Ethical norms have so far been set in a way that supports Western interests [17]. Opening the debate to cultural particularisms would therefore be problematic, for it would give birth to new approaches and potentially make it more difficult for norm entrepreneurs [15] to promote their own activities. Conversely, shrinking the subject to a unique perspective makes it easier to impose. In this perspective, a superficial type of deontology turns out to be a perfect tool. 


Ethics applied to AI is thus mainly dealt with by Western actors. Even the approaches adopted are Western ones. Basically, the ethical acceptability of AI systems is appraised through the three main continental theories: virtue ethics, deontology, and consequentialism. Yet, the diversity of ethical perspectives around the world is far greater and richer than these three options. Even in the West, other ethical lenses are available to assess AI systems.


The focus on continental ethics is mainly due to the fact that, as stressed above, the discussion on ethics applied to AI is led by Western countries, and that in these countries the three above-mentioned theories are the most popular. 


Diving deeper into the subject, it clearly appears that deontology is favored over both virtue ethics, which is barely used, and consequentialism, which is covertly used. Since most standards are set by public authorities and the private sector, it seems obvious that these standards aim at specific purposes and serve “as a marketing strategy” [13] through some kind of ethics washing [16].

 

3.0 From ethics to cosm-ethics
 

Aiming at marketing and investment instead of focusing on ethics is cosm-ethics, namely the creation of a whole narrative using ethical concepts, notions, and vocabulary without doing ethics [17] [18]. 


Trends clearly point to the overuse of cosm-ethics at the expense of real long-term philosophical reflection on the risks and benefits of artificial intelligence and its potential impacts on our societies. Building a narrative based on the vocabulary of ethics without doing ethics is not ethics. Cosm-ethics allows the crude reality of international relations and the pursuit of strategic interests to be hidden behind a layer of ethical make-up. Summoning the vocabulary of ethics gives the impression that AI is framed by values and thus aspires to benefit humanity. As Professor Thomas Metzinger stressed about the concept of “Trustworthy AI” developed by the European Union, this whole “AI story is a marketing narrative invented by industry, a bedtime story for tomorrow's customers” [16]. 


Deontology, reduced to a minimum, that is, to a highly superficial understanding of Immanuel Kant’s philosophy, is the perfect vehicle to shape perceptions and influence behaviors. Here again, cosm-ethics, as a mere narrative used for communication purposes, conveys ideas and interests that are not related to the ethical – in the strictest sense of the word – appraisal of artificial intelligence. Deontology is a complex theory using a bottom-up approach, starting with the individual’s volition and ending with the setting of categorical imperatives that have passed the universalization test [19]. Cosm-ethical deontology, conversely, is a top-down process consisting in imposing principles from above, namely from public authorities and private sector stakeholders, down to consumers. While deontology favors the individual’s autonomy and self-determination, cosm-ethical deontology rests on conformity to rules established by authorities. The essential point here is to stress that cosm-ethical deontology is not deontology since, according to Kant, an action is not considered morally good if it is performed merely in conformity with duty, that is, if the agent acts on the basis of pre-existing norms. Only acts performed from duty, in accordance with a maxim willingly, rationally, and autonomously chosen, are. Philosophically speaking, this means that following norms established by third parties has no moral worth. Consequently, applying ethical standards related to AI does not make an action ethically acceptable, nor does it make AI systems ethically acceptable [19]. 


In other words, cosm-ethics uses superficial deontology, through a skilfully designed narrative, as a governmentality (gouvernementalité) tool, that is to say, as Michel Foucault puts it, the process through which “the conduct of individuals or of groups might be directed” [20]. As such, the ethical acceptability of cosm-ethics is highly debatable, and so are its potential outcomes.  
 

Eventually, it appears that the whole discourse on ethics applied to AI is made of performative speech acts in which “'ethical propositions' are perhaps intended, solely or partly, to evince emotion or to prescribe conduct or to influence it in special ways” [21]. This rhetoric is insidiously imposing itself. Cosm-ethics suggests that the mere evocation of the word ethics, its simple addition as a qualifying adjective to AI, is enough to make the latter factual and acceptable, if not desirable, thus avoiding real ethical questions that could open doors to divergent perspectives and therefore to divergent sets of standards.

 

4.0 The European Union: a norm entrepreneur for AI cosm-ethics

The European Union is the perfect example of a “moral crusader” [22], a cosm-ethics promoter trying to establish ethical standards for the whole world without any consideration for cultural particularisms. Knowing that it cannot compete against the American and Chinese leaders, the EU had to find a way to differentiate itself and to enter the AI race using norms as a Trojan horse. In doing so, it has contributed to the multiplication of normative instruments and to the homogenisation of ethical perspectives worldwide.


Thanks to norms, the EU has found a niche to impose itself as a major actor in the field of AI while protecting its interests, putting barriers in front of outsiders’ desires to conquer the European market. 


Interestingly, the EU is hiding highly pragmatic and consequentialist goals behind a veil of deontology. Ethics is used mainly as a marketing tool to promote European interests and to ensure the Union will benefit from the godsend AI represents. 
 

In April 2019, the Ethics Guidelines for Trustworthy AI were released by the High-Level Expert Group on Artificial Intelligence set up by the European Commission. The guidelines offered seven principles aiming at “achieving Trustworthy AI” that would serve “humanity and the common good, with the goal of improving human welfare and freedom”. But, in the same document, a more pragmatic statement should draw some attention: “Trustworthiness is a prerequisite for people and societies to develop, deploy and use AI systems”. This assertion must be put into the wider context of the European AI strategy, Artificial Intelligence for Europe, published in 2018, which stresses that AI has become one of “the most strategic technologies of the 21st century” and that “[o]ne of the main challenges for the EU to be competitive is to ensure the take-up of AI technology across its economy”. The final goal was set. Trustworthiness would be a mere tool, among others presented in the strategy, to make sure the challenge would be taken up “[a]mid fierce global competition”. Nothing wrong with that. The issue is definitely not that the EU is trying to make the most of AI. It is that the Union is advancing trustworthiness as a deontological requirement while it is a means to an end, namely competitiveness. 
 

In the same vein, in its White Paper on Artificial Intelligence – A European approach to excellence and trust, released in 2020 and focusing on competitiveness supported by the establishment of an ecosystem of trust, the EU clearly stated that “Europe is well placed to benefit from the potential of AI, not only as a user but also as a creator and a producer of this technology” and that it must seize the opportunity ahead, specifically in the field of data. As one can see, the deontological stance regarding the importance of trustworthiness hides a highly consequentialist one.
 

The use of ethics as a tool for promoting interests led Professor Thomas Metzinger, a member of the Commission’s expert group that worked on the European ethics guidelines for artificial intelligence, to leave the group and to write that “[t]he Trustworthy AI story is a marketing narrative invented by industry, a bedtime story for tomorrow's customers. (…) Hence the Trustworthy AI narrative is, in reality, about developing future markets and using ethics debates as elegant public decorations for a large-scale investment strategy” [16].
 

From a constructivist perspective, it can be considered that the EU, recognizing its “material and social limits” in a highly competitive setting, is making choices to advance its goals, in light of its skills and resources [23].
 

Beyond the disputable use of ethics as a mere communication tool by the European Union, the problem is that this strategy may lead to much more harm than good.
 

Asking whether ethical guidelines have an “impact on human decision-making in the field of AI and machine learning”, Thilo Hagendorff asserts that the “short answer is: No, most often not” [13]. It seems, then, that the burgeoning of ethical guidelines for AI is leading to saturation and deregulation instead of better governance. 
 

The “EU normative hemorrhage” [24], of which the Artificial Intelligence Act is the latest embodiment, might pose many more problems than it would solve. 
 

Acting as a moral crusader, the EU is imposing a supposedly universal AI ethical order while transgressing its own ethical standards regarding respect for diversity. The recourse to cosm-ethics in the field of AI could lead to tensions with existing AI leaders such as China, and even with rising powers that will not accept being constrained by norms that do not fit their own ethical schemes and that could hinder their interests. In other words, cosm-ethics could prove counterproductive in the long run.
 

First, it could lead to an irrational multiplication of ill-defined fake norms that would have the opposite of the expected effect: lost in the multitude of cosm-ethical standards aimed at promoting the use of AI instead of providing a reliable ethical appraisal, actors will either no longer respect them or try to circumvent them using grey areas. In some cases, it could lead companies to outsource or relocate their activities to areas where norms are less constraining. 
 

Second, consumers would no longer be able to know which norms are trustworthy and which ones are not, which would go against the declared intention to build trust by making things explainable and transparent.
 

Third, and not least, it could lead to international tensions. Indeed, rising AI powers such as China, India, and countries of the Middle East and Latin America will see this cosm-ethical narrative as a way to promote Western interests, a vehicle for Western hegemony that could be an impediment to their activities and go against their own values and ethical standards. This could add to existing tensions and lead to confrontation with the United States or the European Union. As an illustration, the declaration made on July 1st, 2021, by President Xi Jinping on the occasion of the centennial of the Chinese Communist Party leaves no room for doubt: Beijing “will never allow any foreign force to bully, oppress, or subjugate” China. By the same token, the vivid exchange between the US and Chinese delegations during the opening remarks of the U.S.-China meeting in Anchorage, Alaska, on March 18, 2021, demonstrates that China is not prone to accept being lectured or told what is acceptable and what is not. In this context, and given that China has stated that it will become the leader in AI by 2030, there is no reason it will yield to Western ethical rules, let alone cosm-ethical ones. 
 

Finally, a Western-oriented normative framework would be a denial of cultural diversity and, consequently, of the variety of ethical standpoints. Such a situation would violate international norms calling for the respect of cultural diversity.
 

 

5.0 A call for a culture-based approach to ethics applied to AI

In a globalized world, Western normative proselytism seems inappropriate and against the tide. 


Cultural diversity must not only be praised; it must be respected. This essential imperative is stated in UNESCO’s 2001 Universal Declaration on Cultural Diversity (art. 4). It is also present in article 22 of the Universal Declaration of Human Rights as well as in the United Nations Charter, which calls for “international cultural and educational cooperation” (art. 55), “with due respect for the culture” of the peoples (art. 73). 
 

In the field of AI, the Institute of Electrical and Electronics Engineers (IEEE) underlined, in its Ethically Aligned Design report, the “monopoly on ethics by Western ethical traditions”, calling for the urgent broadening of “traditional ethics in its contemporary form of ‘responsible innovation’ (RI) beyond the scope of ‘Western’ ethical foundations” [25]. 
 

The Institute even presents some instances of valuable non-Western ethical traditions, stressing the differences between the individualistic tendency of Western societies and more collectivist traditions, such as Buddhism, Ubuntu, and Shinto, in Asia and Africa. 
 

Undoubtedly, these perspectives would enrich current discussions on the global governance of AI. They would also give a voice to countries and societies that are barely listened to in the setting of ethical standards. This is obviously not only a matter of respect for cultural particularisms. It is also a fundamental requirement to establish a long-lasting governance in which every culture will be satisfied and allowed to promote its own interests based on its specific ethical stances. As mentioned in the IEEE report, the full benefit of autonomous and intelligent technical systems “will be attained only if they are aligned with society’s defined values and ethical principles” [25].
 

The difficulty here is to understand that culture is the product of “the collective programming of the mind that distinguishes the members of one group or category of people from another” [26]. Thus, cultures rest on values that have been socially constructed and passed on, and on which ethical perspectives are built. 
 

Yet, the West persists in thinking that universalism is better than relativism, and that it is entitled, for some reason, to spread its values and ethical viewpoints worldwide. Westerners do believe that their values are universal [14], that they are shared by the whole world. However, universal (in the strictest sense of the word) values do not exist [5]. If there were universally shared values, there would be no need to struggle to establish a global ethical regulation. This belief in the universality and pre-eminence of Western values is a form of “cultural arrogance” [14] that some societies, especially rising AI powers, will not accept indefinitely. It is doubtful that China will accept this cultural hegemony once it becomes the leader in the field. It is doubtful that Russia will endlessly give way to Western ethical standards.
 

In line with international norms, cooperation between cultures and respect for cultural diversity should guide the global governance of AI. A global governance of AI cannot go through the arbitrary imposition of Western ethical norms through the setting of a code of cosm-ethical deontology. The only way to frame the development and use of AI for the greatest benefit of the greatest number is to establish a multilateral instrument allowing different areas to set their own limits through a cross-cultural collaborative approach. Then, to help these areas work with each other when their perspectives are too different, a neutral mediating body should be established. 
 

Furthermore, a bottom-up, open-minded approach should be favored over the current top-down, narrow deontological process. If AI is to impact our societies deeply, and it definitely will, decisions about which impacts are desirable and which are not must be made at the grassroots level, not by diplomats who mainly promote economic and strategic interests and most often conform to rules established by great powers. 
 

The societal outcomes of AI could be as beneficial as they might be dangerous. Their reality will cover a wide spectrum of situations depending on which community is concerned. It is then up to each community to decide which kind of society it envisions for the future and what role should be granted to AI. 
 

New perspectives should therefore be included in the debate on ethics applied to AI. 
Buddhism would offer “ethical statements formulated in a relational way, instead of an absolutist way” [25], providing a way to articulate an individualistic perspective with a collectivist one. In the same vein, it would shed new light on the concept of privacy [27].

 

African Ubuntu would give us a more relational perspective on ethics, reminding us that humans are mere cogs in a huge ecosystem, that they are not isolated individualities but are connected to each other [28], and that privacy is not self-centered but collective [29]. 
Shintoism would help us rethink our relation to technology, making it more natural and devoid of any desire for control based on a supposed primacy of humans over technological artifacts.

 

Eventually, Islam would invite us to think about ethics in a different way, based on religious considerations, questioning for instance life prolongation by technological means or the use of autonomous cars [30].
 

These are only a few examples of how cultural standpoints from non-Western societies could enrich the current debate over ethics applied to AI.
 

We can already see some cultural simmering in the field of AI, with the conference on “The relevance of culture in the age of AI” held in Sydney, Australia, in 2019, and the subsequent efforts in New Zealand, grounded in the Treaty of Waitangi, to take Māori cultural perspectives into account. In North America, some reflections have been initiated about the inclusion of First Nations’ thoughts in AI ethical norms. This is just a first step, a slight momentum that needs to be strongly fostered.

6.0 Conclusion


AI is giving us a unique chance to revitalize the debate on ethics in its original vocation of social mediator. It is up to us to seize it and to challenge our convictions. A first step would be to accept the diversity of ethical perspectives without value judgement and implicit hierarchization of values. 


We must avoid falling into the trap of a kind of ethical absolutism carried by Western “moral crusaders”. Contrary to what we might think, moral absolutism is no more desirable than ideological relativism. Nor is it more ethically acceptable. 


In order to establish a fair and efficient governance of AI we need a multilateral approach freed from superficial deontology and cosm-ethics. A thorough and honest appraisal of AI systems should encompass different ethical stances and opinions based on a wide range of contextual use cases and give room to debate.


Ethics should be brought back to philosophy and released from mere communication ends. It should be left to ethicists instead of being monopolized by communication specialists. 
In this framework, real interests should not be hidden behind a layer of cosm-ethics. They should be acknowledged and integrated into any ethical assessment of AI systems. They should be included in the ethical equation.


More than anything else, particularisms must be respected. If they are not, any project of global governance will fail sooner or later and will potentially lead to tensions that could turn into massive deregulation. Global governance can only succeed through multilateralism, open-mindedness and listening.
 

Declaration of Interest

None

Disclosure of Funding

None

Acknowledgements

None

References

[1] Thibout, Charles. « La compétition mondiale de l’intelligence artificielle. » Pouvoirs - Revue française d’études constitutionnelles et politiques, 170.3 (2019): 131-142.


[2] Goffi, Emmanuel R. « L’intelligence artificielle comme facteur de puissance internationale. » Diplomatie, 104 (2020): 82-84.


[3] Chinese State Council. Notice of the State Council Issuing the New Generation of Artificial Intelligence Development Plan (2017).


[4] Goffi, Emmanuel R., and Colin, Louis. « GAFAM et BATX à la conquête du monde numérique. » Diplomatie, 104 (2020): 72-76.
 

[5] Jobin, Anna, Marcello Ienca, and Effy Vayena. “The global landscape of AI ethics guidelines.” Nature Machine Intelligence 1.9 (2019): 389-399.
 

[6] Greene, Daniel, Hoffmann, Anna Lauren, and Stark, Luke. “Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning.” HICSS (2019).
 

[7] Zachariadis, Ioannis A. “Standards and the digitalisation of EU industry: Economic implications and policy developments.” European Parliament Think Tank Briefing (2019). Available at https://www.europarl.europa.eu/thinktank/en/document.html?reference=EPRS_BRI(2019)635608
 

[8] Fjeld, Jessica, Achten, Nele, Hilligoss, Hannah, et al. “Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI.” Berkman Klein Center Research Publication 2020-1 (2020). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3518482 
 

[9] Szczepanski, Marcin. “Economic impacts of artificial intelligence (AI).” European Parliament Think Tank Briefing (2019). Available at https://www.europarl.europa.eu/thinktank/en/document.html?reference=EPRS_BRI(2019)637967 
 

[10] Rao, Anand S., and Verweij, Gerard. Sizing the prize: What’s the real value of AI for your business and how can you capitalise? PwC Report, 2017. https://www.pwc.com/gx/en/issues/analytics/assets/pwc-ai-analysis-sizing-the-prize-report.pdf
 

[11] Dutton, Tim. “Building an AI World: Report on National and Regional AI Strategies”. Canadian Institute for Advanced Research (2018). https://cifar.ca/wp-content/uploads/2020/05/buildinganaiworld_eng.pdf 
 

[12] OECD. “OECD Digital Economy Outlook 2020.” OECD Publishing (2020). 
 

[13] Hagendorff, Thilo. “The Ethics of AI Ethics: An Evaluation of Guidelines.” Minds and Machines 30 (2020): 99-120. https://link.springer.com/content/pdf/10.1007/s11023-020-09517-8.pdf
 

[14] Elmandjra, Mahdi. “Diversité culturelle : une question de survie.” Futuribles analyse et prospective 202 (1995): 5-15.
 

[15] Sunstein, Cass R. “Social Norms and social roles.” Columbia Law Review 96.4 (1996): 903-968. 
 

[16] Metzinger, Thomas. “Ethics washing made in Europe.” Der Tagesspiegel (2019). https://www.tagesspiegel.de/politik/eu-guidelines-ethics-washing-made-in-europe/24195496.html
 

[17] Goffi, Emmanuel R. “De l’éthique à la cosm-éthique (1) : ce que l’éthique n’est pas.” Institut Sapiens (2019). https://www.institutsapiens.fr/de-lethique-a-la-cosm-ethique-ce-que-lethique-nest-pas/ 
 

[18] Goffi, Emmanuel R. “De l’éthique à la cosm-éthique (2) : ce qu’est l’éthique.” Institut Sapiens (2019). https://www.institutsapiens.fr/de-lethique-a-la-cosm-ethique-2-ce-quest-lethique/
 

[19] Kant, Immanuel. Groundwork of the Metaphysics of Morals.  In Ameriks, Karl, and Clarke, Desmond D (eds). Cambridge Texts in the History of Philosophy. Cambridge University Press (2012).
 

[20] Foucault, Michel. “The Subject and Power.” Critical inquiry 8.4 (1982): 777-795.
 

[21] Austin, John L. How to Do Things with Words. The William James Lectures delivered at Harvard University in 1955. Oxford University Press, 1962.
 

[22] Becker, Howard S. Outsiders: Studies in the Sociology of Deviance. The Free Press, 1963.
 

[23] Onuf, Nicholas G. “Constructivism: A User's Manual.” In Vendulka Kubalkova, Nicholas G. Onuf, and Paul Kowert (eds). International Relations in a Constructed World. M. E. Sharpe (1998): 58-78.
 

[24] Goffi, Emmanuel R., and Momcilovic, Aco. “Too many norms kill norms: The EU normative hemorrhage.” ISSG – Beyond The Horizon (2021). Available at https://behorizon.org/too-many-norms-kill-norms-the-eu-normative-hemorrhage/
 

[25] IEEE. Ethically Aligned Design First Edition: Prioritizing Human Wellbeing with Autonomous and Intelligent Systems. Institute of Electrical and Electronics Engineers (2019). https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead1e.pdf?utm_medium=undefined&utm_source=undefined&utm_campaign=undefined&utm_content=undefined&utm_term=undefined
 

[26] Hofstede, Geert. Culture’s Consequences: Comparing Values, Behaviors, Institutions, and Organizations Across Nations. Sage Publications (2001).
 

[27] Hongladarom, Soraj. “Analysis and Justification of Privacy from a Buddhist Perspective.” In Hongladarom, Soraj, and Ess, Charles (eds). Information Technology Ethics: Cultural Perspectives. Idea Group Reference (2007): 108-122.
 

[28] Mhlambi, Sabelo. “Ethical Implications of AI and Ubuntu as an Intervention.” Transcript of a speech given at the IFLA WLIC 2019 Conference. Author’s blog (2019). https://sabelo.mhlambi.com/2019/08/29/ethical-implications-of-ai-and-ubuntu-as-an-intervention
 

[29] Van Norren, Dorine. “The ethics of artificial intelligence through the lens of Ubuntu”. Draft-working paper Africa knows conference, Africa Study Centre (2020).
 

[30] Ahuja, Sparsh. “Muslim scholars are working to reconcile Islam and AI.” Wired (2021). Available at https://www.wired.co.uk/article/islamic-ai
