Human Rights and Digital Empires

By Luca Brocca

Just Access

On October 10, 2023, Just Access participated in the European Parliament event “EPRS Book Talk | Digital Empires: The global battle to regulate technology”. After an introduction by European Parliament President Roberta Metsola, the book was discussed by its author, Anu Bradford, Professor of Law at Columbia Law School.

Although the book focuses on an overview of the competing digital governance models of the US, China, and the EU, this blogpost delves deeper into the human rights issues that must be considered when regulating technology. It looks at the key human rights at stake, how these three major powers are dealing with them, and what we should expect to see on this topic in the near future, given the so-called Brussels effect.

To begin, it is crucial to understand the ongoing competition among the three global powers in the realm of technology regulation and to comprehend the reasons behind Anu Bradford’s use of the term “digital empires” in her book. This term suggests an analogy to a conventional war, highlighting the struggle between the EU, the US, and China in the realm of technology regulation. This underscores the central question of which digital empire will ultimately emerge victorious in the worldwide competition for influence, while also illuminating their distinct approaches.

All three entities are engaged in a race to regulate technology companies, each promoting a different vision for the digital economy while simultaneously seeking to expand their influence in the digital realm. Across the world, individuals who rely on digital technologies have grown increasingly concerned about the rapid adoption and transformation of these technologies. This has resulted in an overly concentrated economy where a handful of powerful companies like Meta, Amazon or Apple wield significant economic wealth and political influence, undermining data privacy and exacerbating economic disparities.

In response to these concerns, global leaders are exploring ways to rein in the most dominant tech companies. Bradford delineates a clear division among the three competitive regulatory models: the American market-driven approach, the Chinese state-driven model, and the European rights-driven regulatory model. She also delves into how governments and tech companies navigate the inevitable conflicts that arise when these regulatory models clash on the international stage. Each of these regions is pushing forward with its unique vision for the digital economy while striving to extend its influence in the digital realm. The question of which digital empire will ultimately win this global competition for influence remains unanswered, but the differences in their strategies are becoming increasingly apparent.

While the three major powers try to take the lead in the realm of digital regulations, there is a specific legal domain that warrants careful consideration, and it is increasingly influenced by the decisions made regarding future privacy and data protection regulations: human rights. Privacy constitutes a fundamental human right, enshrined in numerous international human rights agreements worldwide. It stands at the core of preserving human dignity and serves as the foundation for any democratic society. Furthermore, it upholds other rights such as freedom of expression, information, and association. The right to privacy embodies the idea that individuals should have a realm of self-governing development, interaction, and freedom – a “private sphere” whether in isolation or engagement with others – shielded from governmental interference and unwanted intrusions by other individuals. With advancements in information technology allowing previously inconceivable methods of collecting, storing, and sharing personal data, the right to privacy has evolved to encompass various state responsibilities linked to safeguarding personal data.

Indeed, digital technologies have advanced rapidly in recent years, and their impact on human rights is undeniable, with both positive and negative aspects. On one hand, they have provided individuals with enhanced opportunities to communicate, share information, and, in turn, empower themselves to exercise rights like freedom of expression and association. This has allowed individuals to shed light on human rights abuses. On the other hand, these technologies have also facilitated the spread of disinformation, cyber-surveillance, and harmful behaviours, including hate speech, cybercrime, and the improper use of personal data. Generally, actions that impinge on the right to privacy, such as surveillance and censorship, must meet specific criteria to be justifiable: they must be prescribed by law, necessary for a legitimate purpose, and proportionate.

There’s a widespread consensus that the same human rights and obligations applicable in offline environments should also extend to the digital realm. However, emerging technologies are ushering in a new paradigm for human interaction, revealing gaps in the existing international human rights framework. The central question in the international discourse revolves around how to address and close these gaps.

In the past year, we have witnessed remarkable advancements in generative AI, with accessible programs like ChatGPT. We recognise the immense potential of AI to benefit humanity, offering improvements in strategic forecasting, expanding access to knowledge, accelerating scientific progress, and enhancing information processing capabilities. However, it is crucial to acknowledge the inherent risks associated with the rapid development of generative AI systems like ChatGPT. Concerns such as ethical implications, potential biases in training data, and the misuse of advanced AI technologies underscore the importance of responsible development to ensure these innovations contribute positively to society. Therefore, to tap into this potential, we must ensure that the benefits outweigh the risks, and we must establish limitations. To be effective and humane, with people at the core of technological development, any solution or regulation must be firmly rooted in respect for human rights. Nevertheless, as will be elaborated in the following paragraphs, not all “empires” prioritise human rights in the same way.

When comparing the European and Chinese approaches to data protection, it is essential to acknowledge the complexity of evaluating a Western-style human rights model against an Asian backdrop, given the significant cultural differences at play. This task becomes even more challenging when the focal point is China, where key rule-of-law conditions such as the horizontal application of rights, independent courts, and legal certainty are not fully established. However, China’s status as a central economic power, a major EU trading partner, and a significant global political player requires a realistic approach in comparative law exercises.

By contrast, the EU remains an international testing ground for data protection, albeit an advanced one. Besides pioneering influential legislative texts, its member states have consistently enforced data protection regulations over the past decades, with some countries already on their second or third generation of data protection laws. The cornerstone of the EU’s contemporary approach to data protection is the General Data Protection Regulation (GDPR). Applicable since 2018, the GDPR embodies a “rights-based” perspective, distinguishing itself through a focus on safeguarding the rights of the data subject. Unlike traditional models that treat personal information merely as a commodity, the GDPR shifts the paradigm by recognising that individuals effectively own their personal data. This perspective grants individuals an inherent legal right to control their information, empowering them, among other things, to make informed decisions about its usage and determine who is authorised to access it.

Currently, a range of human rights concerns arises regarding data protection in China. China’s Internet regulations and legal framework are rooted in the concept of “guarded openness”, aiming to preserve the economic benefits of emerging information and communication technologies while safeguarding against foreign economic dominance and the potential for technology to coordinate anti-government activities. To monitor and censor the activities of approximately 400 million Internet users in China, the government employs a diverse set of tools and methods. In May 2010, China issued its initial white paper on the Internet, emphasising the idea of “Internet sovereignty,” which mandates that all Internet users within China adhere to Chinese laws and regulations. Surveillance and monitoring extend to telephone conversations, fax transmissions, emails, text messages, and Internet communications. Authorities routinely open and censor both domestic and international mail, while security services frequently intrude into residences and offices to gain access to computers, telephones, and fax machines. It is evident that these practices would be legally unthinkable in the EU, where the protection of data subject rights lies at the core of its approach.

In contrast, the United States has historically employed a “harms-prevention-based” approach to data privacy laws, comprising a patchwork of privacy safeguards designed to prevent or mitigate specific sector-related harm. Following California’s initiative, four additional states – Colorado, Connecticut, Utah, and Virginia – began enforcing new GDPR-inspired statutes in 2023, and additional states are expected to follow a similar path. The implications of this profound shift in the underlying philosophical framework for data privacy protection will have enduring consequences in the coming years and decades. This pattern reflects what Anu Bradford termed the “Brussels effect” in 2012: the phenomenon of unilateral regulatory globalisation driven by the European Union, which extends its laws beyond its borders through market mechanisms. The Brussels effect leads regulated entities, particularly corporations, to conform to EU laws even when operating outside the EU for various reasons.

The United States stands as one of the world’s largest trade markets, yet data transfers to the US have remained a highly contentious domain within data protection law. Apart from the well-documented concerns about government surveillance, data protection law in the US is further complicated by its considerable variation among states and the absence of federal-level data protection legislation. As mentioned earlier, while certain states have enacted laws that could be seen as rivaling the GDPR, others have very limited or no regulations in place.

Historically, the United States has permitted businesses and institutions to gather personal data without explicit consent, concurrently regulating these practices to forestall or mitigate harms within specific sectors. This approach follows a philosophy centered on harm prevention. In contrast, the European Union’s rights-based framework for safeguarding personal data views data privacy as an inherent human right. Under this paradigm, individuals effectively control their personal information, and decisions regarding who can use it rest with them.

Despite the widespread implementation of data protection laws worldwide, no global standard has been universally established. The EU model may be the closest to becoming a global reference point, but it does not represent the sole applicable global standard. This distinction serves as a reminder that those accustomed to the EU model should be aware that, beyond the EU’s borders, this model is not universally accepted and applied. Moreover, terms like “data protection,” “information privacy,” or “data privacy” may carry different meanings compared to those within the EU.

The contrasting perspectives on the right to privacy may stem from the historical experiences of Europeans, who endured the infamous data collection practices of the Nazis. The Nazis meticulously collected and catalogued information about individuals, including their ancestry and affiliations, and used this data to commit heinous atrocities. This period was followed by data collection on a similar or even larger scale by the secret police of East Germany during its communist era. This tragic history made it abundantly clear that there was a pressing need to regulate the collection, storage, and utilisation of personal information.

The jurisdictions mentioned above provide a stark illustration of the highly dynamic nature of global data protection legislation, which presents a significant challenge for Data Protection Officers working in organisations with a presence in multiple countries. Although certain privacy agreements and regulations have been in place for many years, it is only in recent times that the global regulatory landscape has truly gained momentum. Just in the past year, the quantity and intricacy of privacy frameworks and enforcement actions across the globe have experienced a significant surge.

As we move forward, it is evident that the international human rights framework needs to evolve to address the unique challenges posed by new technologies and digital interactions. The “Brussels effect”, exemplified by the EU’s externalisation of its laws, is likely to exert a growing influence on global regulations. However, as we navigate this changing environment, it is essential to ensure that any data protection regulations are firmly grounded in respect for human rights, as these regulations will have a profound impact on individuals and society at large.

