Luca Brocca
Just Access Representative to the UN Convention Against Corruption
Will AI destroy human civilisation?
Perhaps not. The super-intelligent, genocidally misguided, and inventively destructive AI systems of Age of Ultron, Terminator, and Do Androids Dream of Electric Sheep? remain confined to the realms of science fiction.
But that doesn’t mean that today’s AI is free from dangers. Although AI systems may not bring about the apocalypse any time soon, the training and use of AI can have serious impacts on human rights.
For the past sixty years, Artificial Intelligence technologies and contemporary human rights have developed in tandem, mutually influencing each other’s evolution as they mature.
Already, AI and machine learning-enabled technologies are used in medicine, transportation, education, the military, agriculture, manufacturing, and many other fields. As engineers and scientists explore AI’s potential to drive discovery and technological advancement, smart technologies such as smartphones and autonomous vehicles are already exerting a direct impact on our everyday lives.
And with this impact come human rights challenges. Language models are trained on huge amounts of data, potentially breaching privacy and copyright safeguards. Driverless cars must be trained for ‘no-win’ scenarios in which an accident is unavoidable, leaving the system to choose whom to harm. And increasingly capable technology has the potential to put whole workforces or industries out of work.
It is in this context that, on 13 March 2024, the European Parliament passed the AI Act. This decision was the result of tense negotiations, with certain governments—among them France, Germany, and Italy—suggesting that a binding regulation be replaced with a simple code of conduct. Their aim was to lessen regulatory burdens on European companies to enhance their competitiveness globally. However, European lawmakers disagreed, asserting that a balanced regulation would compel foreign companies to adhere to the AI Act as well, fostering fair competition.
MEP Dragos Tudorache stated:
“The AI Act is not the end of the journey but the starting point for new governance built around technology.”
The AI Act operates by categorising products based on their level of risk and tailoring oversight accordingly. Its fundamental principle is to govern AI according to its potential societal harm: the higher the risk, the stricter the regulations. For instance, AI applications posing a “clear risk to fundamental rights,” such as those involving biometric data processing, will be prohibited. AI systems deemed “high-risk,” like those used in education or healthcare, must adhere to stringent requirements. Conversely, low-risk services, such as content recommendation systems, will be subject to lighter regulation.
Concerning governance and compliance, the AI Act establishes a European AI Office tasked with overseeing the most intricate AI models. Additionally, it mandates the formation of a scientific panel and an advisory forum to incorporate the perspectives of various stakeholders.
The significance of this legislation extends even beyond the borders of the EU due to Brussels’ influential role as a tech regulator, as demonstrated by the impact of GDPR on data management practices. While the AI Act does not directly apply to businesses outside the EU, it will undoubtedly influence the thinking of regulators in the US. Notably, the AI Act includes regulations governing the use of AI on social media platforms. This could affect US-based social media giants such as Facebook, Twitter, and Instagram, which boast significant user bases within the EU.
Italian lawmaker Brando Benifei, co-leader of Parliament’s efforts on the law, emphasised that Brussels’ stance on AI rules remains open-ended. Benifei suggested that additional AI-related legislation may emerge post-summer elections, addressing areas such as AI in the workplace, which the new law only partially addresses.
Concurrently with the development of the AI Act at the EU level, the Council of Europe (COE) is pursuing the adoption of an AI Convention. Unlike the EU’s Act, the COE treaty would have a much wider reach, covering not only the EU States but the larger group of States that are members of the COE, and it would also be open for signature by non-member States. Some such States, including the USA, have been actively participating in the negotiations.
While the COE treaty has the potential to be a landmark in the development of AI regulation worldwide, some participating States are trying to limit the scope of the treaty. The USA, for example, is part of a group of States which want to constrain the treaty’s reach to public entities only. Similarly, some negotiators have suggested exempting national security concerns from the Convention’s scope.
Along with other NGOs, we at Just Access are concerned that narrowing the scope of the treaty will leave huge loopholes and fail to protect human rights adequately. More than 90 European NGOs and other organisations are calling on the Council of Europe’s AI Convention negotiators to resist watering down the convention, and to ensure that it robustly protects the human rights of people in Europe and beyond.