Dr. Maria Moloney, Chief Researcher and AI Specialist at PrivacyEngine, delves into the intricate relationship between digital privacy and the advancement of artificial intelligence. She focuses specifically on how the EU’s AI Act can strengthen digital rights and promote a safer digital environment in Europe.
Introduction to Europe’s Regulatory Changes
Europe is in a period of flux, driven by a fast-changing regulatory landscape. Two topics in particular are at the forefront of this evolution: data protection and artificial intelligence, or AI. In December 2023, a provisional political agreement was reached on an EU AI regulation, a moment strikingly reminiscent of the agreement reached back in 2016 on the General Data Protection Regulation, or the GDPR as we now call it.
The Provisional EU AI Act of 2023
The EU’s AI Act is proposed legislation that aims to regulate the development and use of AI in Europe. It classifies AI systems according to the risk they pose and attaches development and use requirements to each risk level. It imposes legally binding rules requiring companies to notify individuals when they are interacting with an AI system, and it strengthens rules around data quality, transparency, human oversight, and accountability.
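To make the risk-based structure concrete, here is a minimal sketch in Python that models the Act’s widely reported risk tiers (unacceptable, high, limited, and minimal risk) as a simple lookup from tier to obligations. The tier names follow the Act’s published risk categories, but the specific obligation wording and the example system are illustrative simplifications, not a rendering of the legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers used in the AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"   # prohibited practices
    HIGH = "high"                   # strict requirements before deployment
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # largely left to existing law

# Illustrative (hypothetical) mapping of tiers to example obligations.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["banned from the EU market"],
    RiskTier.HIGH: [
        "risk management and data governance",
        "human oversight",
        "transparency and record-keeping",
    ],
    RiskTier.LIMITED: ["tell users they are interacting with an AI system"],
    RiskTier.MINIMAL: ["no additional obligations beyond existing law"],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative obligations attached to a given risk tier."""
    return OBLIGATIONS[tier]

if __name__ == "__main__":
    # Example: a hypothetical CV-screening tool treated as high risk.
    for duty in obligations_for(RiskTier.HIGH):
        print(f"- {duty}")
```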
As the Act takes shape, it brings forth a set of guidelines and regulations aimed at addressing the ethical, legal, and societal implications of AI technologies. Like the GDPR, the new EU AI Act promises to reinforce the digital rights of European citizens. It seeks to do this by building on the digital privacy rights enshrined in the GDPR to provide a more comprehensive set of digital protections for Europeans. In Europe, we have a clear definition of our digital rights, and advocates of those rights work to ensure that they are protected for everyone engaging in this ever-expanding, interconnected digital world of ours.
The Impact of the EU’s AI Act on Digital Privacy
Turning to digital privacy: in a digital context, individuals have the right to self-determination, free from the threat of surveillance. This means having the freedom to make choices about their online activities and digital identity, while their personal information is protected from malicious digital actors and from those looking to profit from their data.
In light of this interpretation, a discussion of the European Union’s GDPR is inevitable. Over the last six years, the GDPR has arguably reshaped how businesses in Europe handle personal data. Its relationship to digital privacy is that it places a legal obligation on businesses and organisations to safeguard personal data from unauthorised access, disclosure, or misuse. The GDPR is thus one of the principal means of ensuring digital privacy in Europe.
The Role of GDPR in Protecting Digital Privacy
The GDPR guarantees the protection of personal data whenever it is processed by a business. Its rules apply to companies and organisations (public and private) based in the EU, as well as to those based outside the EU that offer goods or services in the EU marketplace.
The GDPR is a cornerstone of the legislation that ensures digital privacy in the EU. As our lives become more entwined with digital platforms, digital privacy has never been more crucial. The digital landscape now involves navigating a complex web of data collection, surveillance, and cybersecurity challenges, and the more complex this web becomes, the more appealing automation and AI become to organisations. AI offers a way to reduce this burden of complexity by providing near-instant pattern recognition and decision support, alongside the benefits of automation, robotics, and supply-chain optimisation, to name just a few of its uses.
AI’s Influence on Data Privacy and Complexity
The threat to digital rights, however, is that AI systems need huge amounts of data to make precise decisions and to identify patterns; the more data they have, the more precise their results become. When AI uses personal data to make decisions about individuals, such as whether a person is creditworthy enough for a bank loan or a good fit for a job opening, the amount of personal information flowing through such systems is vast.
The Necessity of Human Oversight in AI Decisions
This opens up the possibility of harm to individuals if their personal data is breached by the system. It also puts individuals at a disadvantage when an AI system, rather than a human, makes a decision about them, because AI systems lack an understanding of context and nuance. This is why protections have been in place for many years to ensure that decisions made by automated and AI systems are subject to human oversight (Article 22 of the GDPR).
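As a minimal sketch of what this kind of human oversight might look like in practice, the Python snippet below routes a model’s automated recommendation through a human reviewer before any decision takes effect. The data fields, function names, and review flow are hypothetical illustrations, not a prescribed Article 22 compliance pattern.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class LoanRecommendation:
    """An automated system's suggested outcome for a loan application."""
    applicant_id: str
    approve: bool       # the model's suggested outcome
    confidence: float   # model confidence, between 0 and 1

def final_decision(rec: LoanRecommendation,
                   human_review: Callable[[LoanRecommendation], bool]) -> bool:
    """The automated recommendation is advisory only: a human reviewer
    confirms or overrides it before the decision becomes binding."""
    return human_review(rec)

def ask_reviewer(rec: LoanRecommendation) -> bool:
    """Stand-in reviewer that asks a person to confirm the outcome."""
    answer = input(
        f"Model suggests {'approve' if rec.approve else 'decline'} "
        f"(confidence {rec.confidence:.0%}) for applicant {rec.applicant_id}. "
        "Approve? [y/n] "
    )
    return answer.strip().lower() == "y"

if __name__ == "__main__":
    rec = LoanRecommendation(applicant_id="A-1024", approve=True, confidence=0.87)
    print("Approved" if final_decision(rec, ask_reviewer) else "Declined")
```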
Responsible AI Usage through Regulatory Foundations
Key areas of focus have come to light over the last few years, including transparency, accountability, and the protection of fundamental rights. This narrative around the regulation of AI in Europe sounds uncannily similar to the one seen a decade ago when Europe was working to put in place a comprehensive data protection regulation, namely the GDPR. Ensuring responsible technology use through transparency, extra-territorial scope, and risk assessments is foundational to both pieces of legislation.
Uniting GDPR Principles with EU AI Regulations
This overlap between the GDPR and the EU AI Act is intriguing because it demonstrates a shared commitment to upholding individual digital rights. Both address the need for transparent and accountable practices, and both seek to balance the potential benefits of AI (referred to as automated decision-making in the GDPR) with the protection of personal data. Each was the first of its kind globally to attempt to rein in the use of technology by large corporations in order to safeguard the rights and freedoms of individuals in the digital world. Together they show that Europe’s pursuit of digital rights for its citizens is not static but evolves with technological advancements, ensuring that its protections remain relevant and effective. They represent Europe’s continued commitment to protecting its citizens in the digital realm, setting a global example.
Understanding this synergy is essential to seeing how the two domains can work in tandem to fortify digital rights. These two legislative giants are vital building blocks in the effort to tame the digital realm. In an era where digital interactions are universal, such regulations ensure that individuals’ digital rights are not compromised in the pursuit of technological innovation and advancement.
The Future of Digital Rights in Europe
The overlap between digital privacy and the forthcoming EU AI Act holds the potential to significantly strengthen digital rights in Europe. By establishing clear rules for AI systems and ensuring their enforcement, the EU is taking yet another major step towards protecting the digital rights of Europeans and setting the scene for other countries to follow suit. As we saw with the GDPR, this will not happen overnight, but it is a very positive step.
As we move forward, it will be interesting to see how these regulations evolve and adapt to the rapidly changing digital landscape, not just in Europe but around the world.