Navigating Compliance: Key Overlapping Areas between the AI Act and GDPR

PrivacyEngine Consultants Daniel and Rúadhán

    This document is a summary written by Rúadhán Verbruggen and Daniel Whooley, both MSc Data Protection and Privacy: Computing and Law students and Consultants at PrivacyEngine.


    The Proposal for a Regulation laying down harmonised rules on Artificial Intelligence, better known as the AI Act, and the General Data Protection Regulation (GDPR) overlap in a number of areas. This paper breaks down the key sections within each piece of legislation that appear to regulate similar ground. The proposed AI Act relies on the same treaty basis as the GDPR and refers to it repeatedly, including to its principles, emphasising that they must be embedded in AI systems (Recital 45a of the AI Act).

    Key Overlapping Areas between the AI Act and GDPR

    Scope and Territorial Application

    Both Regulations apply beyond the EU’s borders. Article 2 of the AI Act gives it extraterritorial reach, mirroring the extraterritorial scope the GDPR asserts under its Article 3.

    Data Training and Compliance

    AI systems use an enormous amount of data to train their algorithms, and where that data includes personal data, the processing must comply with the GDPR. Users of AI that process personal data will therefore need a legal basis under Art. 6 and must meet the GDPR’s requirements on consent and the data protection principles under Art. 5(1), which have particular relevance to AI systems. Data minimisation, purpose limitation and storage limitation are all principles that AI developers must take into consideration when using personal data for AI. AI systems must be designed with data protection in mind from the outset rather than relegating those considerations to the final stages of a system’s creation or use.
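
    As an illustration of data minimisation and purpose limitation in a training pipeline, here is a minimal Python sketch; the field names, the salt handling and the choice of fields to keep are hypothetical and would depend on the actual system and its documented purpose.

```python
import hashlib

# Hypothetical raw training records; field names are illustrative only.
records = [
    {"user_id": "u-1001", "email": "ann@example.com", "age": 34, "query_text": "..."},
    {"user_id": "u-1002", "email": "bob@example.com", "age": 51, "query_text": "..."},
]

# Only the fields needed for the documented training purpose (purpose limitation).
REQUIRED_FIELDS = {"age", "query_text"}
SALT = b"rotate-and-store-securely"  # in practice, managed via a secrets store

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a salted hash (pseudonymisation, not anonymisation)."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:16]

def minimise(record: dict) -> dict:
    """Drop fields not needed for the purpose and pseudonymise the record key."""
    kept = {k: v for k, v in record.items() if k in REQUIRED_FIELDS}
    kept["subject_ref"] = pseudonymise(record["user_id"])  # retained so erasure requests can be honoured
    return kept

training_set = [minimise(r) for r in records]
print(training_set)
```

    Note that a salted hash remains personal data under the GDPR, because whoever holds the salt can re-link it; it reduces exposure but does not anonymise.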

    Privacy by Design

    The GDPR’s preventive measures, particularly those on privacy by design and by default, should not prevent the development of AI systems: systems should already be designed and implemented with privacy in mind, even though this may entail some additional cost. Once the AI Act is implemented, AI applications that present high risks will require a preventive data protection assessment, and possibly the preventive involvement of data protection authorities.

    Accountability and Data Accuracy

    In March 2023, the Italian Data Protection Authority blocked the deployment of ChatGPT in Italy, noting, amongst other matters, that the data it processed was frequently inaccurate. Based on ‘tests carried out so far’, it observed that ‘the information made available by ChatGPT does not always match factual circumstances, so inaccurate personal data are processed’. Understandably, then, the data input to and output from AI systems should be accurate and, where possible, made unidentifiable. A further related risk comes from the second clause of Article 5(1)(d): personal data shall be ‘kept up to date’. AI developers, implementers and users will need to understand that they are responsible, and will be held accountable, for the inaccuracies an AI system may output if it ‘hallucinates’ or if it answers a question based on incorrect training data.
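
    A hedged sketch of what the ‘kept up to date’ clause can mean operationally: flagging records whose accuracy has not been re-verified recently before they are reused as training input. The 180-day threshold and the field names are assumptions for illustration, not a legal standard.

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=180)  # illustrative re-verification interval, not a legal standard

# Hypothetical records carrying a last-verified timestamp.
records = [
    {"subject_ref": "a1b2", "address": "...", "last_verified": datetime(2023, 1, 10, tzinfo=timezone.utc)},
    {"subject_ref": "c3d4", "address": "...", "last_verified": datetime(2024, 6, 1, tzinfo=timezone.utc)},
]

def stale(record: dict, now: datetime) -> bool:
    """Flag data that may no longer be accurate under Art. 5(1)(d)."""
    return now - record["last_verified"] > MAX_AGE

now = datetime.now(timezone.utc)
for r in records:
    if stale(r, now):
        print(f"review before reuse in training: {r['subject_ref']}")
```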

    Challenges in Enforcing Data Rights with AI

    Given these considerations, navigating data rights under the GDPR in the context of generative AI models presents unique challenges, particularly for the rights of erasure, rectification, access and objection. Data protection authorities will have great difficulty ensuring data subject rights are upheld in relation to AI, but this point of contention may improve current processes by encouraging transparency and explainability in AI.
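
    To make the erasure challenge concrete, here is a minimal sketch of handling an Art. 17 request against a training data store. The in-memory store and the retraining flag are illustrative stand-ins for real infrastructure; the hard part, which the sketch deliberately surfaces, is that a model already trained on the data may still encode it.

```python
# Hypothetical training data store keyed by pseudonymous subject reference.
training_store = {
    "a1b2": {"age": 34, "query_text": "..."},
    "c3d4": {"age": 51, "query_text": "..."},
}
pending_retrain = False

def handle_erasure(subject_ref: str) -> None:
    """Remove a subject's records and mark the model for retraining or unlearning."""
    global pending_retrain
    if training_store.pop(subject_ref, None) is not None:
        # Deleting the source data is not the end of the story: the trained model
        # may still reproduce it, so schedule retraining or an unlearning procedure.
        pending_retrain = True

handle_erasure("a1b2")
print(training_store, pending_retrain)
```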

    Risk-Based Approach in AI Regulation

    The AI Act adopts a risk-based approach to AI systems, categorising them under the Act for compliance purposes. The risk posed to the fundamental rights and freedoms of the individual is likewise relevant under the GDPR regime, with Section 36 of the Data Protection Act citing ‘the nature, scope, context and purposes of processing, as well as the risks for data subjects’ rights and freedoms’ as the factors against which technical and organisational measures compliant with the GDPR must be implemented.
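
    The draft Act’s tiers are commonly summarised as unacceptable, high, limited and minimal risk. The sketch below maps those tiers to the headline obligations named in the Act; the keyword-based classifier is purely illustrative, since in practice the tier follows from Title II and Annex III, not from a lookup.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practices (Title II)"
    HIGH = "risk management, conformity assessment, logging (Title III)"
    LIMITED = "transparency obligations (Title IV)"
    MINIMAL = "voluntary codes of conduct (Title IX)"

def tier_for(use_case: str) -> RiskTier:
    """Illustrative only: real classification follows the Act, not keywords."""
    if "social scoring" in use_case:
        return RiskTier.UNACCEPTABLE
    if "recruitment" in use_case or "credit scoring" in use_case:
        return RiskTier.HIGH
    if "chatbot" in use_case:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(tier_for("CV screening for recruitment"))  # RiskTier.HIGH
```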

    Risk Management

    Article 9 of the AI Act regulates the establishment, implementation and documentation of a risk management system, which must be properly managed throughout the entire life cycle of high-risk AI technologies. Under the GDPR, Art. 35 requires data controllers to assess in advance the impact of a processing activity on the protection of personal data, a risk management assessment of its own. Data Protection Impact Assessments (DPIAs) can be closely aligned with best practice for AI systems, namely carrying out an Autonomous Decision-making Impact Assessment (ADIA).
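
    As a rough illustration of how a DPIA and an AI risk management system can share a common artefact, here is a sketch of a risk register entry as a data structure; the fields, scales and example risks are assumptions, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One line of a DPIA-style risk register for an AI system; fields are illustrative."""
    description: str
    likelihood: int   # 1 (rare) to 5 (almost certain)
    severity: int     # 1 (negligible) to 5 (severe impact on rights and freedoms)
    mitigations: list = field(default_factory=list)

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

register = [
    RiskEntry("Model output reveals training-set personal data", 2, 5,
              ["deduplicate training data", "output filtering"]),
    RiskEntry("Inaccurate personal data in responses", 4, 3,
              ["source verification", "user reporting channel"]),
]

# Reviewing the highest residual scores first echoes the life-cycle requirement of Art. 9.
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(entry.score, entry.description)
```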

    Penalties for Non-Compliance

    The GDPR and the AI Act provide similar fines, set at a fixed sum or a percentage of total turnover, whichever is higher. For minor infringements, fines of up to €10 million or 2 per cent of the total annual global turnover of the preceding financial year can be imposed. Fines rise to €20 million or 4 per cent of the total annual turnover of the preceding financial year for breaching other provisions of the AI Act and GDPR. The AI Act then adds a further layer for the most severe offences: up to €30 million or 6 per cent of the total annual turnover of the preceding financial year, again whichever is higher.
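
    The ‘whichever is higher’ mechanics reduce to one line of arithmetic. In the sketch below, the €2bn turnover figure is made up for illustration; the fixed sums and percentages are those quoted above.

```python
def fine_cap(fixed_eur: float, pct: float, annual_turnover_eur: float) -> float:
    """Both regimes cap fines at the higher of a fixed sum and a share of turnover."""
    return max(fixed_eur, pct * annual_turnover_eur)

turnover = 2_000_000_000  # hypothetical €2bn annual global turnover

print(fine_cap(10_000_000, 0.02, turnover))  # minor tier:       €40m (2% exceeds €10m)
print(fine_cap(20_000_000, 0.04, turnover))  # mid tier:         €80m
print(fine_cap(30_000_000, 0.06, turnover))  # top AI Act tier: €120m
```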

    Supervision and Enforcement Mechanisms

    As for supervision and enforcement, both regulations rely on a broadly similar oversight mechanism built on cooperation between European and national authorities to ensure consistent and effective application. The GDPR relies on entities such as the DPC, CNIL and ICO. The AI Act, once it becomes law, will require national entities to perform similar functions, and will likely see the same bodies carrying out enforcement and supervision under both Regulations.

    A question remains around the separate authorities required for AI and the GDPR: will the positions of the national AI authorities differ from those of the data protection authorities when it comes to enforcement and to guidance issued to companies on issues that overlap between the two pieces of legislation? The GDPR will likely be interpreted and applied in such a way that it does not prevent the application of AI to personal data and does not place EU companies at a disadvantage by comparison with non-European competitors.

    The AI Act acknowledges the risk of overlap, as shown by its direct references to the GDPR and its caveats that the AI Act is intended to complement the GDPR and other related legislation. The overlap between the AI Act and GDPR could be the starting point for harmonious legislation across the EU’s digital economy, but it will require further collaboration between national authorities and transparency from companies using AI systems.

    PrivacyEngine’s AI Focus

    The AI Act will require further measures to be introduced in relation to transparency (leaning towards ensuring AI is ‘explainable’) while also setting the foundation for accountable and responsible AI within the European Union. PrivacyEngine is helping companies that use AI prepare for the incoming Act by encouraging them to become certified in ISO/IEC 42001:2023, which relates to the trustworthiness of AI systems.

    The standard briefly surveys existing approaches that can support or improve trustworthiness in technical systems, discusses their potential application to AI systems, and covers approaches to mitigating AI system vulnerabilities that affect trustworthiness. PrivacyEngine will combine the knowledge gained from carrying out ISO certification with the new ISO/IEC 42001:2023 to prepare your company for the AI Act. The AI Act requires:

    • Transparency obligations under Title IV to ensure users can make informed choices when interacting with AI (illustrated in the sketch after this list).
    • Reporting obligations under Title VIII, which sets out monitoring and reporting duties for providers of AI systems: post-market monitoring and the reporting and investigation of AI-related incidents and malfunctioning.
    • Horizontal obligations referenced in Chapter 3 that developers, users and producers of AI will be required to adopt.
    • Codes of conduct created under Title IX, which aim to encourage providers of non-high-risk AI systems to apply voluntarily the mandatory requirements for high-risk AI systems.
    • Obligations under Title X for all parties to respect data confidentiality, with rules for exchanging information obtained during the regulation’s implementation.
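
    As a trivial illustration of the first of those obligations, the Title IV transparency duty is often summarised as telling people when they are interacting with an AI system. The wrapper below is a hypothetical sketch, not a compliance recipe; `generate_answer` stands in for the actual model call.

```python
AI_DISCLOSURE = "You are interacting with an AI system; responses may be inaccurate."

def generate_answer(message: str) -> str:
    return "(model output)"  # stand-in for the real model call

def reply(user_message: str, first_turn: bool) -> str:
    """Prepend the AI disclosure on the first turn of a conversation."""
    answer = generate_answer(user_message)
    return f"{AI_DISCLOSURE}\n\n{answer}" if first_turn else answer

print(reply("What does Title IV require?", first_turn=True))
```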

    Conclusion

    By achieving compliance with ISO/IEC 42001:2023 and ensuring that the company’s AI systems reflect the principles derived from the GDPR, your company will be taking the first steps towards compliance with the AI Act. It will then be a matter of watching whether the Regulation changes before it takes effect across Member States, and whether further AI systems are added to Annex III, the high-risk systems inventory referred to under Art. 6(2). This Annex can be amended as per Title XII.
