AI legal frameworks in the UK and the EU

Artificial Intelligence (AI) is transforming industries and societies across the globe, driving the need for robust legal frameworks to govern its use. In the United Kingdom (UK) and the European Union (EU), AI governance blends existing legislation covering data protection, consumer rights, and intellectual property with, in the EU, dedicated legislation such as the AI Act.

This article is intended as a high-level overview to provide a general understanding of the regulatory landscape and the differing strategic approaches between the UK and the EU.

United Kingdom

The UK government’s AI white paper, published in 2023, outlines its vision for AI regulation, guided by five cross-sectoral principles:

  • Safety, Security, and Robustness – ensuring AI systems operate reliably and mitigate risks.
  • Transparency and Explainability – requiring AI systems to be understandable to users and regulators.
  • Fairness – promoting equitable outcomes and preventing bias in AI systems.
  • Accountability and Governance – establishing clear roles and responsibilities for AI developers and users.
  • Contestability and Redress – ensuring mechanisms for users to challenge AI-driven decisions.

Instead of enacting new AI-specific legislation, the UK relies on existing regulators, such as the Information Commissioner’s Office (ICO), to enforce these principles within their domains.

European Union

The EU AI Act adopts a risk-based regulatory framework, classifying AI systems into four categories:

  • Unacceptable Risk – AI practices deemed harmful and prohibited, such as social scoring by public authorities or real-time biometric identification in public spaces (except for narrowly defined security purposes).
  • High Risk – includes AI systems used in critical areas such as healthcare, recruitment, and law enforcement. These are subject to stringent requirements, including risk assessments, transparency, and human oversight.
  • Limited Risk – includes applications like chatbots or recommendation systems, which are subject to lighter obligations such as transparency notices.
  • Minimal or No Risk – covers most AI systems, which remain largely unregulated.
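The four-tier model above lends itself to a simple triage step when inventorying AI systems. Here is a minimal sketch; the use-case labels and their tier assignments are illustrative examples only, not legal advice:

```python
# Illustrative mapping of example AI use cases to the EU AI Act's
# four risk tiers. The labels and assignments below are simplified
# examples for internal triage, not a legal determination.
RISK_TIERS = {
    "social_scoring": "unacceptable",
    "recruitment_screening": "high",
    "medical_diagnosis": "high",
    "customer_chatbot": "limited",
    "spam_filter": "minimal",
}

def classify_use_case(use_case: str) -> str:
    # Default to "unclassified" so unknown systems get reviewed
    # rather than silently treated as low risk.
    return RISK_TIERS.get(use_case, "unclassified")
```

Defaulting unknown systems to "unclassified" rather than "minimal" forces a human review, which matches the cautious posture the Act encourages.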

The EU AI Act complements the EU Ethics Guidelines for Trustworthy AI, emphasising human autonomy, fairness, and societal well-being. Violations can result in significant fines, mirroring the General Data Protection Regulation (GDPR) enforcement model.

Leveraging existing laws

Both the UK and EU leverage existing laws to address challenges associated with AI:

  • Data Protection Laws – the EU’s General Data Protection Regulation (GDPR) and the UK’s Data Protection Act 2018 govern the collection, processing, and storage of personal data, activities that are integral to AI systems. Compliance with these laws is essential for the responsible use of AI technology.
  • Consumer Protection Laws – AI-powered products and services must comply with laws like the EU’s Unfair Commercial Practices Directive and the UK’s Consumer Rights Act 2015 to prevent deceptive practices.
  • Intellectual Property (IP) Laws – the UK Copyright, Designs and Patents Act 1988 and the EU Copyright Directive govern this area, and the key challenges include:
    • Use of copyrighted materials to train AI models.
    • Ownership of AI-generated content.
    • Accountability if AI-generated content infringes on existing copyright.

AI regulation in the UK and EU reflects a balance between encouraging innovation and protecting fundamental rights. While the UK relies on flexible, principles-based governance, the EU has introduced a comprehensive legal framework through the AI Act. Both jurisdictions continue to address the ethical, legal, and societal challenges AI systems pose in a rapidly evolving technological landscape.

Privacy considerations and ISO 27701

ISO 27701 is an international standard that helps organisations manage and protect personally identifiable information (PII). It builds upon ISO 27001 – Information Security Management Systems (ISMS) by providing specific guidance on implementing, maintaining, and continually improving a Privacy Information Management System (PIMS).

Having recently reviewed the standard and the options for extending an ISMS to include a PIMS, and while considering working towards the ISACA Certified Data Privacy Solutions Engineer (CDPSE) qualification, I have put together a selection of issues that could severely impact privacy. This article is not intended to be exhaustive; rather, it provides an overview of things to consider when implementing the standard, conducting privacy-related audits, or complying with other legislation such as the General Data Protection Regulation (GDPR).

Inadequate incident response

A poorly defined or inadequate incident response plan can lead to delayed or improper handling of privacy breaches, escalating the damage and complicating recovery efforts. An ineffective response can erode trust and result in significant legal and financial repercussions. Swift and efficient incident response is critical for mitigating the impact of privacy breaches. Countermeasures include:

  • Create and maintain an adequate incident response plan.
  • Conduct regular drills to ensure all employees understand their roles and responsibilities.
  • Update the incident response plan to reflect current threats.

Third-party risks

Failing to conduct third-party due diligence properly can lead to privacy breaches. If third parties do not adhere to the same privacy standards, they can become a weak link in data protection. Uncontrolled third-party access can expose sensitive information to significant risks. Countermeasures include:

  • Conduct thorough due diligence and regular audits of third-party vendors.
  • Include strong privacy and data protection clauses in contracts with third parties.
  • Continuously monitor and assess third-party practices.

Insufficient access controls

Granting employees unnecessary access to sensitive information increases the risk of data breaches. Excessive permissions can lead to both accidental and malicious data misuse. Unrestricted access can result in significant vulnerabilities and data privacy issues. Countermeasures include:

  • Implement the principle of least privilege, ensuring employees have access only to the data necessary for their role.
  • Regularly review and adjust access controls based on changes in employee roles.
  • Use role-based access control (RBAC) to manage permissions.
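The least-privilege and RBAC points above can be illustrated with a minimal sketch. The roles and permissions here are hypothetical placeholders, not from any specific product:

```python
# Minimal role-based access control (RBAC) sketch.
# Role and permission names are illustrative only.
ROLE_PERMISSIONS = {
    "hr_officer": {"read_employee_records"},
    "payroll_admin": {"read_employee_records", "read_salary_data"},
    "support_agent": {"read_ticket_data"},
}

def has_permission(role: str, permission: str) -> bool:
    # Least privilege: access is granted only if the role explicitly
    # includes the permission; everything else is denied by default.
    return permission in ROLE_PERMISSIONS.get(role, set())

print(has_permission("payroll_admin", "read_salary_data"))  # True
print(has_permission("support_agent", "read_salary_data"))  # False
```

The deny-by-default lookup is the key design choice: an unknown role or permission never grants access accidentally.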

Unsecured transmission

Employees might bypass encryption and other data protection measures to complete urgent tasks. This oversight often stems from cumbersome technology, recipients unable to handle encrypted messages, or insufficient training on secure data transmission. Unsecured data transmission increases the risk of interception by unauthorised parties, potentially leading to data breaches. Countermeasures include:

  • Conduct regular training and support.
  • Ensure privacy protection technologies are user-friendly.
  • Provide comprehensive support documentation to help resolve common problems.
  • Clearly define responsibilities and accountabilities.

Weak password practices

Using weak or shared passwords among team members increases the risk of unauthorised access. Password reuse across multiple platforms exacerbates this vulnerability. Weak password practices are a common entry point for cyberattacks, compromising data security. Countermeasures include:

  • Implement a policy that requires complex passwords.
  • Use multi-factor authentication (MFA) to add an extra layer of security.
  • Provide training on password practices.
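A complexity check along the lines of the policy above might look like the following sketch. The thresholds are illustrative; align them with your own policy and with current guidance, which tends to favour length over forced character classes:

```python
import re

def meets_policy(password: str, min_length: int = 12) -> bool:
    # Example policy: minimum length plus at least one lowercase
    # letter, one uppercase letter, and one digit. These rules are
    # illustrative, not a recommendation for any specific standard.
    return (
        len(password) >= min_length
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"\d", password) is not None
    )
```

Such a check belongs alongside, not instead of, MFA and breached-password screening.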

Lack of regular audits

Businesses may fail to identify vulnerabilities or comply with privacy policies without regular audits of privacy practices and data handling processes. This oversight can lead to significant privacy breaches and regulatory penalties. Regular audits are essential for maintaining data security and compliance. Countermeasures include:

  • Perform regular audits of privacy practices and data handling processes.
  • Use audits to ensure compliance with privacy policies and regulations.
  • Proactively identify and address potential risks through audits.

Neglecting data retention and disposal policies

Failing to comply with retention policies increases exposure in a data breach. Employees might leave sensitive documents unsecured or neglect to wipe data from old devices, leading to significant privacy breaches if the data falls into the wrong hands. Countermeasures include:

  • Develop and enforce policies for secure data disposal.
  • Ensure that you shred, wipe, or render irretrievable all sensitive information before disposal of equipment.
  • Conduct regular audits and provide training on proper data disposal practices.

Sending files to incorrect recipients

One of the most prevalent issues is the accidental transmission of sensitive data to the wrong email addresses. Email software that auto-adds addresses from previous contacts increases the likelihood of such errors, usually discovered only after the fact. This mistake can result in unauthorised individuals accessing sensitive information, leading to significant privacy breaches. Countermeasures include:

  • Encourage employees to double-check recipient addresses.
  • Implement email verification steps before sending to unfamiliar addresses.
  • Use email prompts for confirmation.
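The verification step above can be as simple as flagging recipients outside a set of trusted domains before a message leaves the organisation. A minimal sketch, with a placeholder domain list:

```python
def flag_external_recipients(recipients, trusted_domains):
    # Return the recipients whose domain is not on the trusted list,
    # so the sender can confirm them before sending. The trusted
    # domain set here is an illustrative placeholder.
    flagged = []
    for address in recipients:
        domain = address.rsplit("@", 1)[-1].lower()
        if domain not in trusted_domains:
            flagged.append(address)
    return flagged
```

Mail gateways and clients often offer this as a built-in warning; the point is to interrupt the habit of hitting send on an auto-completed address.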

Social engineering attacks

Employees may become victims of social engineering attacks, such as phishing, which can lead to the inadvertent disclosure of sensitive information. Social engineering exploits human psychology to bypass technical security measures. These attacks can significantly compromise data privacy. Countermeasures include:

  • Provide regular training on recognising and responding to social engineering threats.
  • Implement multi-factor authentication and email filtering.
  • Improve awareness and vigilance among employees to defend against social engineering.

Lack of privacy by design

Not incorporating privacy considerations into designing new systems, products, or processes can lead to vulnerabilities and compliance issues. Overlooking privacy in the development stage can result in significant risks and challenges. Privacy should be a foundational element in all business systems. Countermeasures include:

  • Integrate privacy by design principles into project management and development processes.
  • Conduct privacy impact assessments during the early stages of any new initiative.
  • Ensure privacy is built into systems and processes to prevent future issues.

Collecting too much data

Despite clear privacy policies, employees may forget the specifics amidst their busy schedules. If employees collect more data than necessary, it risks privacy incidents and potential legal repercussions for not adhering to the company’s privacy commitments. Over-collection can lead to storing unnecessary data, increasing the risk if this data is compromised. Countermeasures include:

  • Educate employees on the principle of data minimisation.
  • Encourage the use of internal identifiers instead of government IDs.
  • Implement techniques like truncating, masking, or scrambling data.
  • Provide regular reminders and training on data minimisation.
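The truncating and masking techniques mentioned above can be sketched as follows; the field formats are illustrative assumptions:

```python
def mask_card_number(pan: str) -> str:
    # Keep only the last four digits visible; everything else is
    # replaced with asterisks. Assumes a space-separated or plain
    # digit string for illustration.
    digits = pan.replace(" ", "")
    return "*" * (len(digits) - 4) + digits[-4:]

def truncate_dob_to_year(dob_iso: str) -> str:
    # Retain only the year when the full date of birth is not needed
    # for the stated purpose. Assumes an ISO 8601 date (YYYY-MM-DD).
    return dob_iso.split("-", 1)[0]
```

Minimising at the point of capture like this means there is simply less sensitive data to protect, audit, and eventually dispose of.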

I have lost count of the number of firms that have asked me for my date of birth when there is no legitimate need for them to know or store such information. Some businesses even ask people to confirm their date of birth when they don’t already have it so they can add it to their records.

Inconsistent business processes

Rapid business responses can lead to changes not being communicated, resulting in processes not aligning with documented privacy policies and exposing the company to legal and civil actions and operational risks. Unvetted changes can lead to significant vulnerabilities and compliance issues. Countermeasures include:

  • Establish a robust change control process, including privacy impact assessments.
  • Document all changes in a central repository.
  • Ensure all changes are vetted and documented to maintain alignment with privacy policies.

Being overly helpful

Employees often go above and beyond to meet clients’ needs, sometimes sharing more personal information than necessary. This well-meaning behaviour can expose sensitive data to unauthorised individuals. Without proper guidelines, employees might not recognise the limits of information sharing, inadvertently causing privacy breaches. Countermeasures include:

  • Provide continuous and targeted privacy training.
  • Conduct follow-up sessions and periodic knowledge checks.
  • Ensure employees are aware of what information is appropriate to share.

Multitasking

Juggling multiple system windows heightens the risk of privacy incidents. Employees might enter data into the wrong screen, leading to incorrect data transmissions. This error is often due to distraction or confusion, increasing the likelihood of privacy breaches. Countermeasures include:

  • Encourage focused work practices and limit multitasking.
  • Implement system controls that highlight or lock fields for sensitive data.
  • Establish mindful data handling.

Employee turnover and onboarding

High employee turnover can lead to lapses in privacy training and knowledge transfer. This gap can result in an increased risk of privacy incidents and non-compliance. Countermeasures include:

  • Ensure comprehensive privacy training during staff onboarding.
  • Conduct regular refresher courses to maintain knowledge continuity.
  • Maintain up-to-date documentation and resources for employees to reference.

Building a privacy culture

Addressing privacy risks is a continuous effort that requires a team-wide commitment across the business. Collaboration among various business units is essential to build strong relationships, identify privacy challenges, and develop training and practical resources. In response to incidents, it is critical to assess control failures to minimise the likelihood of future occurrences. Please remember that improving privacy controls is a continuous journey, not a destination.

Although I am currently focusing on integrating ISO 42001 into ISO 27001, I see a more long-term strategy that includes ISO 27001, ISO 42001, and ISO 27701 working together as a combined management system.

Facing AI challenges

In a previous article, I mentioned the need to conduct Artificial Intelligence Impact Assessments (AIIA). Businesses are accountable for their use of AI, even when it is developed by third parties. Ethical standards and legal requirements are evolving in this direction, and the requirements include:

  • Essential due diligence when purchasing new software.
  • Maintaining an inventory of software and its use of AI.
  • Understanding how the AI works and its impact on stakeholders.
  • Being responsible for the outcomes.

These are core requirements for the implementation of ISO 42001. AI can impact individuals, groups of individuals, and society as a whole. The following explores the impact in more detail.

AI can disrupt personal lives in ways that raise ethical, psychological, and practical concerns. These issues often stem from the misuse of data and lack of transparency. One of my concerns is that with AI, “Computer says NO” is quickly becoming “Computer says NO on steroids”.

  • Many systems collect and process personal data without user consent, and even with widespread privacy legislation, this has still led to a lack of control over personal information. AI only exacerbates this.
  • Automation continues to replace manufacturing, retail, and transportation jobs, leaving many individuals without employment or requiring them to acquire new skills in a rapidly changing economy.
  • Personal decisions, such as loan approvals or job applications, may be influenced by biased AI systems that unfairly favour specific demographics and perpetuate existing discrimination.
  • Dependence on AI for everyday tasks, such as navigation, time management, drafting documents, and communication, may reduce critical thinking and problem-solving skills over time. Here is a quote from Star Trek: Insurrection – “We believe that when you create a machine to do the work of a man, you take something away from the man.”

Under GDPR, people have the right not to be subjected to decisions based solely on automated processing, the right to a meaningful explanation of any decisions made, and the right to contest the outcome. These rights extend to the use of artificial intelligence and its impact on individual rights. The response to a request for a meaningful explanation is often limited to “we put the information into the computer and it gives us the answer”. Emerging AI legislation, including the EU AI Act, strengthens these rights.

AI poses profound challenges to society due to its scale and potential misuse: 

  • AI-generated fake images, videos, and news erode trust in media, businesses, institutions, and elections.
  • Governments and corporations increasingly use AI for mass surveillance, raising ethical issues around civil liberties and human rights, such as discriminatory misuse of facial recognition.
  • Automation can disproportionately impact low-skilled workers, leading to unemployment and widening economic gaps as industries like retail and manufacturing transform rapidly.
  • AI decisions in critical areas such as healthcare and law enforcement prompt questions about responsibility when errors or harm occur. The EU AI Act includes an unacceptable risk category for banned use and a high-risk category with increased safeguarding requirements.

Adopting AI in business brings ethical, compliance, and operational risks, and a failure to address these can lead to financial, reputational, or legal repercussions: 

  • AI systems can be vulnerable to hacking, adversarial attacks, and data breaches like any other software system. Manipulated AI outputs can compromise decision-making and business operations. 
  • AI-powered recruitment, lending, and advertising tools may perpetuate biases, exposing businesses to reputational damage and legal liabilities. 
  • Evolving laws, like the EU AI Act, require legal and technical expertise. Failing to comply risks fines and operational disruptions.
  • Misuse or failures in AI can harm trust. Biased recommendations or faulty product suggestions can alienate customers and damage corporate brands. 
  • Heavy reliance on AI systems risks operational disruptions from bugs, data inaccuracies, or cyberattacks. Businesses must maintain contingency plans that cover operating without AI and other critical software.
  • AI requires significant investment in infrastructure, training, and maintenance, often challenging businesses to demonstrate a return on investment. 
  • AI-generated content raises questions of ownership and copyright, creating potential disputes over AI-driven designs or innovations. 

Adopting AI-based software requires a clear understanding of its impacts to ensure responsible use and to avoid biases, privacy issues, or harmful inaccuracies, raising ethical and accountability concerns. The societal and economic effects, like job loss and trust erosion, highlight the need for proactive risk management. Considering these implications, businesses and policymakers can create systems that balance innovation with fairness and security.

Avoiding the risks of Shadow AI

As AI becomes integral to modern business operations, many businesses must decide whether to embrace or restrict its use. While concerns about data security, compliance, and ethical implications often lead to restrictive AI policies, these well-meaning efforts can inadvertently cause the rise of Shadow AI. As an Information Security professional with ISACA qualifications, my AI governance journey is about more than just learning about AI; it is about integrating new knowledge with existing security, risk, and audit expertise.

Shadow AI refers to using unapproved AI tools and applications within an organisation. Employees often turn to these tools when official policies restrict AI use, perceiving them as necessary for improving efficiency or achieving work goals. While these tools may offer short-term productivity gains, their unauthorised nature introduces a range of risks that can jeopardise security, compliance, and operational integrity. Risks associated with shadow AI include:

  • Data security vulnerabilities
    • Shadow AI tools often lack the rigorous security protocols organisations require, creating vulnerabilities that cybercriminals can exploit.
    • Unauthorised tools may store sensitive information in unprotected environments, exposing businesses to data leaks and intellectual property theft.
    • These tools may store data in jurisdictions that do not align with regional data privacy laws, such as GDPR, leading to potential legal liabilities.
  • Compliance and legal risks
    • Shadow AI tools may not comply with industry-specific regulations, increasing the risk of fines and legal consequences.
    • Employees might inadvertently share confidential or sensitive information with third-party AI tools, violating privacy policies and agreements.
  • Operational inefficiencies
    • Shadow AI creates fragmented workflows, as different teams may use disparate tools for the same tasks, leading to inefficiencies and errors.
    • Unapproved tools rarely align with existing systems, causing data silos and disrupting business processes.
    • If Shadow AI tools are detected and abruptly banned, it can interrupt workflows that have become reliant on these tools.
  • Data integrity challenges
    • Unauthorised tools may produce inconsistent or biased results, compromising decision-making processes.
    • Decentralised data processing complicates maintaining data integrity and consistency across the organisation.
  • Ethical and bias concerns
    • Shadow AI tools may perpetuate biases inherent in their training data without proper oversight, leading to unfair or unethical outcomes.
    • Organisations may lack visibility into how Shadow AI tools make decisions, raising ethical questions and damaging trust.
  • Strategic and cultural risks
    • Shadow AI initiatives may not align with the organisation’s broader strategic objectives, leading to wasted resources.
    • The proliferation of Shadow AI can foster a culture where employees bypass policy, eroding trust and accountability.
    • Shadow AI undermines efforts to create a unified, strategic approach to AI adoption, with employees pulling in different directions.

Preventing Shadow AI requires proactive measures that balance employee needs with organisational goals. Here are some thoughts on how to mitigate the risks:

  • AI policy implementation
    • Avoid outright bans on AI tools.
    • Develop a policy that allows the controlled use of vetted and approved AI applications.
    • Set clear guidelines on how to use AI tools, ensuring compliance with security, privacy, and ethical standards.
  • Employee education
    • Conduct training sessions to inform employees about the risks of Shadow AI, including its impact on security, compliance, and operations.
    • Provide education on approved AI tools and their benefits, ensuring employees understand the value of adhering to company policies.
  • Monitoring and auditing
    • Implement monitoring systems to detect unauthorised AI tools. Regular audits can identify potential Shadow AI activity before it escalates.
    • Encourage transparency by providing a straightforward process for employees to request approval for new AI tools.
  • Encourage open communication
    • Create an environment where employees feel comfortable discussing their AI needs and challenges.
    • Collaborate with teams to identify tools that improve workflows while meeting organisational standards.
  • Alignment with business objectives
    • Integrate AI into the broader business strategy, ensuring its adoption supports long-term goals.
    • Develop a unified approach to AI implementation, balancing innovation with security and compliance.
  • Incentivise compliance
    • Recognise and reward teams for adhering to approved AI policies.
    • Make it easy for employees to access and use authorised AI tools, minimising the temptation to turn to Shadow AI alternatives.
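The monitoring and auditing idea above can be sketched as a pass over web proxy logs, flagging traffic to known AI services that are not on the approved list. The log format and both domain lists here are hypothetical; maintain your own from vendor approvals and threat intelligence:

```python
# Sketch of a Shadow AI detection pass over web proxy logs.
# Domain lists and the log format are illustrative assumptions.
KNOWN_AI_DOMAINS = {"chat.example-ai.com", "api.example-llm.net"}
APPROVED_AI_DOMAINS = {"api.example-llm.net"}

def find_shadow_ai_usage(log_lines):
    # Return (user, domain) pairs where traffic went to a known AI
    # service that is not on the approved list. Assumes each log
    # line begins "user domain ..." for simplicity.
    hits = []
    for line in log_lines:
        user, domain = line.split()[:2]
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            hits.append((user, domain))
    return hits
```

Findings from a pass like this are best used to open a conversation about approving the tool, not only to block it, which reinforces the transparent request process described above.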

Businesses should adopt a balanced approach that embraces AI to improve productivity while managing the risks. Key benefits of this approach include:

  • Employees can openly leverage approved AI tools to streamline workflows and make data-driven decisions.
  • Vetted tools reduce vulnerabilities and ensure compliance with regulatory requirements.
  • Allowing employees to use authorised AI tools encourages skill development and positions the business as a leader in AI adoption.
  • A controlled approach ensures AI use aligns with business objectives, minimising fragmentation and inefficiencies.
  • Reduce the need for employees to feel that they must use AI tools in secret.

Shadow AI is a growing risk in workplaces that either explicitly restrict AI or fail to adopt it proactively. By implementing an AI policy, providing education and transparency, and aligning AI initiatives with strategic goals, businesses can avoid the risks of Shadow AI while harnessing its full potential.