AI legal frameworks in the UK and the EU

Artificial Intelligence (AI) is transforming industries and societies across the globe, driving the need for robust legal frameworks to govern its use. In the United Kingdom (UK) and the European Union (EU), AI governance blends existing legislation covering data protection, consumer rights, and intellectual property with, in the EU's case, dedicated legislative initiatives such as the EU AI Act.

This article is intended as a high-level overview, providing a general understanding of the regulatory landscape and the differing strategic approaches of the UK and the EU.

United Kingdom

The UK government’s AI white paper, “A pro-innovation approach to AI regulation”, published in March 2023, outlines its vision for AI regulation, guided by five cross-sectoral principles:

  • Safety, Security, and Robustness – Ensuring AI systems operate reliably and mitigate risks.
  • Transparency and Explainability – Ensuring AI systems are understandable to users and regulators.
  • Fairness – Promoting equitable outcomes and preventing bias in AI systems.
  • Accountability and Governance – Establishing clear roles and responsibilities for AI developers and users.
  • Contestability and Redress – Ensuring mechanisms exist for users to challenge AI-driven decisions.

Instead of enacting new AI-specific legislation, the UK relies on existing regulators, such as the Information Commissioner’s Office (ICO), to enforce these principles within their domains.

European Union

The EU AI Act adopts a risk-based regulatory framework, classifying AI systems into four categories (illustrated in the sketch after this list):

  • Unacceptable Risk – AI practices deemed harmful and prohibited, such as social scoring by public authorities or real-time biometric identification in public spaces (except for narrowly defined security purposes).
  • High Risk – AI systems used in critical areas such as healthcare, recruitment, and law enforcement. These are subject to stringent requirements, including risk assessments, transparency, and human oversight.
  • Limited Risk – applications such as chatbots or recommendation systems, subject to minimal obligations such as transparency notices.
  • Minimal or No Risk – the vast majority of AI systems, which remain largely unregulated.
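
To make the four-tier classification concrete, here is a minimal sketch of how an organisation might record its AI systems against the Act's categories in an internal inventory. The tier names mirror the Act, but the example systems and their assignments are illustrative assumptions, not legal determinations.

```python
from enum import Enum

class AIActRiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "Unacceptable risk - prohibited practice"
    HIGH = "High risk - stringent requirements apply"
    LIMITED = "Limited risk - transparency obligations"
    MINIMAL = "Minimal or no risk - largely unregulated"

# Illustrative assignments only; real classification needs legal review.
ai_inventory = {
    "CV screening tool used in recruitment": AIActRiskTier.HIGH,
    "Customer service chatbot": AIActRiskTier.LIMITED,
    "Spam filter on the email gateway": AIActRiskTier.MINIMAL,
}

for system, tier in ai_inventory.items():
    print(f"{system}: {tier.value}")
```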

The EU AI Act complements the EU Ethics Guidelines for Trustworthy AI, emphasising human autonomy, fairness, and societal well-being. Violations can result in significant fines, mirroring the General Data Protection Regulation (GDPR) enforcement model.

Leveraging existing laws

Both the UK and EU leverage existing laws to address challenges associated with AI:

  • Data Protection Laws – the EU’s GDPR and the UK’s Data Protection Act 2018 govern personal data collection, processing, and storage, activities that are integral to AI systems. Compliance with these laws is essential for the responsible use of AI technology.
  • Consumer Protection Laws – AI-powered products and services must comply with laws like the EU’s Unfair Commercial Practices Directive and the UK’s Consumer Rights Act 2015 to prevent deceptive practices.
  • Intellectual Property (IP) Laws – this area is governed by the UK Copyright, Designs and Patents Act 1988 and the EU Copyright Directive. Key challenges include:
    • Use of copyrighted materials to train AI models.
    • Ownership of AI-generated content.
    • Accountability if AI-generated content infringes on existing copyright.

As AI continues to evolve, the regulatory approaches of the UK and EU will shape not only compliance expectations but also how innovation unfolds within ethical and legal boundaries. While the UK relies on a flexible, principles-based model, the EU has introduced a comprehensive legal framework through the EU AI Act. Both regions aim to balance innovation with the protection of fundamental rights, addressing the ethical, legal, and societal challenges posed by emerging AI technologies.

Facing AI challenges

In a previous article, I mentioned the need to conduct Artificial Intelligence Impact Assessments (AIIA). Businesses are accountable for their use of AI, even when the AI is developed by third parties. Ethical standards and legal requirements are evolving in this direction, and meeting them includes:

  • Essential due diligence when purchasing new software.
  • Maintaining an inventory of software and its use of AI (a minimal inventory record is sketched after this list).
  • Understanding how the AI works and its impact on stakeholders.
  • Being responsible for the outcomes.
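
As a minimal sketch of what an AI inventory record might capture, the example below assumes a handful of useful fields (vendor, purpose, personal data, stakeholders, assessment status, owner); it is not a prescribed ISO 42001 schema.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in an internal inventory of software that uses AI."""
    name: str
    vendor: str                            # third party or "in-house"
    business_purpose: str
    personal_data_used: bool
    affected_stakeholders: list[str] = field(default_factory=list)
    impact_assessment_done: bool = False   # has an AIIA been completed?
    owner: str = ""                        # who is accountable for outcomes

inventory = [
    AISystemRecord(
        name="Invoice-matching assistant",
        vendor="ExampleVendor Ltd",        # hypothetical supplier
        business_purpose="Accounts payable automation",
        personal_data_used=True,
        affected_stakeholders=["suppliers", "finance team"],
        impact_assessment_done=False,
        owner="Head of Finance",
    ),
]

# Flag systems that still need an AI impact assessment.
for record in inventory:
    if not record.impact_assessment_done:
        print(f"AIIA outstanding: {record.name} ({record.vendor})")
```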

These are core requirements for the implementation of ISO 42001. AI can impact individuals, groups of individuals, and society as a whole. The following explores the impact in more detail.

AI can disrupt personal lives in ways that raise ethical, psychological, and practical concerns. These issues often stem from the misuse of data and a lack of transparency. As I mentioned in an earlier article, the risk is that AI turns ‘Computer says NO’ into ‘Computer says NO on steroids’: automated decisions made without explanation or empathy, and with limited recourse.

  • Many systems collect and process personal data without user consent; even with widespread privacy legislation, individuals still lack control over their personal information, and AI only exacerbates this.
  • Automation continues to replace manufacturing, retail, and transportation jobs, leaving many individuals without employment or requiring them to acquire new skills in a rapidly changing economy.
  • Personal decisions, such as loan approvals or job applications, may be influenced by biased AI systems that unfairly favour specific demographics and perpetuate existing discrimination.
  • Dependence on AI for everyday tasks, such as navigation, time management, drafting of documents and communication, may reduce critical thinking and problem-solving skills over time. “We believe that when you create a machine to do the work of a man, you take something away from the man.” – Star Trek: Insurrection

Under GDPR, people have the right not to be subject to decisions based solely on automated processing, the right to a meaningful explanation of any decisions made, and the right to contest the outcome. These rights extend to the use of artificial intelligence and its impact on individuals. In practice, the response to a request for a meaningful explanation is often limited to “we put the information into the computer and it gives us the answer”. Emerging AI legislation, including the EU AI Act, strengthens these rights.

AI poses profound challenges to society due to its scale and potential misuse:

  • AI-generated fake images, videos, and news erode trust in media, businesses, institutions, and elections.
  • Governments and corporations increasingly use AI for mass surveillance, raising ethical issues around civil liberties and human rights, such as discriminatory misuse of facial recognition.
  • Automation can disproportionately impact low-skilled workers, leading to unemployment and widening economic gaps as industries like retail and manufacturing transform rapidly. Businesses need to anticipate these changes and develop strategies for workforce transition, reskilling, and support.
  • AI decisions in critical areas such as healthcare and law enforcement prompt questions about responsibility when errors or harm occur. The EU AI Act includes an unacceptable risk category for banned use and a high-risk category with increased safeguarding requirements.

Adopting AI in business brings ethical, compliance, and operational risks, and a failure to address these can lead to financial, reputational, or legal repercussions:

  • AI systems can be vulnerable to hacking, adversarial attacks, and data breaches like any other software system. Manipulated AI outputs can compromise decision-making and business operations.
  • AI-powered recruitment, lending, and advertising tools may perpetuate biases, exposing businesses to reputational damage and legal liabilities.
  • Evolving laws, like the EU AI Act, require legal and technical expertise. Failing to comply risks fines and operational disruptions.
  • Misuse or failures in AI can harm trust. Biased recommendations or faulty product suggestions can alienate customers and damage corporate brands.
  • Heavy reliance on AI systems risks operational disruptions from bugs, data inaccuracies, or cyberattacks. Businesses must maintain contingency plans that cover AI systems as well as other software.
  • AI requires significant investment in infrastructure, training, and maintenance, often challenging businesses to demonstrate a return on investment.
  • AI-generated content raises questions of ownership and copyright, creating potential disputes over AI-driven designs or innovations.

Adopting AI-based software requires a clear understanding of its impacts to ensure responsible use and to avoid biases, privacy issues, and harmful inaccuracies, all of which raise ethical and accountability concerns. The societal and economic effects, such as job loss and the erosion of trust, highlight the need for proactive risk management.

By acknowledging these challenges, businesses and policymakers can shape AI systems that balance innovation with fairness, accountability, and long-term trust.

Avoiding the risks of Shadow AI

As AI becomes integral to business operations, organisations must decide whether to embrace it or impose restrictions. While concerns about data security, compliance, and ethical implications often lead to restrictive AI policies, these well-meaning efforts can inadvertently contribute to the rise of Shadow AI. As an Information Security professional with ISACA qualifications, I see my AI governance journey as more than just learning about AI; it is about integrating new knowledge with existing security, risk, and audit expertise.

Shadow AI refers to using unapproved AI tools and applications within an organisation. Employees often turn to these tools when official policies restrict AI use, perceiving them as necessary for improving efficiency or achieving work goals. While these tools may offer short-term productivity gains, their unauthorised nature introduces a range of risks that can jeopardise security, compliance, and operational integrity. Risks associated with shadow AI include:

  • Data security vulnerabilities
    • Shadow AI tools often lack the rigorous security protocols organisations require, creating vulnerabilities that cybercriminals can exploit.
    • Unauthorised tools may store sensitive information in unprotected environments, exposing businesses to data leaks and intellectual property theft.
    • These tools may store data in jurisdictions that do not align with regional data privacy laws, such as GDPR, leading to potential legal liabilities.
  • Compliance and legal risks
    • Shadow AI tools may not comply with industry-specific regulations, increasing the risk of fines and legal consequences.
    • Employees might inadvertently share confidential or sensitive information with third-party AI tools, violating privacy policies and agreements.
  • Operational inefficiencies
    • Shadow AI creates fragmented workflows, as different teams may use disparate tools for the same tasks, leading to inefficiencies and errors.
    • Unapproved tools rarely align with existing systems, causing data silos and disrupting business processes.
    • If Shadow AI tools are detected and abruptly banned, workflows that have become reliant on them can be disrupted.
  • Data integrity challenges
    • Unauthorised tools may produce inconsistent or biased results, compromising decision-making processes.
    • Decentralised data processing complicates maintaining data integrity and consistency across the organisation.
  • Ethical and bias concerns
    • Without proper oversight, Shadow AI tools may perpetuate biases inherent in their training data, leading to unfair or unethical outcomes.
    • Organisations may lack visibility into how Shadow AI tools make decisions, raising ethical questions and damaging trust.
  • Strategic and cultural risks
    • Shadow AI initiatives may not align with the organisation’s broader strategic objectives, leading to wasted resources.
    • The proliferation of Shadow AI can foster a culture where employees bypass policy, eroding trust and accountability.
    • Shadow AI undermines efforts to create a unified, strategic approach to AI adoption, with employees pulling in different directions.

Preventing Shadow AI requires proactive measures that balance employee needs with organisational goals. Here are some thoughts on how to mitigate the risks:

  • AI policy implementation
    • Avoid outright bans on AI tools.
    • Develop a policy that allows the controlled use of vetted and approved AI applications.
    • Set clear guidelines on how to use AI tools, ensuring compliance with security, privacy, and ethical standards.
  • Employee education
    • Conduct training sessions to inform employees about the risks of Shadow AI, including its impact on security, compliance, and operations.
    • Provide education on approved AI tools and their benefits, ensuring employees understand the value of adhering to company policies.
  • Monitoring and auditing
    • Implement monitoring systems to detect unauthorised AI tools (a simple log-scanning sketch follows this list). Regular audits can identify potential Shadow AI activity before it escalates.
    • Encourage transparency by providing a straightforward process for employees to request approval for new AI tools.
  • Encourage open communication
    • Create an environment where employees feel comfortable discussing their AI needs and challenges.
    • Collaborate with teams to identify tools that improve workflows while meeting organisational standards.
  • Alignment with business objectives
    • Integrate AI into the broader business strategy, ensuring its adoption supports long-term goals.
    • Develop a unified approach to AI implementation, balancing innovation with security and compliance.
  • Incentivise compliance
    • Recognise and reward teams for adhering to approved AI policies.
    • Make it easy for employees to access and use authorised AI tools, minimising the temptation to turn to Shadow AI alternatives.
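
As one illustration of the monitoring and auditing point above, the sketch below scans a proxy or DNS log export for domains associated with public AI tools. The file name, the column names, and the domain list are assumptions for the sketch; a real deployment would rely on the organisation's own logging and its approved-tool list.

```python
import csv
from collections import Counter

# Assumed domains of public AI tools not on the approved list (illustrative).
UNAPPROVED_AI_DOMAINS = {"chat.example-ai.com", "api.example-llm.io"}

def find_shadow_ai_usage(log_path: str) -> Counter:
    """Count requests to unapproved AI domains in a CSV proxy log.

    Assumes columns named 'user' and 'destination_host'.
    """
    hits = Counter()
    with open(log_path, newline="") as handle:
        for row in csv.DictReader(handle):
            if row["destination_host"] in UNAPPROVED_AI_DOMAINS:
                hits[(row["user"], row["destination_host"])] += 1
    return hits

if __name__ == "__main__":
    # "proxy_log.csv" is a placeholder for the organisation's own log export.
    for (user, host), count in find_shadow_ai_usage("proxy_log.csv").items():
        print(f"{user} accessed {host} {count} times - review with the user")
```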

Businesses should adopt a balanced approach that embraces AI to improve productivity while managing the risks. Key benefits of this approach include:

  • Employees can openly leverage approved AI tools to streamline workflows and make data-driven decisions.
  • Vetted tools reduce vulnerabilities and ensure compliance with regulatory requirements.
  • Allowing employees to use authorised AI tools encourages skill development and positions the business as a leader in AI adoption.
  • A controlled approach ensures AI use aligns with business objectives, minimising fragmentation and inefficiencies.
  • It reduces the need for employees to feel that they must use AI tools in secret.

Shadow AI is a growing risk in workplaces that either explicitly restrict AI or fail to adopt it proactively. By implementing an AI policy, providing education and transparency, and aligning AI initiatives with strategic goals, businesses can minimise Shadow AI risks and build a transparent, secure, and strategically aligned AI environment.

Implementing AI Policy and Governance

Embracing AI is no longer a choice but a necessity for businesses that want to stay competitive. Rather than resisting this technological shift, companies should proactively harness the potential of AI to drive strategic advantages. Attempting to restrict its use will undoubtedly result in Shadow AI – the use of AI within businesses without formal approval or oversight – potentially leading to security and compliance risks.

When I started this blog, I didn’t want to write about the same things from the same perspective as everyone else. I tried to find new angles to discuss and contribute to the overall body of knowledge, but I have written very little over the last few years. After embracing AI about a year ago, I can focus on AI governance and bring it together with Information Security, Privacy, Risk Management, and Internal Audit.

After completing the ISACA Artificial Intelligence Fundamentals certificate and becoming “Certified in Emerging Technology (CET)”, I decided the next phase would include the new ISO 42001 Lead Auditor and Artificial Intelligence Governance Professional (AIGP) qualifications. I recently developed an AI policy and embarked upon a journey to deliver ISO 42001:2023.

Here are some important considerations:

  • Compliance with regulations – Ensure that AI applications adhere to local and international laws, including the General Data Protection Regulation (GDPR), which govern data protection and privacy. Failing to comply could result in severe financial sanctions and reputational damage. Robust data protection measures are essential.
  • Ethical AI use – Ethical guidelines are essential to prevent AI from perpetuating bias and discrimination. Businesses should implement working practices that ensure fair, transparent, and explainable AI decisions, allowing stakeholders to understand how outcomes are derived. AI has the potential to become “Computer says NO” on steroids. Transparency is essential.
  • Data Quality – The success of AI relies on the quality of the data used to train models – garbage in, garbage out. Accurate, relevant, and current data is essential to produce reliable output, and data cleaning and validation practices help maintain data integrity (a minimal validation sketch follows this list).
  • Reliability – AI models must perform reliably across various scenarios. Regularly evaluating and updating models ensures they remain accurate and effective in changing environments, preventing degradation over time.
  • Alignment with business objectives – AI initiatives should directly support strategic goals. Identifying specific business problems that AI can solve and setting measurable outcomes ensures that AI investments deliver tangible benefits.
  • Employee training and involvement – Providing employees with the necessary training to embrace AI tools helps further develop a collaborative environment and significantly increase productivity. Involving staff from different departments ensures that AI applications meet diverse needs and gain wider acceptance.
  • Change management – Implementing AI will require significant changes throughout the business. Effective change management includes clear communication about the benefits and addressing concerns early.
  • Continuous monitoring – Ongoing monitoring of AI systems is crucial to ensure they continue performing as expected. Establishing feedback mechanisms allows for timely adjustments based on user input and evolving requirements.
  • Governance framework – A structured governance framework is critical for overseeing AI initiatives. Clearly defined roles, responsibilities, and accountabilities help manage the lifecycle of AI projects.
  • Risk assessment – Conduct thorough AI Impact Assessments (AIIA) and address issues before they escalate to ensure the resilience of AI systems.
  • Social Responsibility – Firms should consider the broader impact of AI on society. Using AI for social good and avoiding harm contributes positively and aligns with corporate social responsibility goals. Regularly assessing the ethical implications of AI use is essential to align with societal values.
  • Vendor due diligence and technology evaluation – Evaluating the credibility and reliability of AI technology vendors ensures adherence to best practices and ethical standards. Understanding technical capabilities, limitations, and future-proofing allows businesses to adapt without significant rework of existing solutions.
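
As a minimal illustration of the data quality consideration above, the sketch below runs simple checks for missing values, duplicate rows, and stale records in a training dataset. The column names and the one-year staleness threshold are assumptions for the example.

```python
import pandas as pd

def basic_quality_checks(df: pd.DataFrame, date_column: str = "last_updated") -> dict:
    """Run simple 'garbage in, garbage out' checks on a training dataset."""
    stale_cutoff = pd.Timestamp.now() - pd.Timedelta(days=365)  # assumed freshness threshold
    return {
        "rows": len(df),
        "missing_values": int(df.isna().sum().sum()),
        "duplicate_rows": int(df.duplicated().sum()),
        "stale_rows": int((pd.to_datetime(df[date_column]) < stale_cutoff).sum()),
    }

# Illustrative data only.
sample = pd.DataFrame({
    "applicant_income": [32000, None, 45000, 45000],
    "last_updated": ["2025-01-10", "2019-06-01", "2025-02-02", "2025-02-02"],
})
print(basic_quality_checks(sample))
```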

Understanding these considerations will help businesses deal with the complexities of AI integration and ensure the ethical, effective, and sustainable use of AI technologies. Future articles in this series will include:

  • Data Privacy Implications
  • The EU AI Act and other legislation
  • Integration of ISO 42001:2023 into ISO 27001:2022
  • Conducting AI Impact Assessments

My professional goals include becoming a subject matter expert in AI governance to accompany my existing ISACA qualifications – Certified Information Security Manager (CISM), Certified Information Systems Auditor (CISA), and Certified in Risk and Information Systems Control (CRISC).