Avoiding the risks of Shadow AI

As AI becomes integral to business operations, organisations must decide whether to embrace it or impose restrictions. While concerns about data security, compliance, and ethical implications often lead to restrictive AI policies, these well-meaning efforts can inadvertently contribute to the rise of Shadow AI. As an Information Security professional with ISACA qualifications, my AI governance journey is about more than learning about AI; it is about integrating new knowledge with established security, risk, and audit practice.

Shadow AI refers to the use of unapproved AI tools and applications within an organisation. Employees often turn to these tools when official policies restrict AI use, perceiving them as necessary for improving efficiency or achieving work goals. While these tools may offer short-term productivity gains, their unauthorised nature introduces a range of risks that can jeopardise security, compliance, and operational integrity. Risks associated with Shadow AI include:

  • Data security vulnerabilities
    • Shadow AI tools often lack the rigorous security protocols organisations require, creating vulnerabilities that cybercriminals can exploit.
    • Unauthorised tools may store sensitive information in unprotected environments, exposing businesses to data leaks and intellectual property theft.
    • These tools may store data in jurisdictions that do not align with regional data privacy laws, such as GDPR, leading to potential legal liabilities.
  • Compliance and legal risks
    • Shadow AI tools may not comply with industry-specific regulations, increasing the risk of fines and legal consequences.
    • Employees might inadvertently share confidential or sensitive information with third-party AI tools, violating privacy policies and agreements.
  • Operational inefficiencies
    • Shadow AI creates fragmented workflows, as different teams may use disparate tools for the same tasks, leading to inefficiencies and errors.
    • Unapproved tools rarely align with existing systems, causing data silos and disrupting business processes.
    • If Shadow AI tools are detected and abruptly banned, workflows that have come to rely on them can be severely disrupted.
  • Data integrity challenges
    • Unauthorised tools may produce inconsistent or biased results, compromising decision-making processes.
    • Decentralised data processing makes it harder to maintain data integrity and consistency across the organisation.
  • Ethical and bias concerns
    • Without proper oversight, Shadow AI tools may perpetuate biases inherent in their training data, leading to unfair or unethical outcomes.
    • Organisations may lack visibility into how Shadow AI tools make decisions, raising ethical questions and damaging trust.
  • Strategic and cultural risks
    • Shadow AI initiatives may not align with the organisation’s broader strategic objectives, leading to wasted resources.
    • The proliferation of Shadow AI can foster a culture where employees bypass policy, eroding trust and accountability.
    • Shadow AI undermines efforts to create a unified, strategic approach to AI adoption, with employees pulling in different directions.

Preventing Shadow AI requires proactive measures that balance employee needs with organisational goals. Here are some thoughts on how to mitigate the risks:

  • AI policy implementation
    • Avoid outright bans on AI tools.
    • Develop a policy that allows the controlled use of vetted and approved AI applications.
    • Set clear guidelines on how to use AI tools, ensuring compliance with security, privacy, and ethical standards.
  • Employee education
    • Conduct training sessions to inform employees about the risks of Shadow AI, including its impact on security, compliance, and operations.
    • Provide education on approved AI tools and their benefits, ensuring employees understand the value of adhering to company policies.
  • Monitoring and auditing
    • Implement monitoring systems to detect unauthorised AI tools. Regular audits can identify potential Shadow AI activity before it escalates (see the log-scanning sketch after this list).
    • Encourage transparency by providing a straightforward process for employees to request approval for new AI tools.
  • Encourage open communication
    • Create an environment where employees feel comfortable discussing their AI needs and challenges.
    • Collaborate with teams to identify tools that improve workflows while meeting organisational standards.
  • Alignment with business objectives
    • Integrate AI into the broader business strategy, ensuring its adoption supports long-term goals.
    • Develop a unified approach to AI implementation, balancing innovation with security and compliance.
  • Incentivise compliance
    • Recognise and reward teams for adhering to approved AI policies.
    • Make it easy for employees to access and use authorised AI tools, minimising the temptation to turn to Shadow AI alternatives.
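To make the monitoring point concrete, here is a minimal sketch that scans a proxy or DNS log export for traffic to known AI service domains. The log layout (a CSV with timestamp, user, and domain columns), the file name, and the domain watchlist are all assumptions for illustration; adapt them to your own environment and exclude any tools your organisation has formally approved.

```python
"""Minimal sketch: flag traffic to AI service domains in a proxy log export.

Assumptions (adjust for your environment):
- proxy_log.csv has columns: timestamp, user, domain
- the watchlist below is illustrative, not exhaustive
"""
import csv
from collections import Counter

# Illustrative watchlist; maintain your own and remove approved tools.
AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def scan_proxy_log(path: str) -> Counter:
    """Count requests per (user, domain) pair for watched AI domains."""
    hits: Counter = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            if domain in AI_DOMAINS:
                hits[(row["user"], domain)] += 1
    return hits

if __name__ == "__main__":
    for (user, domain), count in scan_proxy_log("proxy_log.csv").most_common():
        print(f"{user} -> {domain}: {count} requests")
```

The output is best fed into the approval process rather than straight into disciplinary action; the aim is to surface unmet demand for AI tools, not to punish it.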

Businesses should adopt a balanced approach that embraces AI to improve productivity while managing the risks. Key benefits of this approach include:

  • Employees can openly leverage approved AI tools to streamline workflows and make data-driven decisions.
  • Vetted tools reduce vulnerabilities and ensure compliance with regulatory requirements.
  • Allowing employees to use authorised AI tools encourages skill development and positions the business as a leader in AI adoption.
  • A controlled approach ensures AI use aligns with business objectives, minimising fragmentation and inefficiencies.
  • Employees no longer feel compelled to use AI tools in secret.

Shadow AI is a growing risk in workplaces that either explicitly restrict AI or fail to adopt it proactively. By implementing an AI policy, providing education and transparency, and aligning AI initiatives with strategic goals, businesses can minimise Shadow AI risks and build a transparent, secure, and strategically aligned AI environment.

Implementing AI Policy and Governance

Embracing AI is no longer a choice but a necessity for businesses that want to stay competitive. Rather than resisting this technological shift, companies should proactively harness the potential of AI to drive strategic advantage. Attempting to restrict its use will almost inevitably result in Shadow AI – the use of AI within businesses without formal approval or oversight – potentially leading to security and compliance risks.

When I started this blog, I didn’t want to write about the same things from the same perspective as everyone else. I tried to find new angles to discuss and contribute to the overall body of knowledge, but I have written very little over the last few years. After embracing AI about a year ago, I can now focus on AI governance and bring it together with Information Security, Privacy, Risk Management, and Internal Audit.

After completing the ISACA Artificial Intelligence Fundamentals certificate and becoming “Certified in Emerging Technology (CET)”, I decided the next phase would include the new ISO 42001 Lead Auditor and Artificial Intelligence Governance Professional (AIGP) qualifications. I recently developed an AI policy and embarked upon a journey to deliver ISO 42001:2023.

Here are some important considerations:

  • Compliance with regulations – Ensure that AI applications adhere to local and international data protection and privacy laws, including the General Data Protection Regulation (GDPR). Failing to comply could result in severe financial sanctions and reputational damage. Robust data protection measures are essential.
  • Ethical AI use – Ethical guidelines are essential to prevent AI from perpetuating bias and discrimination. Businesses should implement working practices that ensure fair, transparent, and explainable AI decisions, allowing stakeholders to understand how outcomes are derived. AI has the potential to become “Computer says NO” on steroids. Transparency is essential.
  • Data Quality – The success of AI relies on the quality of data used for training models – garbage in, garbage out. Accurate, relevant, and current data is essential to produce reliable output. Data cleaning and validation practices help maintain data integrity (a minimal validation sketch follows this list).
  • Reliability – AI models must perform reliably across various scenarios. Regularly evaluating and updating models ensures they remain accurate and effective in changing environments, preventing degradation over time.
  • Alignment with business objectives – AI initiatives should directly support strategic goals. Identifying specific business problems that AI can solve and setting measurable outcomes ensures that AI investments deliver tangible benefits.
  • Employee training and involvement – Providing employees with the necessary training to embrace AI tools helps further develop a collaborative environment and significantly increase productivity. Involving staff from different departments ensures that AI applications meet diverse needs and gain wider acceptance.
  • Change management – Implementing AI will require significant changes throughout the business. Effective change management includes clear communication about the benefits and addressing employee concerns.
  • Continuous monitoring – Ongoing monitoring of AI systems is crucial to ensure they continue performing as expected. Establishing feedback mechanisms allows for timely adjustments based on user input and evolving requirements.
  • Governance framework – A structured governance framework is critical for overseeing AI initiatives. Clearly defined roles, responsibilities, and accountabilities help manage the lifecycle of AI projects.
  • Risk assessment – Conduct thorough AI Impact Assessments (AIIA) and address issues before they escalate to ensure the resilience of AI systems.
  • Social Responsibility – Firms should consider the broader impact of AI on society. Using AI for social good and avoiding harm contributes positively and aligns with corporate social responsibility goals. Regularly assessing the ethical implications of AI use is essential to align with societal values.
  • Vendor due diligence and technology evaluation – Evaluating the credibility and reliability of AI technology vendors ensures adherence to best practices and ethical standards. Understanding technical capabilities, limitations, and future-proofing allows businesses to adapt without significant rework of existing solutions.
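To illustrate the data quality point above, the sketch below runs a few basic completeness, validity, and currency checks before records are used for training. The schema (age, email, updated_at) and the thresholds are invented for illustration only.

```python
"""Minimal sketch: basic data quality checks ahead of model training.

The schema (age, email, updated_at) and thresholds are illustrative only.
"""
from datetime import datetime, timedelta

def validate_record(record: dict) -> list[str]:
    """Return a list of data quality issues found in one record."""
    issues = []
    # Completeness: required fields must be present and non-empty
    for field in ("age", "email", "updated_at"):
        if not record.get(field):
            issues.append(f"missing field: {field}")
    # Validity: age must fall within a plausible range
    age = record.get("age")
    if age is not None and not (0 <= age <= 120):
        issues.append(f"implausible age: {age}")
    # Currency: stale records degrade model relevance over time
    updated = record.get("updated_at")
    if updated and datetime.now() - updated > timedelta(days=365):
        issues.append("record older than 12 months")
    return issues

# Example: one clean record and one problematic record
records = [
    {"age": 42, "email": "a@example.com", "updated_at": datetime.now()},
    {"age": 230, "email": "", "updated_at": datetime(2020, 1, 1)},
]
for i, rec in enumerate(records):
    print(i, validate_record(rec) or "OK")
```

A real pipeline would use a dedicated validation framework, but the principle is the same: reject or quarantine bad records before they reach the model.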

Understanding these considerations will help businesses deal with the complexities of AI integration and ensure the ethical, effective, and sustainable use of AI technologies. Future articles in this series will include:

  • Data Privacy Implications
  • The EU AI Act and other legislation
  • Integration of ISO 42001:2023 into ISO 27001:2022
  • Conducting AI Impact Assessments

My professional goals include becoming a subject matter expert in AI governance to accompany my existing ISACA qualifications – Certified Information Security Manager (CISM), Certified Information Systems Auditor (CISA), and Certified in Risk and Information Systems Control (CRISC).

Time for some Digital Housekeeping

As the internet evolved, so did the growing need for user accounts to access online information and services. Consequently, we have hundreds of accounts, each requiring separate login credentials. This proliferation of digital accounts has led to significant issues and risks:

  • Unnecessary requirement for login credentials – not every site needs login credentials, yet we are often still required to create an account. I thought about writing this article many times, and then recently I needed to sign up to three separate sites to benefit from the service offered by a single business. In this case, one user account should have been sufficient. Many sites and services shouldn’t need an account at all.
  • Remembering numerous passwords – this is inconvenient and introduces security risks. To cope, people often resort to simple passwords, reuse the same password across accounts, or create many slight variations on a theme. If a hacker compromises one of your passwords, they could easily compromise many.
  • Password vaults – there are many different password management solutions, but they are not foolproof and require trust in, and dependence on, a third-party service. They do, however, make long, complex passwords viable.
  • Risk exposure – the more accounts you have, the more personal information is stored online, which increases the risk of sensitive data exposure in a breach. Also, more accounts mean more emails and communications from online services, and more time and effort are required to distinguish between legitimate messages and phishing attempts from scammers impersonating services to steal login credentials.
  • Privacy – each account will collect and store personal information. The more accounts you have, the more places your data is stored, increasing the risk that site owners will misuse or sell your data or track online behaviour, preferences, and interactions, leading to privacy concerns.
  • Attack surface – each account is a potential entry point for cybercriminals. The more accounts you have, the larger the attack surface.
  • Time-consuming – managing so many accounts can become time-consuming and distract from more productive activities.
  • Blocking cut and paste – storing passwords in a vault makes using long, complex passwords convenient, but that is undermined when site owners block cut and paste and require users to type passwords manually. A more recent trend is to measure the time it takes to type a password and reject login attempts that are too quick, which blocks pasted passwords and passwords automatically filled by browser-based password vaults. It is well-intentioned but risks pushing users back towards simple passwords.

Practising good cyber hygiene is essential:

  • Use a password vault so you don’t need to remember every password – this makes using strong, unique passwords for each account easy. Be careful which solution you choose and select a reputable vendor.
  • Establish an inventory of sites where you have online accounts – using a password vault makes this much more manageable.
  • Delete accounts that you no longer need. You will still have an account even if you signed up for an online service using Google, Apple, Microsoft, Facebook, or LinkedIn credentials. In addition to deleting the account, it is also necessary to revoke access to the credentials – i.e., remove the service from the list of third-party connections in Google or the relevant login provider.
  • Don’t repeat passwords – use a different password for each account (see the breach-check sketch after this list).
  • Use Multi-Factor Authentication (MFA) to add security to your online accounts. The long-term effectiveness of MFA is the subject of much debate, given rapid technological change and the adaptability of cybercrime, but using MFA is still better than not using it.
  • Don’t store credit card details on the sites unless absolutely necessary.
  • Avoid using immutable facts for authentication purposes. For example, your mother’s maiden name or the name of your first school will never change. Immutable facts make poor security answers, yet websites and service providers still use them.
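One practical way to act on the advice above, particularly on password reuse, is to check candidate passwords against known breach data. The sketch below queries the Have I Been Pwned "Pwned Passwords" range API, which uses a k-anonymity model: only the first five characters of the password’s SHA-1 hash leave your machine, never the password itself. It assumes network access to the public endpoint; many password vaults offer the same check built in.

```python
"""Minimal sketch: check a password against the Have I Been Pwned
Pwned Passwords range API (k-anonymity: only a 5-character hash
prefix is sent, never the password itself)."""
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times the password appears in known breaches."""
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode()
    # Each response line is "HASH_SUFFIX:COUNT"
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # Deliberately weak example password; never hard-code real ones.
    count = pwned_count("Password123")
    print("breached" if count else "not found", count)
```

A non-zero count means the password has appeared in a breach and should never be used again, on any account.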

Ransom payments are an awful idea

In a nutshell, ransomware is malicious software designed to encrypt data. Threat actors then demand a ransom in exchange for the decryption keys and the deletion of stolen data. In practice, paying a ransom to decrypt data or to prevent the release of sensitive information to the public is highly problematic:

  • No guarantee – Paying the ransom does not guarantee that the attackers will provide the decryption key or that they will not release the data. Once you make the payment, you don’t have any control over the attacker’s actions. There is no enforceable contract in place. Someone has committed a serious crime, yet they expect you to trust them. Hope is not a viable strategy.
  • Encourages future attacks – Paying the ransom emboldens cybercriminals by giving them a highly lucrative incentive to continue their malicious activities. It also signals to other potential attackers that ransomware is a profitable business model. Attackers will add details of businesses willing to pay to a list and sell it to other cybercriminals.
  • Deprived of vital resources to improve security posture – Paying the ransom does not address the underlying security vulnerabilities that enabled the breach. Worse, it deprives businesses of funding to address those vulnerabilities, leaving them susceptible to further attacks.
  • Funds illegal activities – The funds obtained through ransom payments can finance further criminal activities, including additional cyberattacks, organized crime, and terrorism.
  • Legal and regulatory implications – Knowingly paying the ransom to cybercriminals in countries subject to government financial sanctions is illegal. Many countries have regulations prohibiting financial transactions with individuals and businesses in sanctioned countries, and sending money violates the sanctions. Paying a ransom is not an exception to this rule.
  • Payment can lead to a subscription model – Ransoms can be very high, and there is no guarantee that paying once will prevent future demands. Cybercriminals can easily make repeated financial demands to prevent sensitive data from being released, and keep demanding more.

If an attack occurs, work with law enforcement, information security professionals, and insurance providers to respond to the incident. There may be a temptation to fear authorities or regulators and deal with the cybercriminals rather than face the consequences of the attack. In practice, dealing openly and honestly with authorities and regulators is the more appropriate and viable course.