In a previous article, I mentioned the need to conduct Artificial Intelligence Impact Assessments (AIIA). Businesses are accountable for their use of AI, even when it is developed by third parties. Ethical standards and legal requirements are evolving in this direction, and they include:
- Carrying out essential due diligence when purchasing new software.
- Maintaining an inventory of software and its use of AI.
- Understanding how the AI works and its impact on stakeholders.
- Taking responsibility for the outcomes.
These are core requirements for the implementation of ISO 42001. AI can impact individuals, groups of individuals, and society as a whole. The following explores the impact in more detail.
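To make the inventory requirement above concrete, here is a minimal sketch of what an AI software inventory entry might look like in code. The field names and the example product are hypothetical illustrations, not drawn from ISO 42001 or any standard; the risk tiers loosely echo the EU AI Act categories discussed later.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of one entry in an AI software inventory.
# Field names are illustrative only, not taken from any standard.
@dataclass
class AIInventoryEntry:
    name: str                          # software product name
    vendor: str                        # third-party supplier, if any
    uses_ai: bool                      # does the product embed AI?
    purpose: str                       # what the AI is used for
    stakeholders: list = field(default_factory=list)  # who is affected
    risk_category: str = "unassessed"  # e.g. loosely per EU AI Act tiers

inventory = [
    AIInventoryEntry(
        name="CV Screening Tool",        # hypothetical product
        vendor="ExampleVendor Ltd",      # hypothetical vendor
        uses_ai=True,
        purpose="ranking job applications",
        stakeholders=["applicants", "HR team"],
        risk_category="high",            # recruitment tools are high-risk under the EU AI Act
    ),
]

# A simple due-diligence check: flag high-risk AI entries for assessment.
needs_assessment = [e.name for e in inventory if e.uses_ai and e.risk_category == "high"]
print(needs_assessment)  # → ['CV Screening Tool']
```

Even a simple record like this supports the other requirements: it documents what the AI does, who it affects, and where an impact assessment is still owed.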
AI can disrupt personal lives in ways that raise ethical, psychological, and practical concerns. These issues often stem from the misuse of data and lack of transparency. One of my concerns is that with AI, “Computer says NO” is quickly becoming “Computer says NO on steroids”.
- Many systems collect and process personal data without user consent; even with widespread privacy legislation, individuals still lack control over their personal information, and AI only exacerbates this.
- Automation continues to replace manufacturing, retail, and transportation jobs, leaving many individuals without employment or requiring them to acquire new skills in a rapidly changing economy.
- Personal decisions, such as loan approvals or job applications, may be influenced by biased AI systems that unfairly favour specific demographics and perpetuate existing discrimination.
- Dependence on AI for everyday tasks, such as navigation, time management, drafting of documents and communication, may reduce critical thinking and problem-solving skills over time. Here is a quote from Star Trek: Insurrection – “We believe that when you create a machine to do the work of a man, you take something away from the man.”
Under GDPR, people have the right not to be subjected to decisions based solely on automated processing, the right to a meaningful explanation of any decisions made, and the right to contest the outcome. These rights extend to the use of artificial intelligence and its impact on individual rights. In practice, the response to a request for a meaningful explanation is often limited to “we put the information into the computer and it gives us the answer”. Emerging AI legislation, including the EU AI Act, strengthens these rights.
AI poses profound challenges to society due to its scale and potential misuse:
- AI-generated fake images, videos, and news erode trust in media, businesses, institutions, and elections.
- Governments and corporations increasingly use AI for mass surveillance, raising ethical issues around civil liberties and human rights, such as discriminatory misuse of facial recognition.
- Automation can disproportionately impact low-skilled workers, leading to unemployment and widening economic gaps as industries like retail and manufacturing transform rapidly.
- AI decisions in critical areas such as healthcare and law enforcement prompt questions about responsibility when errors or harm occur. The EU AI Act includes an unacceptable risk category for banned use and a high-risk category with increased safeguarding requirements.
Adopting AI in business brings ethical, compliance, and operational risks, and a failure to address these can lead to financial, reputational, or legal repercussions:
- AI systems can be vulnerable to hacking, adversarial attacks, and data breaches like any other software system. Manipulated AI outputs can compromise decision-making and business operations.
- AI-powered recruitment, lending, and advertising tools may perpetuate biases, exposing businesses to reputational damage and legal liabilities.
- Evolving laws, like the EU AI Act, require legal and technical expertise. Failure to comply risks fines and operational disruptions.
- Misuse or failures in AI can harm trust. Biased recommendations or faulty product suggestions can alienate customers and damage corporate brands.
- Heavy reliance on AI systems risks operational disruptions from bugs, data inaccuracies, or cyberattacks. Businesses must maintain contingency plans that cover operating without AI and other software.
- AI requires significant investment in infrastructure, training, and maintenance, often challenging businesses to demonstrate a return on investment.
- AI-generated content raises questions of ownership and copyright, creating potential disputes over AI-driven designs or innovations.
Adopting AI-based software requires a clear understanding of its impacts to ensure responsible use and to avoid the biases, privacy issues, and harmful inaccuracies that raise ethical and accountability concerns. The societal and economic effects, like job loss and trust erosion, highlight the need for proactive risk management. By considering these implications, businesses and policymakers can create systems that balance innovation with fairness and security.

Information security, risk management, internal audit, and governance professional with over 25 years of post-graduate experience gained across a diverse range of private and public sector projects in banking, insurance, telecommunications, health services, charities and more, both in the UK and internationally.