I am pleased to report that I completed the next stage of my journey to become an AI subject matter expert: I passed the ISO 42001 Lead Auditor exam. Although I currently qualify only as a PECB Certified ISO 42001 Provisional Auditor, I can upgrade to Auditor status later this year.
This journey has included attending seminars, reading news articles, holding conversations with other professionals, and generally trying to keep up and not get left behind in a rapidly evolving field. This article summarises key professional concerns about AI; managing these concerns forms a core part of delivering governance of artificial intelligence. The extent to which these risks already exist and unfold daily is open to debate and is not part of this article. I leave that with you to consider.
Misinformation and disinformation
AI can create and amplify false or misleading information at an unprecedented scale, threatening trust in media and democratic institutions.
- AI models can generate thousands of fake articles, social media posts, or reviews in seconds, tailored to spread specific narratives, making it easier for bad actors to manipulate public opinion.
- AI can create realistic videos or audio clips of individuals saying or doing things they never actually did, for purposes such as blackmail, propaganda, or discrediting public figures.
- AI-powered automated bots can hijack social media platforms, amplifying false narratives or silencing dissenting voices.
- As AI-generated content becomes more challenging to distinguish from genuine material, people may lose trust in legitimate sources of information, leading to societal instability.
- State-sponsored actors could leverage AI to influence elections, destabilise economies, or sow discord among populations.
Bias and discrimination
AI systems are only as unbiased as their training data. Without careful oversight, they can perpetuate or even exacerbate discrimination.
- AI learns from historical data, which often reflects societal inequalities. Recruitment algorithms, for example, trained on biased data might favour specific demographics over others.
- Without transparency in AI decision-making processes, it is challenging to identify and address discriminatory outcomes.
- AI tools and solutions developed by teams with limited diversity can lead to blind spots in understanding and addressing diverse needs.
- Companies deploying biased AI systems can face reputational damage, lawsuits, and regulatory scrutiny.
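The recruitment example above can be made concrete with a simple audit check. The sketch below (plain Python; all group names and figures are hypothetical) computes per-group selection rates from model outcomes and the disparate-impact ratio that the US "four-fifths rule" uses as a threshold for further review — a minimal illustration of the kind of quantitative test an AI governance audit might apply, not a complete fairness assessment.

```python
# Hypothetical sketch: checking a recruitment model's outcomes for
# disparate impact across demographic groups. Illustrative data only.

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs -> selection rate per group."""
    totals, selected = {}, {}
    for group, was_selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of lowest to highest selection rate; the 'four-fifths rule'
    flags values below 0.8 for review."""
    return min(rates.values()) / max(rates.values())

# Illustrative outcomes: group A selected 40/100 times, group B 20/100.
outcomes = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 20 + [("B", False)] * 80)
rates = selection_rates(outcomes)
print(rates)                          # {'A': 0.4, 'B': 0.2}
print(disparate_impact_ratio(rates))  # 0.5 — below 0.8, warrants review
```

A real audit would go further (statistical significance, intersectional groups, outcome quality, not just selection counts), but even this crude ratio shows how quickly biased training data surfaces in measurable outcomes.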
Job displacement and economic impact
AI is transforming the job market, raising concerns about unemployment and economic inequality.
- Routine manufacturing, logistics, customer service, and transportation jobs are highly susceptible to automation. Self-driving vehicles could replace millions of drivers, for example.
- Transitioning displaced workers into new roles requires significant training programs and education investment. The lag between technological advancement and workforce adaptation is an important concern.
- AI may disproportionately benefit those who own and develop the technology, widening the gap between low- and high-income groups.
- While AI boosts productivity, the economic benefits may not translate into job creation, potentially leaving millions without viable employment.
Privacy
AI systems thrive on data, but this dependency raises concerns about privacy violations, unethical data usage, and mass surveillance.
- Companies and governments could collect vast amounts of personal data to train AI models without explicit consent.
- AI-powered surveillance tools like facial recognition cameras can track movements and activities, often infringing on civil liberties.
- The centralisation of data for AI training can increase the risk of breaches, exposing sensitive information to hackers.
- Using AI to analyse and link disparate data sources can make it nearly impossible for individuals to remain anonymous.
Loss of control
As AI systems grow more sophisticated, there is increasing concern about their autonomy and the potential for catastrophic misuse.
- Advanced AI systems may act in ways their creators did not anticipate, potentially causing harm in critical areas such as healthcare or transportation.
- AI-driven weapons could operate without human intervention, raising ethical and strategic dilemmas, including the potential for accidental escalation of conflicts.
- If AI were ever to surpass human intelligence, it might prioritise its own objectives over the well-being of humanity, leading to existential threats.
- Many AI algorithms are complex and opaque, making it challenging to understand decision-making processes. This lack of transparency can lead to dangerous or harmful outcomes.
- Governments and organisations struggle to keep up with the pace of AI development, creating a gap in oversight that could allow harmful applications to flourish.
I am confident that, collectively and through international cooperation, we can address these risks proactively: through the continued establishment of clear and enforceable regulations, through ethical design and development, and through increased public awareness.

Information security, risk management, internal audit, and governance professional with over 25 years of post-graduate experience gained across a diverse range of private and public sector projects in banking, insurance, telecommunications, health services, charities and more, both in the UK and internationally.