Rethinking the Ethics of AI in Publishing

In publishing, where authority and trust are paramount, ethics can be both a guiding light and a minefield. Artificial intelligence is reshaping how content is written and published, offering unprecedented efficiency but also raising ethical questions. This article explores how AI use in publishing can either support expert work or simulate it for influence or profit, often with damaging consequences. Two contrasting uses frame the discussion:

  • Use of AI to enhance the work of expert writers and publishers – AI can support skilled professionals by streamlining research, suggesting improvements, and accelerating drafting processes. This allows authors, editors, and publishers to produce higher-quality content more efficiently while retaining creative and authoritative control.
  • Use of AI to fake expertise in writing and publishing for profit or influence – AI tools can generate polished, authoritative-sounding text, enabling individuals with little subject knowledge, and no means to verify accuracy, to publish material such as sites whose primary purpose is generating advertising or referral revenue. Inaccurate or misleading information undermines trust, distorts public discourse, and floods the market with low-quality or deceptive content that projects authenticity.

Ethical AI use enhances productivity while keeping human expertise, critical judgement, and accountability central to the publishing process. The threshold for ethical AI use is clear to me: AI should assist experts, not create expertise. Authors should be subject matter experts, able to write the article independently, even if they choose to use AI to improve productivity, clarity, and efficiency.

Ethical AI in Support of Human Expertise

When used ethically, AI can enhance the capabilities of skilled professionals without compromising integrity or authority. The following examples show how AI can responsibly support subject matter experts in the publishing process:

  • AI-assisted outlining and structuring – Subject matter experts can use AI to organise ideas, generate summaries, or improve coherence, allowing them to focus on deep analysis and insights.
  • Improved productivity without sacrificing integrity – AI helps streamline tasks like grammar correction, rewording, and formatting, reducing the time spent on administrative aspects of writing.
  • Experts are responsible for factual accuracy – AI may provide suggestions, but final validation, critical thinking, and real-world expertise shape the published work.
  • Refinement without compromising meaning – AI tools can enhance readability, correct errors, and optimise for audience engagement while preserving the writer’s original message and intent.

Where Ethical Boundaries Are Crossed

Unfortunately, AI is often used not to enhance genuine expertise, but to simulate it. This approach introduces risk, erodes trust, and undermines professional standards. Misuse includes bypassing real subject knowledge, misleading audiences, or generating content purely for financial gain, regardless of accuracy or credibility. AI should never be used to publish work that the author couldn’t understand, write, or explain without it.

  • Mass-production of AI-generated content without subject knowledge – Using AI to generate numerous articles on specialized topics without subject-matter expertise leads to shallow and misleading content.
  • Plagiarism and misinformation risks – AI can fabricate facts, misinterpret sources, or produce content that closely resembles existing material, raising ethical and legal concerns.
  • Deception and false authority – Presenting AI-generated work as if written by an expert misleads readers and erodes trust in professional knowledge.
  • Revenue-driven content farming – Some use AI to create high volumes of low-quality content, designed solely to rank on search engines and generate advertising revenue, regardless of accuracy or reader value.
  • Automated publishing without human oversight – AI lacks ethical judgement, industry experience, and contextual awareness, making unchecked AI-generated content prone to serious errors and misleading claims.

Ethical AI use supports expertise, boosts efficiency, and preserves credibility. In contrast, misuse leads to misinformation, low-quality content, and eroded trust in professional knowledge. AI is a powerful assistant, but human expertise remains irreplaceable. As AI continues to evolve, so too must our standards for credibility, authorship, and trust. In a world where anyone can publish, the true measure of value lies not in how content is created, but in who stands behind it.

Concerns among professionals in the AI space

I am pleased to report that I have completed the next stage of my journey to becoming an AI subject matter expert: I passed the ISO 42001 Lead Auditor exam. Although I currently qualify only as a PECB Certified ISO 42001 Provisional Auditor, I can upgrade to Auditor later this year.

This journey has included attending seminars, reading news articles, talking with other professionals, and generally trying to stay informed and current in a rapidly evolving field. This article summarises those professional concerns; addressing them forms a core part of delivering governance of artificial intelligence. The extent to which these risks already exist and unfold daily is open to debate and beyond the scope of this article. I leave that with you to consider.

Misinformation and disinformation

AI can create and amplify false or misleading information at an unprecedented scale, threatening trust in media and democratic institutions.

  • AI models can generate thousands of fake articles, social media posts, or reviews in seconds, tailored to spread specific narratives, making it easier for bad actors to manipulate public opinion.
  • AI can create realistic videos or audio clips of individuals saying or doing things they never actually did for purposes such as blackmail, propaganda, or to discredit public figures.
  • AI-powered automated bots can hijack social media platforms, amplifying false narratives or silencing dissenting voices.
  • As AI-generated content becomes more challenging to distinguish from genuine material, people may lose trust in legitimate sources of information, leading to societal instability.
  • State-sponsored actors could leverage AI to influence elections, destabilise economies, or sow discord among populations.

Bias and discrimination

AI systems are only as unbiased as their training data. Without careful oversight, they can perpetuate or even exacerbate discrimination.

  • AI learns from historical data, which often reflects societal inequalities. Recruitment algorithms trained on biased data, for example, might favour specific demographics over others (see the audit sketch after this list).
  • Without transparency in AI decision-making processes, it is challenging to identify and address discriminatory outcomes.
  • AI tools and solutions developed by teams with limited diversity can lead to blind spots in understanding and addressing diverse needs.
  • Companies deploying biased AI systems can face reputational damage, lawsuits, and regulatory scrutiny.
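
To make the audit point concrete, here is a minimal sketch of one common check: comparing selection rates across demographic groups and applying the "four-fifths rule". The dataset, group labels, and 0.8 threshold are illustrative assumptions, not a prescribed method.

```python
# A minimal fairness-audit sketch, assuming a hypothetical recruitment dataset
# where each record pairs a demographic group label with whether the AI system
# shortlisted the candidate. Illustration only, not a prescribed methodology.
from collections import defaultdict

def selection_rates(records):
    """Return the shortlisting rate for each demographic group."""
    totals, shortlisted = defaultdict(int), defaultdict(int)
    for group, was_shortlisted in records:
        totals[group] += 1
        if was_shortlisted:
            shortlisted[group] += 1
    return {group: shortlisted[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest selection rate.

    A value below 0.8 (the 'four-fifths rule') is a common heuristic that the
    system should be reviewed for adverse impact.
    """
    return min(rates.values()) / max(rates.values())

# Illustrative records: (demographic_group, shortlisted_by_model)
records = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]

rates = selection_rates(records)
print(rates)                          # group A ~0.67, group B ~0.33
print(disparate_impact_ratio(rates))  # 0.5 -> below 0.8, flag for review
```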

Job displacement and economic impact

AI is transforming the job market, raising concerns about unemployment and economic inequality.

  • Routine manufacturing, logistics, customer service, and transportation jobs are highly susceptible to automation. Self-driving vehicles could replace millions of drivers, for example.
  • Transitioning displaced workers into new roles requires significant investment in training and education. The lag between technological advancement and workforce adaptation is an important concern.
  • AI may disproportionately benefit those who own and develop the technology, widening the gap between low- and high-income groups.
  • While AI boosts productivity, the economic benefits may not translate into job creation, potentially leaving millions without viable employment.

Privacy

AI systems thrive on data, but this dependency raises concerns about privacy violations, unethical data usage, and mass surveillance.

  • Companies and governments could collect vast amounts of personal data to train AI models without explicit consent.
  • AI-powered surveillance tools like facial recognition cameras can track movements and activities, often infringing on civil liberties.
  • The centralisation of data for AI training can increase the risk of breaches, exposing sensitive information to hackers.
  • Using AI to analyse and link disparate data sources can make it nearly impossible for individuals to remain anonymous.
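
The last point is easy to underestimate, so here is a minimal sketch, built entirely on hypothetical data and field names, of how two individually "anonymous" datasets can re-identify a person once they are linked on shared quasi-identifiers such as postcode and birth year.

```python
# A minimal sketch of re-identification by record linkage. All datasets,
# fields, and records are hypothetical.

# "Anonymised" health dataset: no names, but quasi-identifiers remain.
health_records = [
    {"postcode": "AB1 2CD", "birth_year": 1984, "diagnosis": "asthma"},
    {"postcode": "EF3 4GH", "birth_year": 1990, "diagnosis": "diabetes"},
]

# Public dataset (e.g. a scraped profile) sharing the same quasi-identifiers.
public_profiles = [
    {"name": "J. Smith", "postcode": "AB1 2CD", "birth_year": 1984},
]

def link(records, profiles, keys=("postcode", "birth_year")):
    """Join the two datasets on quasi-identifiers present in both."""
    matches = []
    for record in records:
        for profile in profiles:
            if all(record[key] == profile[key] for key in keys):
                matches.append({**profile, **record})
    return matches

# The linked record now ties a named person to a sensitive diagnosis.
print(link(health_records, public_profiles))
# [{'name': 'J. Smith', 'postcode': 'AB1 2CD', 'birth_year': 1984, 'diagnosis': 'asthma'}]
```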

Loss of control

As AI systems grow more sophisticated, there is increasing concern about their autonomy and the potential for catastrophic misuse.

  • Advanced AI systems may act in ways their creators did not anticipate, potentially causing harm in critical areas such as healthcare or transportation.
  • AI-driven weapons could operate without human intervention, raising ethical and strategic dilemmas, including the potential for accidental escalation of conflicts.
  • If AI were to surpass human intelligence, it might prioritise its own objectives over the well-being of humanity, posing existential threats.
  • Many AI algorithms are complex and opaque, making it challenging to understand decision-making processes. This lack of transparency can lead to dangerous or harmful outcomes.
  • Governments and organisations struggle to keep up with the pace of AI development, creating a gap in oversight that could allow harmful applications to flourish.

With international cooperation, proactive regulation, ethical development, and public awareness, we can collectively address these risks and shape a safer, more trustworthy AI future.

Ethical considerations of AI

Ethical development, deployment, and use of artificial intelligence are essential to ensure responsible innovation, fairness, trustworthiness, and societal benefit.

  • When developing AI systems, it is crucial to prioritise human well-being, autonomy, and dignity.
    • AI should enhance user capabilities and decision-making processes.
    • Design systems to accommodate people of all abilities and demographics.
    • Provide clear, understandable explanations of AI functionality and outcomes.
    • Incorporate mechanisms to prevent harm, misuse, or unintended negative consequences.
    • Regularly incorporate user feedback to improve AI systems and address potential concerns.
  • Transparency builds trust and understanding between users and AI systems, making it essential to communicate AI processes.
    • Users should always be aware of when they interact with AI technologies.
    • Provide detailed yet understandable explanations of how the AI operates and makes decisions.
    • Share potential risks, limitations, and intended uses of AI systems openly with stakeholders.
    • Be transparent about how AI models collect, use, and safeguard data.
    • Maintain an open dialogue with users, researchers, and regulators to ensure ongoing alignment with ethical standards.
  • Develop and maintain AI systems to promote equitable outcomes and avoid discrimination.
    • Conduct regular audits to identify and mitigate biases in data and algorithms.
    • Use diverse datasets to prevent systemic inequalities from being embedded into AI systems.
    • Test and validate systems to guarantee fair treatment for all users.
    • Build AI solutions that actively address and reduce societal inequities.
    • Ensure compliance with laws and ethical norms to safeguard fairness and equality.
  • Protecting user data and respecting privacy rights is critical when designing and implementing AI systems.
    • Only collect the data necessary for the intended purpose.
    • Ensure sensitive data is anonymised or pseudonymised to protect user identities (a minimal sketch follows this list).
    • Employ appropriate security measures to protect data from breaches or misuse.
    • Obtain explicit, informed consent for data collection and usage.
    • Align all practices with relevant privacy laws and regulations such as GDPR.
  • Accountability mechanisms ensure the responsible use of AI and the ability to address ethical challenges effectively.
    • Establish specialised teams or committees to oversee ethical compliance.
    • Conduct periodic reviews to verify adherence to ethical policies.
    • Define transparent processes to identify, address, and resolve issues related to AI systems.
    • Provide ongoing education for teams to remain informed on best practices and emerging ethical challenges.
    • Maintain accessible avenues for reporting concerns or suggesting improvements.
  • As technology and societal expectations evolve, so should the ethical frameworks surrounding AI.
    • Regularly review and update policies to address new challenges and opportunities in AI ethics.
    • Partner with global AI ethics communities to exchange insights and best practices.
    • Stay informed of advancements and risks to refine ethical approaches proactively.
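
As referenced in the privacy bullet above, the following is a minimal sketch of data minimisation and pseudonymisation at the point of collection. The field names, salt, and record are hypothetical assumptions; real systems should follow their own data-protection policies and applicable law such as GDPR.

```python
# A minimal sketch of data minimisation plus pseudonymisation of a direct
# identifier. Hypothetical fields and values; not a complete anonymisation scheme.
import hashlib

ALLOWED_FIELDS = {"user_id", "age_band", "country"}  # only what the stated purpose needs
SALT = b"replace-with-a-secret-salt"  # assumption: loaded from secure configuration in practice

def pseudonymise(value: str) -> str:
    """Replace a direct identifier with a salted, one-way hash."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:16]

def minimise(record: dict) -> dict:
    """Drop fields that are not needed and pseudonymise the identifier."""
    kept = {key: value for key, value in record.items() if key in ALLOWED_FIELDS}
    kept["user_id"] = pseudonymise(kept["user_id"])
    return kept

raw = {"user_id": "alice@example.com", "age_band": "30-39",
       "country": "UK", "full_address": "1 High Street", "notes": "..."}
print(minimise(raw))  # the address and free-text notes never leave the collection step
```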

I recently looked at the Certified Ethical Emerging Technologist (CEET) certification from CertNexus. The certification marketplace is expanding as more professional bodies offer qualifications in AI; CertNexus also offers the Certified AI Practitioner (CAIP) certification.

I chose to focus on the Artificial Intelligence Governance Professional (AIGP) from the International Association of Privacy Professionals (IAPP) and both Certified ISO/IEC 42001 Lead Auditor and Certified ISO/IEC 42001 Lead Implementer from the Professional Evaluation and Certification Board (PECB).

In a rapidly evolving field, embedding ethics into AI development is not a constraint; it is a critical enabler of long-term trust and value.

Unacceptable use of AI

The European Union’s Artificial Intelligence Act (EU AI Act) prohibits certain AI practices that pose unacceptable risks. The law considers these practices to significantly undermine fundamental rights, distort human behaviour, or cause harm to individuals or society. The following examples align with the EU AI Act’s definition of unacceptable risk practices, which are strictly prohibited within the EU:

  • Deploying subliminal, manipulative, or deceptive techniques that impair informed decision-making and cause significant harm.
    • AI-driven adverts with subliminal cues that exploit consumers’ unconscious desires, leading to overspending or unhealthy consumption habits.
    • Undermining informed decision-making with virtual assistants that guide users toward specific political agendas or products.
    • Use of AI in computer games to manipulate player behaviours into making excessive in-game purchases.
  • Exploiting vulnerabilities related to age, disability, or socioeconomic circumstances to distort behaviour in a way that causes or is likely to cause significant harm.
    • Use of AI to exploit low-income individuals by encouraging them to take out high-interest loans.
    • Using educational software to manipulate children’s choices or limit their learning potential based on stereotypes or biases.
    • Targeting elderly individuals with deceptive offers for unnecessary products or services, capitalising on cognitive impairments or isolation.
  • Using biometric systems to infer sensitive attributes such as race, political opinions, trade union membership, religious or philosophical beliefs, or sexual orientation.
    • AI systems profiling individuals based on facial features to infer their religious beliefs, leading to discriminatory treatment in public services or employment.
    • Systems that categorise by sexual orientation or political views, resulting in exclusion or targeting in advertising or societal participation.
  • Evaluating or classifying individuals or groups based on social behaviour or personal traits in ways that result in detrimental or unfavourable treatment unrelated to the original purpose of data collection:
    • Scoring individuals based on social media activity to determine access to housing, loans, or educational opportunities.
    • Monitoring employees’ behaviour and punishing them for perceived misalignment with organisational culture.
    • Allocating public services based on AI-generated social scores that disadvantage vulnerable groups.
  • Assessing or predicting an individual’s likelihood of committing criminal offences solely based on profiling or personality traits, except when augmenting human assessments grounded in objective and verifiable facts directly linked to criminal activity:
    • Identifying potential offenders based on socioeconomic background, place of residence, or prior associations.
    • Predictive policing models disproportionately target ethnic minorities or marginalised communities, reinforcing existing biases.
    • Systems that flag individuals as risks based on psychological assessments rather than direct criminal behaviour.
  • Compiling facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
    • AI systems that harvest facial images from social media platforms without consent to create mass surveillance databases.
    • Using AI to collect and store facial images via in-store cameras without proper notification or consent.
  • The EU AI Act prohibits the use of real-time remote biometric identification in publicly accessible spaces for law enforcement purposes in situations such as:
    • Scanning crowds at peaceful protests to identify participants for potential targeting.
    • Using remote biometric identification in shopping malls to monitor and track individuals in real time without evidence of criminal activity.
    • Deploying real-time facial recognition for general surveillance rather than specific investigations at public events.
  • However, exceptions to this prohibition include:
    • Searching for missing persons, abduction victims, or individuals subjected to human trafficking or sexual exploitation.
    • Preventing a substantial and imminent threat to life, such as an act of terrorism.
    • Identifying suspects in serious crimes, including murder, rape, armed robbery, drug and weapons trafficking, and organised crime.

The EU AI Act places compliance obligations on businesses that use AI systems, even when those systems are licensed from third-party providers. Businesses remain responsible for ensuring that the AI complies with the law when offered to EU customers. Deploying prohibited AI practices within the EU is forbidden even if the system is developed or procured outside the EU.