Stay safe and avoid Black Friday scams

Black Friday is approaching again, and while it promises incredible deals, it’s also a time to exercise caution. Cybercriminals see this as an opportunity to prey on unsuspecting shoppers who may let their guard down in pursuit of huge discounts.

  • Stick to trusted retailers – it can be tempting to explore unfamiliar websites offering huge discounts, but this is where the risk of scams is highest.
    • Stick with the businesses you know and trust, especially those you have successfully shopped with before.
    • If you are curious about a new retailer, search for reviews and verify their legitimacy before purchasing.
  • Avoid clicking links in emails – phishing scams are rampant during shopping seasons, with fraudulent emails disguised as offers from popular brands.
    • Go directly to the retailer’s official website through your browser.
    • Scammers often use addresses that look similar to legitimate companies but include subtle differences.
  • Beware of unnecessary software and apps – installing unfamiliar software or apps to access discounts is a significant red flag.
    • Avoid downloading new apps unless they are from familiar and trusted retailers and official app stores.
    • Avoid apps that request excessive access to your device or personal data.
  • Watch out for hidden memberships – special deals may sometimes come with strings attached, such as hidden memberships that require regular full-price purchases.
    • Before completing a transaction, ensure you’re not unwittingly subscribing to a recurring service.
    • Avoid deals that feel overly complicated.
    • Genuine bargains don’t require convoluted commitments.
    • Avoid paying for access to discounts.
  • Use secure payment methods – protect your financial information by choosing safer payment options when shopping online.
    • Use credit cards or payment services such as PayPal or Apple Pay, which often provide buyer protection in case of fraud.
    • Avoid direct bank transfers.
    • Avoid payment methods that don’t offer recourse if something goes wrong.
  • Look for HTTPS and Security Indicators – before entering any personal or payment information online, ensure the website is secure.
    • A secure website address will have “https://” at the beginning of the URL, along with a padlock icon in the address bar.
    • Be cautious and avoid unsecured websites.
    • Remember that HTTPS only confirms the connection is encrypted; scam sites can also use HTTPS, so treat it as a necessary check rather than proof of legitimacy.
  • Monitor your bank statements – fraudulent transactions can go unnoticed if you don’t keep an eye on your bank accounts.
    • Check your bank statements regularly to spot any unauthorised transactions.
    • Report suspicious activity immediately to your bank or card provider.
  • Avoid public Wi-Fi for online shopping – shopping on public Wi-Fi networks can leave you vulnerable to hackers.
    • Make purchases using private, password-protected Wi-Fi connections.
    • Virtual Private Networks (VPNs) add an extra layer of security, making your online activity harder to intercept.
  • Think before you buy – impulse purchases often lead to regret, especially for items you wouldn’t normally consider buying.
    • Be realistic about the product’s value.
    • Pause before purchasing. If something seems worthless or unnecessary at the recommended retail price, it’s likely not worth buying at a 90% discount either.
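As a technical aside, the HTTPS check described above can also be sketched in code. The following is a minimal illustration using Python’s standard library; `looks_secure` is a hypothetical helper name, and this is no substitute for heeding your browser’s own warnings.

```python
# Minimal sketch: check that a shop URL uses HTTPS and that the server
# presents a certificate our trust store actually validates.
import socket
import ssl
from urllib.parse import urlparse


def looks_secure(url: str) -> bool:
    """Return True if the URL uses HTTPS and the certificate validates."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        return False  # plain HTTP (or malformed URL): no encryption at all
    context = ssl.create_default_context()  # verifies hostname and CA chain
    try:
        with socket.create_connection((parsed.hostname, 443), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=parsed.hostname):
                return True  # TLS handshake succeeded with a valid certificate
    except (ssl.SSLError, OSError):
        return False  # expired/mismatched certificate, or connection failure


print(looks_secure("http://example.com"))  # False: not HTTPS
```

Note that, as above, a passing check only means the connection is encrypted; it says nothing about whether the retailer behind it is trustworthy.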

Although this article is about Black Friday, adopting these practices all year round is wise to ensure safe and secure online shopping. Generally speaking, be wary of any purchase process that departs from the norm; being asked to buy or pay in an unusual way should be treated as a huge red flag.

Concerns among professionals in the AI space

I am pleased to report that I completed the next stage of my journey to become an AI subject matter expert: I passed the ISO 42001 Lead Auditor exam. Although I currently qualify only as a PECB Certified ISO 42001 Provisional Auditor, I can upgrade from Provisional Auditor to Auditor later this year.

This journey has included attending seminars, reading news articles, talking with other professionals, and generally trying to keep up and not get left behind in a rapidly evolving field. This article summarises the concerns raised by professionals in the AI space; understanding them forms a core part of delivering governance of artificial intelligence. The extent to which these risks and concerns already exist and unfold daily is open to debate and beyond the scope of this article. I leave that with you to consider.

Misinformation and disinformation

AI can create and amplify false or misleading information at an unprecedented scale, threatening trust in media and democratic institutions.

  • AI models can generate thousands of fake articles, social media posts, or reviews in seconds, tailored to spread specific narratives, making it easier for bad actors to manipulate public opinion.
  • AI can create realistic videos or audio clips of individuals saying or doing things they never actually did for purposes such as blackmail, propaganda, or to discredit public figures.
  • AI-powered automated bots can hijack social media platforms, amplifying false narratives or silencing dissenting voices.
  • As AI-generated content becomes more challenging to distinguish from genuine material, people may lose trust in legitimate sources of information, leading to societal instability.
  • State-sponsored actors could leverage AI to influence elections, destabilise economies, or sow discord among populations.

Bias and discrimination

AI systems are only as unbiased as their training data. Without careful oversight, they can perpetuate or even exacerbate discrimination.

  • AI learns from historical data, which often reflects societal inequalities. Recruitment algorithms, for example, trained on biased data might favour specific demographics over others.
  • Without transparency in AI decision-making processes, it is challenging to identify and address discriminatory outcomes.
  • AI tools and solutions developed by teams with limited diversity can lead to blind spots in understanding and addressing diverse needs.
  • Companies deploying biased AI systems can face reputational damage, lawsuits, and regulatory scrutiny.

Job displacement and economic impact

AI is transforming the job market, raising concerns about unemployment and economic inequality.

  • Routine manufacturing, logistics, customer service, and transportation jobs are highly susceptible to automation. Self-driving vehicles could replace millions of drivers, for example.
  • Transitioning displaced workers into new roles requires significant investment in training programmes and education. The lag between technological advancement and workforce adaptation is a major concern.
  • AI may disproportionately benefit those who own and develop the technology, widening the gap between low- and high-income groups.
  • While AI boosts productivity, the economic benefits may not translate into job creation, potentially leaving millions without viable employment.

Privacy

AI systems thrive on data, but this dependency raises concerns about privacy violations, unethical data usage, and mass surveillance.

  • Companies and governments could collect vast amounts of personal data to train AI models without explicit consent.
  • AI-powered surveillance tools like facial recognition cameras can track movements and activities, often infringing on civil liberties.
  • The centralisation of data for AI training can increase the risk of breaches, exposing sensitive information to hackers.
  • Using AI to analyse and link disparate data sources can make it nearly impossible for individuals to remain anonymous.

Loss of control

As AI systems grow more sophisticated, there is increasing concern about their autonomy and the potential for catastrophic misuse.

  • Advanced AI systems may act in ways their creators did not anticipate, potentially causing harm in critical areas such as healthcare or transportation.
  • AI-driven weapons could operate without human intervention, raising ethical and strategic dilemmas, including the potential for accidental escalation of conflicts.
  • If AI were to surpass human intelligence, it might prioritise its own goals over the well-being of humanity, leading to existential threats.
  • Many AI algorithms are complex and opaque, making it challenging to understand decision-making processes. This lack of transparency can lead to dangerous or harmful outcomes.
  • Governments and organisations struggle to keep up with the pace of AI development, creating a gap in oversight that could allow harmful applications to flourish.

I am confident that collectively, through international cooperation, we can proactively address these risks and concerns with the continued establishment of clear and enforceable regulations, ethical design and development, and increased public awareness.

Ethical considerations of AI

It is paramount to approach artificial intelligence development, deployment, and usage with a focus on ethics to ensure responsible AI innovation that enables trust, fairness, and societal benefit.

  • When developing AI systems, it is crucial to prioritise human well-being, autonomy, and dignity.
    • AI should enhance user capabilities and decision-making processes.
    • Design systems to accommodate people of all abilities and demographics.
    • Provide clear, understandable explanations of AI functionality and outcomes.
    • Incorporate mechanisms to prevent harm, misuse, or unintended negative consequences.
    • Regularly incorporate user feedback to improve AI systems and address potential concerns.
  • Transparency builds trust and understanding between users and AI systems, making it essential to communicate AI processes.
    • Users should always be aware of when they interact with AI technologies.
    • Provide detailed yet understandable explanations of how the AI operates and makes decisions.
    • Share potential risks, limitations, and intended uses of AI systems openly with stakeholders.
    • Be transparent about how AI models collect, use, and safeguard data.
    • Maintain an open dialogue with users, researchers, and regulators to ensure ongoing alignment with ethical standards.
  • Develop and maintain AI systems to promote equitable outcomes and avoid discrimination.
    • Conduct regular audits to identify and mitigate biases in data and algorithms.
    • Use diverse datasets to prevent systemic inequalities from being embedded into AI systems.
    • Test and validate systems to ensure fair treatment for all users.
    • Build AI solutions that actively address and reduce societal inequities.
    • Ensure compliance with laws and ethical norms to safeguard fairness and equality.
  • Protecting user data and respecting privacy rights is critical when designing and implementing AI systems.
    • Only collect the data necessary for the intended purpose.
    • Ensure sensitive data is anonymised to protect user identities.
    • Employ appropriate security measures to protect data from breaches or misuse.
    • Obtain explicit, informed consent for data collection and usage.
    • Align all practices with relevant privacy laws and regulations such as GDPR.
  • Accountability mechanisms ensure the responsible use of AI and the ability to address ethical challenges effectively.
    • Establish specialised teams or committees to oversee ethical compliance.
    • Conduct periodic reviews to verify adherence to ethical policies.
    • Define transparent processes to identify, address, and resolve issues related to AI systems.
    • Provide ongoing education for teams to remain informed on best practices and emerging ethical challenges.
    • Maintain accessible avenues for reporting concerns or suggesting improvements.
  • As technology and societal expectations evolve, so should the ethical frameworks surrounding AI.
    • Regularly review and update policies to address new challenges and opportunities in AI ethics.
    • Partner with global AI ethics communities to exchange insights and best practices.
    • Stay informed of advancements and risks to refine ethical approaches proactively.

I recently looked at the Certified Ethical Emerging Technologist (CEET), a certification from CertNexus. The certification marketplace is expanding as more professional bodies offer qualifications in AI. At this time, the extent of the overlap between the different certifications is unclear. CertNexus also offers the Certified AI Practitioner (CAIP) certification.

I chose to focus on the Artificial Intelligence Governance Professional (AIGP) from the International Association of Privacy Professionals (IAPP) and both Certified ISO/IEC 42001 Lead Auditor and Certified ISO/IEC 42001 Lead Implementer from the Professional Evaluation and Certification Board (PECB).

Unacceptable use of AI

The European Union’s Artificial Intelligence Act (EU AI Act) prohibits certain AI practices that pose unacceptable risks. The law considers these practices to significantly undermine fundamental rights, distort human behaviour, or cause harm to individuals or society. Here are some examples:

  • Deploying subliminal, manipulative, or deceptive techniques that distort individuals’ behaviour, impair their ability to make informed decisions, and cause significant harm.
    • AI-driven adverts with subliminal cues that exploit consumers’ unconscious desires, leading to overspending or unhealthy consumption habits.
    • Undermining informed decision-making with virtual assistants that guide users toward specific political agendas or products.
    • Use of AI in computer games to manipulate player behaviours into making excessive in-game purchases.
  • Exploiting vulnerabilities related to age, disability, or socioeconomic circumstances to distort behaviour in a way that causes or is likely to cause significant harm.
    • Use of AI to exploit low-income individuals by encouraging them to take out high-interest loans.
    • Using educational software to manipulate children’s choices or limit their learning potential based on stereotypes or biases.
    • Targeting elderly individuals with deceptive offers for unnecessary products or services, capitalising on cognitive impairments or isolation.
  • Using biometric systems to infer sensitive attributes such as race, political opinions, trade union membership, religious or philosophical beliefs, or sexual orientation.
    • AI systems profiling individuals based on facial features to infer their religious beliefs, leading to discriminatory treatment in public services or employment.
    • Systems that categorise by sexual orientation or political views, resulting in exclusion or targeting in advertising or societal participation.
  • Evaluating or classifying individuals or groups based on social behaviour or personal traits in ways that result in detrimental or unfavourable treatment unrelated to the original purpose of data collection:
    • Scoring individuals based on social media activity to determine access to housing, loans, or educational opportunities.
    • Monitoring employees’ behaviour and punishing them for perceived misalignment with organisational culture.
    • Allocating public services based on AI-generated social scores that disadvantage vulnerable groups.
  • Assessing or predicting an individual’s likelihood of committing criminal offences solely based on profiling or personality traits, except when augmenting human assessments grounded in objective and verifiable facts directly linked to criminal activity:
    • Identifying potential offenders based on socioeconomic background, place of residence, or prior associations.
    • Predictive policing models disproportionately target ethnic minorities or marginalised communities, reinforcing existing biases.
    • Systems that flag individuals as risks based on psychological assessments rather than direct criminal behaviour.
  • Compiling facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
    • AI systems that harvest facial images from social media platforms without consent to create mass surveillance databases.
    • Using AI to collect and store facial images captured by in-store cameras without proper notification or consent.
  • Using real-time remote biometric identification in publicly accessible spaces for law enforcement purposes.
    • Such as:
      • Scanning crowds at peaceful protests to identify participants for potential targeting.
      • Using remote biometric identification in shopping malls to monitor and track individuals in real time without evidence of criminal activity.
      • Deploying real-time facial recognition for general surveillance rather than specific investigations at public events.
    • Exceptions include:
      • Searching for missing persons, abduction victims, or individuals subjected to human trafficking or sexual exploitation.
      • Preventing a substantial and imminent threat to life, such as an act of terrorism.
      • Identifying suspects in serious crimes, including murder, rape, armed robbery, drug and weapons trafficking, and organised crime.

The EU AI Act places compliance obligations on businesses that use AI systems, even if licensed by third-party providers. Businesses remain responsible for ensuring the AI complies with the law when offered to EU customers. It is unlawful to develop or procure unacceptable AI from outside the EU and use it within the EU.