Protect Your Home with Land Registry Property Alerts

Imagine discovering that someone has taken out a mortgage on your home, or even sold it, without your knowledge. Property fraud is rare but financially devastating. Criminals target properties that are unmortgaged, rented out, or standing empty, especially when the owner lives elsewhere. The HM Land Registry Property Alert service offers a simple early-warning system for spotting potential problems. It can’t stop fraud by itself, but it gives you crucial time to act before a transaction is completed.

How the alert system works

When you sign up for Property Alerts, HM Land Registry emails you whenever significant activity is recorded against a monitored title, such as an application to register a change of ownership, a new mortgage, or a change to the registered owner’s details.

You can monitor up to ten properties per account, even if you don’t live at them. The service is free, quick to set up, and you can unsubscribe at any time.

Alerts don’t block applications. They simply let you know that something has been submitted, prompting you to check whether it’s legitimate. If you weren’t expecting any activity, you can intervene before money or ownership changes hands.

Who should register

Almost everyone can benefit, but it’s particularly valuable for:

  • Mortgage-free properties that criminals could attempt to mortgage or sell.
  • Rental or vacant properties where post might go unnoticed.
  • Elderly or vulnerable owners who may not spot irregularities quickly.
  • Second homes, or properties whose owners live overseas or spend long periods away.

If you fall into any of these categories, registering for alerts is one of the easiest and most effective precautions you can take. When a property is mortgaged, the lender’s financial interest provides a layer of protection: as the primary creditor, the mortgage company will not permit changes to ownership, or certain new charges such as additional loans, without its consent while the mortgage is outstanding. Secondary charges, such as secured loans, can still be placed on the property, however. Once the mortgage is fully repaid, the primary creditor’s protections no longer apply, which is one reason unmortgaged properties are such attractive targets.

How to strengthen your defences

Property alerts are only one layer of protection. Here are some additional options:

  • Consider monitoring properties belonging to family and friends, and have them monitor your property.
  • Ask a solicitor or conveyancer to add an anti-fraud restriction requiring identity certification before any sale or mortgage.
  • Stay alert to emails from the HM Land Registry Property Alert service, and use inbox rules to flag them if you receive a high volume of email. If an unexpected alert arrives, contact HM Land Registry straight away using the official number on their website.
  • If identity theft is suspected, call your bank, report to Action Fraud, and consider a Credit Industry Fraud Avoidance System (CIFAS) protective registration.

Once a fraudulent transaction is registered, unravelling it can be lengthy and costly, though not impossible. Although HM Land Registry indemnifies victims who lose property through no fault of their own, proving your claim and having the register corrected can be a long and distressing process. Prevention remains far easier than correction. Identifying problems and responding quickly is vital, and doesn’t require complex tools or expensive subscriptions.

You can register for free at https://propertyalert.landregistry.gov.uk/

Companies House: Protecting People with Identity Verification

Imagine discovering that you are listed as a director of a company you have never heard of, or, worse, that the business is being used for criminal activity. From 18th November 2025, Companies House will implement one of the most consequential reforms in recent years. Under the Economic Crime and Corporate Transparency Act 2023, every company director and every Person with Significant Control (PSC) must verify their identity before acting in that capacity.

While most commentary so far has focused on who needs to verify and how, the deeper story is about protection: protecting people from identity theft, misuse, and reputational harm, while strengthening the integrity of UK businesses.

Practical changes

The process for directors and PSCs will change fundamentally. New appointments must be verified before registration, and existing directors and PSCs will be required to complete verification by the date of their company’s next confirmation statement, within a 12-month transition period. Verification can be completed either through GOV.UK One Login or through an authorised corporate service provider, such as an accountant or solicitor.

Once verified, each individual will have a unique verified identity that serves as their secure identifier for all future filings. After the rules take effect, it will be an offence for an unverified individual to act as a director, or for a company to permit one to do so. Together, these measures mark a shift from a passive registry to an active verification system, one that checks who people are, not just what they type.

How the reform protects individuals

For years, people have found themselves listed as directors of companies they had never heard of. Fraudsters could simply type in a name and file it. The new system stops identity theft before it starts by ensuring that every appointment is tied to a verified identity, confirmed through secure government channels. It prevents criminals from registering fake companies under someone else’s name or using an address to lend legitimacy to fraud.

The reforms also build trust in public records. The Companies House register shapes perceptions among banks, clients, and regulators. With verified identities, the names listed will correspond to real, consenting individuals, making each professional record more credible and resistant to impersonation or error.

Verification also puts individuals in control of their corporate identity. A director’s verified identity becomes the key to their official record, ensuring that no one can appoint them to a company or amend their details without consent. This change gives individuals confidence that their name cannot be used behind the scenes without their knowledge.

Another important aspect is protection from unwanted liability. Under the old system, people could be falsely registered as company officers, attracting tax demands, debt notices, or legal correspondence they did not deserve. The verification process closes that loophole, meaning individuals can no longer be held accountable for companies they never agreed to join.

How the reform protects personal information

These reforms don’t just verify who people are; they also reduce how much of their personal information is exposed. Verification relies on secure digital checks using documents such as a passport or driving licence, but those documents are not stored or made public. Companies House will retain only the minimum information needed to maintain an accurate register, ensuring that identification data never appears online or remains in long-term storage.

Only verified individuals and authorised agents will be allowed to file or amend details, which means a person’s name cannot be added, edited, or reused without their verified code. This creates a built-in safeguard against unauthorised or malicious submissions.

Another major improvement is that one verification replaces repeated exposure. Previously, directors had to send ID documents multiple times for different incorporations or filings. Now, verification will typically happen once, using GOV.UK One Login or an authorised provider, after which the secure status can be reused. This reduces the number of data copies in circulation and lowers the risk of breaches or leaks.

Each submission to Companies House will now link to a verified individual or regulated service provider, creating a robust digital audit trail. If someone misuses another person’s information, it can be traced, making transparency itself a deterrent and ensuring that misuse becomes both detectable and punishable.

Companies House will also have stronger powers to remove false or outdated information and to suppress entries that pose a risk. Errors and outdated data can therefore be corrected more quickly, reducing long-term exposure. This aligns the new model with key data-protection principles (data minimisation, purpose limitation, and confidentiality), ensuring that corporate transparency finally coexists with personal privacy, something the UK register has long lacked.

Transparency, trust, and accountability

Every verified record will still be public, but every identity will be real, consent-based, and better protected. In a time when trust is fragile and information spreads instantly, this is not a minor upgrade. A verified register strengthens the system as a whole. It raises the bar for everyone, making it harder to create shell companies, curbing money-laundering, delivering better transparency, reinforcing accountability, and restoring trust. The reforms will help create a cleaner and more trustworthy business environment for all.

Shadow Data: Identifying hidden risks

In most organisations today, data is one of the most valuable assets, yet it is also one of the most difficult to control. Even with well-managed official systems, a parallel world of untracked, unmanaged, and unmonitored data often exists: shadow data. Previous articles of mine cover Shadow IT and Shadow AI; shadow data is the related problem of sensitive or business-critical data that has slipped outside approved processes and governance controls.

Shadow data exists outside sanctioned systems, controls, and oversight. It typically arises because people prioritise convenience, speed, or workarounds over policy. The problem is not that the data exists, but that it is often invisible to those responsible for protecting it.

Forms of shadow data

  • Unapproved copies of sensitive data. An analyst downloads customer records into a spreadsheet. The official database is secure, but the spreadsheet is not.
  • Data in unsanctioned apps, such as the use of personal cloud storage or messaging tools to share files instead of company-approved platforms.
  • Orphaned backups or snapshots. Forgotten database snapshots or cloud storage buckets remain accessible long after they are needed, often with excessive access rights.
  • Forgotten test and development data. Developers copy production data into test environments. These environments often lack the same protections as live systems, yet they still contain sensitive details.

Why shadow data matters

  • Shadow data is often outside encryption, access controls, or monitoring. Attackers will look for weak links, such as laptops, shared drives, or forgotten cloud storage.
  • Regulations such as GDPR require organisations to know where personal data resides. Shadow data undermines these compliance efforts and may lead to fines or sanctions.
  • Duplicate datasets lead to inconsistent reporting, poor decision-making, and unnecessary storage costs.
  • In the event of a breach, businesses may underestimate the scope because they are unaware of hidden datasets.

Working examples

A hospital stores patient records in a secure, encrypted database, but a doctor, needing to work quickly, exports patient details into a spreadsheet. The result:

  • Copies of data no longer under hospital control
  • Sensitive health data spread across multiple insecure locations
  • New compliance, legal, and reputational risk

A law firm manages client files in a secure document management system, but for convenience, solicitors, partners, trainees, or other staff:

  • Save case files to USB or laptop drives
  • Email document bundles through public email systems
  • Collaborate through personal Dropbox or OneDrive accounts

These shadow copies may contain privileged client information. If a laptop is lost or a client requests data deletion, the firm cannot guarantee removal of the unofficial copies. What began as minor workarounds now represents a serious compliance and reputational risk.

While shadow data often arises from legitimate needs, it introduces risks that can outweigh the convenience. For businesses bound by regulation, trust, and professional duty, shadow data can quietly erode compliance and expose sensitive information. One quick copy can multiply into a long-lasting vulnerability. Bringing shadow data into the light is no longer optional.
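Discovery is the practical first step, and it need not wait for specialist tooling. Below is a minimal sketch of a shadow-data sweep in Python: it walks a shared drive, flags export-style files, and checks readable text formats against crude personal-data patterns. The share path, file extensions, and regular expressions are all illustrative assumptions to adapt; a commercial data-discovery platform would go much further.

```python
"""Minimal shadow-data sweep (illustrative). Flags files on a shared
drive that look like unofficial copies of personal data."""

import re
from pathlib import Path

# File types that commonly hold ad-hoc data exports (assumption).
EXPORT_EXTENSIONS = {".csv", ".xlsx", ".xls", ".txt", ".json"}

# Crude indicators of personal data in text content (hypothetical
# examples): email addresses and UK National Insurance numbers.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ni_number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I),
}

def scan_share(root: str) -> list[dict]:
    """Walk `root` and report files that deserve a manual review."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix.lower() not in EXPORT_EXTENSIONS:
            continue
        finding = {"file": str(path), "reasons": ["export-style file type"]}
        # Only text formats are inspected for content; binary office
        # files are flagged on extension alone.
        if path.suffix.lower() in {".csv", ".txt", ".json"}:
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue
            for name, pattern in PATTERNS.items():
                if pattern.search(text):
                    finding["reasons"].append(f"matches {name} pattern")
        findings.append(finding)
    return findings

if __name__ == "__main__":
    for f in scan_share("/mnt/shared"):  # hypothetical share mount
        print(f["file"], "->", ", ".join(f["reasons"]))
```

Even a crude sweep like this tends to surface forgotten exports quickly; the findings then feed the harder conversation about why the copies were made in the first place.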

The Emergence of AI Insurance

Artificial intelligence is now embedded in decisions across finance, health, retail, and public services. With the upside comes exposure: model errors, bias, data misuse, and unpredictable failures. Insurers are beginning to write AI-specific cover, but as with car or home insurance, protection won’t be automatic. To secure a policy that is both affordable and viable for insurers, organisations will need to show reasonable precautions through robust governance, a living AI policy, and evidence that requirements are followed in practice.

The idea of insuring against AI risks may sound futuristic, but real-world harms are no longer hypothetical. From credit scoring to recruitment and medical imaging, failures are already creating measurable impacts. This shift explains why insurers are now exploring AI-specific products, and why businesses must increasingly demonstrate that they are managing these risks responsibly.

Lessons from other insurance products

The insurance industry has always operated on the principle of reasonable precautions. A policyholder must take steps to protect themselves and their property, otherwise the insurer may decline a claim. These expectations are explicit in policy terms and not left to interpretation.

  • Car insurance – policyholders are expected to lock the doors, remove the keys from the ignition, and avoid leaving valuables visible. If a car is stolen while the engine is running or the keys are inside, insurers will reject the claim on the grounds of negligence.
  • Home insurance – insurers require homeowners to close windows and lock all external doors when leaving the property. Many policies also specify the use of approved locks. If entry was gained through an open window or an unlocked door, theft claims can be denied. Some insurers also mandate working smoke alarms as a condition for fire coverage.
  • Travel insurance – travellers must take reasonable care of their belongings, such as keeping passports and valuables on their person or in a hotel safe. Losses that occur because items were left unattended, for example, on a sunbed at the beach, are commonly excluded.
  • Health insurance – policyholders must accurately disclose pre-existing medical conditions when applying for cover. Failure to disclose such information may result in claims being refused or the policy being voided entirely.

These examples illustrate a clear pattern: insurance is not designed to cover reckless or preventable losses. Instead, it assumes that the insured party will act responsibly, follow mandatory controls, and reduce the likelihood of avoidable claims.

Economic viability of AI insurance

For AI insurance to succeed, the policy must balance two interests: it must be affordable and worthwhile for the business while remaining sustainable and profitable for the insurer. Without this balance, either premiums will be prohibitively high, or insurers will withdraw cover altogether.

For businesses:

  • Predictable costs – Companies want insurance to protect against catastrophic or unexpected AI failures, not to replace routine risk management. If premiums are too high, businesses will either self-insure (absorbing risks internally) or avoid purchasing cover altogether.
  • Fair pricing for good governance – Businesses that can demonstrate alignment with ISO/IEC 42001, robust AI policies, and effective lifecycle controls should be rewarded with lower premiums. This mirrors the way homes with burglar alarms or cars with immobilisers qualify for discounts.
  • Avoidance of hidden gaps – Businesses need clarity on what is excluded. “Silent AI exclusions” in cyber, professional indemnity, or liability cover can lead to gaps that make coverage ineffective. A policy is only economically viable if it provides genuine protection against the risks the company faces.
  • Support for compliance – As AI regulations tighten (e.g., under the EU AI Act), insurance aligned with those requirements helps companies offset compliance costs by reducing the financial impact of failures or incidents. This strengthens the value proposition of cover.

For insurers:

  • Risk selection and underwriting discipline – Insurers need assurance that customers are acting responsibly. Without this, claims could spiral in frequency and size, making AI policies unprofitable. Structured frameworks like ISO/IEC 42001 or mappings to the NIST AI Risk Management Framework provide underwriters with measurable checkpoints.
  • Loss prevention through controls – By requiring baseline precautions, such as model validation, bias testing, and incident response plans, insurers reduce the likelihood of avoidable losses. This preserves the claims ratio and ensures premiums reflect true residual risk rather than gross exposure.
  • Pricing for uncertainty – AI carries novel risks such as systemic bias, cascading errors, or IP infringement at scale. Insurers must price in these uncertainties while avoiding premiums so high that they deter buyers. The only way to narrow the pricing gap is to insist on strong governance evidence from policyholders.
  • Limiting systemic exposure – Some AI failures could create correlated losses across many policyholders (e.g., reliance on the same third-party model provider). To remain viable, insurers may cap aggregate exposures, share risk through reinsurance, or demand that businesses diversify suppliers.

The shared value proposition rests on balance. For businesses, AI insurance must deliver affordable premiums, meaningful protection, and the reassurance that one failure will not destabilise financial resilience. For insurers, it must reduce uncertainty, limit avoidable claims, and ensure that payouts respond to unforeseen incidents rather than preventable negligence. The bridge between these interests is demonstrable governance. An AI policy aligned with ISO/IEC 42001, not just written but embedded across the organisation, provides the evidence of responsibility that makes the economics viable. It is this visible governance that enables insurers to underwrite sustainably and allows businesses to access cover with confidence.

Artificial Intelligence Impact Assessment (AIIA)

Insurance ultimately exists to protect people, not just balance sheets. For AI, that means insurers will increasingly look beyond technical controls to the human and societal impacts of algorithmic decisions. An Artificial Intelligence Impact Assessment provides the missing bridge:

Individuals:

  • How are people affected if the AI system fails?
  • Are there safeguards against discrimination, wrongful denial of services, or reputational harm?
  • Insurers will expect evidence of user-centric risk analysis and redress mechanisms.

Groups:

  • Does the AI system disadvantage certain demographics, communities, or professions?
  • Could biases compound existing inequities?
  • Demonstrating proactive bias testing and mitigation will be essential for cover.

Society:

  • Could widespread adoption of the system cause systemic harm (e.g., destabilising financial markets, spreading misinformation, or eroding trust)?
  • Insurers may cap exposure to such risks, but businesses must still show they have mapped and mitigated them.

The business:

  • Beyond compliance, how would a failure affect brand, trust, and financial resilience?
  • AIIAs document these scenarios and demonstrate to insurers that risks are both understood and actively managed.

By embedding the AIIA into governance, businesses not only strengthen their insurance case but also align with regulators, stakeholders, and the broader social licence to operate.

Clearly defined AI policy

Insurers will expect companies to have a formal, leadership-approved AI policy that sets the tone for safe and responsible use of artificial intelligence. The policy should not only exist on paper but must be embedded into daily practice, communicated across the business, and reinforced by training and accountability.

  • Leadership-approved and organisation-wide.
  • Sets out principles of safe, lawful, and ethical AI use.
  • Communicated to staff and backed by regular training.
  • Assigns clear roles and responsibilities for oversight.

Defined scope and AI inventory

An organisation cannot manage what it does not know it has. Insurers will look for a clear inventory of AI systems that captures ownership, purpose, and risk classification. The inventory should demonstrate that the business understands which systems are critical, which are high-risk, and how external stakeholder and regulatory expectations are being addressed; a minimal sketch of such a record follows the list below.

  • Comprehensive list of AI systems in use.
  • Ownership and accountability documented.
  • Risk categories assigned (low, medium, high).
  • Stakeholder and regulatory requirements considered.
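As a sketch only, assuming illustrative field names and a three-tier risk scale rather than any prescribed schema, an inventory entry might be captured like this:

```python
"""Illustrative AI inventory entry. Field names and risk tiers are
assumptions, not a prescribed schema."""

from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class AISystemRecord:
    name: str                 # system name, e.g. "CV screening model"
    owner: str                # named accountable individual
    purpose: str              # business purpose in plain language
    risk_tier: RiskTier       # drives the level of safeguards
    regulations: list[str] = field(default_factory=list)
    last_reviewed: str = ""   # ISO date of the last governance review

inventory = [
    AISystemRecord(
        name="CV screening model",
        owner="Head of Talent",
        purpose="Shortlist applicants for interview",
        risk_tier=RiskTier.HIGH,
        regulations=["UK GDPR", "Equality Act 2010"],
        last_reviewed="2025-06-30",
    ),
]

# High-risk systems are the ones needing enhanced safeguards.
print([r.name for r in inventory if r.risk_tier is RiskTier.HIGH])
```

A spreadsheet can hold the same fields; what insurers will care about is that the register exists, is owned, and is kept current.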

Structured risk assessment

Insurance is built on understanding risk, and insurers will expect businesses to do the same for their AI systems. Companies must carry out structured assessments of legal, ethical, operational, and security risks and ensure these are revisited whenever a model is retrained, repurposed, or deployed in a new context; a simple scoring sketch follows the list below.

  • Formal AI risk assessments completed and updated.
  • Legal, ethical, operational, and security risks identified.
  • High-risk systems subject to enhanced safeguards.
  • Regular reviews triggered by system changes or retraining.
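Many organisations reduce this to a likelihood-and-impact matrix. The sketch below is purely illustrative: the 1-5 scales and tier thresholds are assumptions, not a standard, and a real methodology would score legal, ethical, operational, and security risks separately.

```python
"""Illustrative likelihood-and-impact scoring. The 1-5 scales and
thresholds are assumptions; substitute your own methodology."""

def risk_tier(likelihood: int, impact: int) -> str:
    """Classify a risk scored 1-5 for likelihood and 1-5 for impact."""
    score = likelihood * impact   # simple multiplicative matrix
    if score >= 15:
        return "high"             # enhanced safeguards required
    if score >= 8:
        return "medium"
    return "low"

# Re-run whenever a model is retrained, repurposed, or redeployed.
print(risk_tier(likelihood=4, impact=5))  # -> high
```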

The Artificial Intelligence Impact Assessment (AIIA) covered earlier will also include risks to individuals, groups, society, and the business.

Operational resilience

Insurers will expect to see strong operational controls that span the entire lifecycle of AI systems, from design to retirement. This means having processes for responsible development, secure data handling, rigorous validation, controlled retraining, and clear points where human judgment is required.

  • Documented lifecycle processes for design, training, deployment, monitoring, and retirement.
  • Strong data governance (provenance, rights, privacy, and security).
  • Validation for accuracy, bias, robustness, and reliability.
  • Change control processes and retraining safeguards.
  • Clear rules on human oversight and intervention.

Monitoring and incident readiness

Continuous monitoring is critical for keeping AI systems safe and reliable. Insurers will expect organisations to track performance, fairness, and compliance on an ongoing basis and to have an incident response plan ready for when things go wrong. That plan must be tested, so staff know how to act in real scenarios; a minimal monitoring check is sketched after the list below.

  • Ongoing monitoring for accuracy, bias, and compliance.
  • Documented incident response plan for AI-specific failures.
  • Defined escalation procedures and responsibilities.
  • Evidence of rehearsals or drills of the response plan.
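To make this concrete, here is a minimal sketch of a periodic health check, assuming hypothetical alert thresholds and a synthetic batch of decisions. Production monitoring would add statistical tests, drift detection, and richer fairness metrics.

```python
"""Illustrative periodic health check: rolling accuracy and a simple
fairness gap compared against alert thresholds (assumed values)."""

ACCURACY_FLOOR = 0.90      # alert if rolling accuracy drops below this
PARITY_GAP_CEILING = 0.05  # alert if positive-outcome rates diverge by more

def check_health(outcomes: list[dict]) -> list[str]:
    """`outcomes` items: {"correct": bool, "group": str, "positive": bool}."""
    alerts = []
    accuracy = sum(o["correct"] for o in outcomes) / len(outcomes)
    if accuracy < ACCURACY_FLOOR:
        alerts.append(f"accuracy {accuracy:.2%} below floor")
    # Demographic parity: compare positive-outcome rates per group.
    rates = {}
    for group in {o["group"] for o in outcomes}:
        members = [o for o in outcomes if o["group"] == group]
        rates[group] = sum(m["positive"] for m in members) / len(members)
    gap = max(rates.values()) - min(rates.values())
    if gap > PARITY_GAP_CEILING:
        alerts.append(f"parity gap {gap:.2%} across groups {sorted(rates)}")
    return alerts  # feed into the documented escalation procedure

# Synthetic batch of recent decisions for demonstration:
batch = [
    {"correct": True, "group": "A", "positive": True},
    {"correct": True, "group": "A", "positive": False},
    {"correct": False, "group": "B", "positive": False},
    {"correct": True, "group": "B", "positive": False},
]
print(check_health(batch))
```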

Evidence and documentation

In disputes or claims, evidence matters. Insurers will expect companies to maintain detailed records that prove risks were managed responsibly. This includes documentation of risk assessments, testing results, monitoring reports, corrective actions, and audit trails of significant model changes; one illustrative way to protect those trails is sketched after the list below.

  • Documented records of all risk and control activities.
  • Audit trails for model changes and decision-making.
  • Retention of evidence to support claims or investigations.
  • Corrective actions recorded and tracked to closure.
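One illustrative way to make audit trails tamper-evident, offered here as a sketch rather than a mandated approach, is a hash-chained, append-only log. The file name and event fields below are hypothetical.

```python
"""Illustrative tamper-evident audit trail: an append-only JSON-lines
log where each entry embeds the hash of the previous one. The file
name and event fields are hypothetical."""

import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("model_audit.jsonl")  # hypothetical log location

def append_event(action: str, model: str, actor: str) -> None:
    """Append an event chained to the hash of the previous entry."""
    prev_hash = "genesis"
    if LOG.exists():
        lines = LOG.read_text().strip().splitlines()
        if lines:
            prev_hash = hashlib.sha256(lines[-1].encode()).hexdigest()
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,        # e.g. "retrained", "threshold changed"
        "model": model,
        "actor": actor,          # named accountable individual
        "prev_hash": prev_hash,  # links entries so edits are detectable
    }
    with LOG.open("a") as f:
        f.write(json.dumps(event, sort_keys=True) + "\n")

append_event("retrained", "credit-scoring-v3", "j.smith")
```

Because each entry embeds the hash of the previous line, any after-the-fact edit breaks the chain and is detectable on verification.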

Leadership and accountability

Responsible AI cannot be delegated entirely to technical teams. Insurers will expect senior leadership to be visibly involved in oversight, ensuring that governance is a cultural priority. Named individuals must be accountable for AI risks, and there should be evidence that leaders actively drive a culture of responsibility.

  • Senior management visibly engaged in AI oversight.
  • Named individuals accountable for risk and compliance.
  • Oversight committees or equivalent governance structures.
  • A culture of responsibility supported from the top down.

Regulatory alignment

Compliance is no longer optional; it is a condition for trust. Insurers will expect companies to demonstrate that their AI systems meet legal obligations, such as data protection, equality laws, and sector-specific requirements. Businesses should also show they are prepared to adapt quickly as new regulations emerge.

  • Systems designed to comply with data protection and discrimination laws.
  • Alignment with industry-specific regulations.
  • Processes in place to track regulatory change.
  • Adaptation mechanisms to meet new legal requirements.

Competence and training

Even the best policies fail if staff lack the knowledge to follow them. Insurers will expect evidence that employees at all levels are trained to use AI responsibly. This includes broad AI literacy for general staff and deeper, role-specific training for developers, auditors, and risk managers.

  • AI literacy training for all employees.
  • Specialist training for developers, auditors, and oversight roles.
  • Mandatory training tied to AI policy requirements.
  • Records of training completion and effectiveness.

Ongoing review and improvement

AI governance must evolve with the technology itself. Insurers will expect companies to review their systems regularly, track and close corrective actions, and commit to continual improvement. This shows that businesses are not just meeting today’s standards but preparing for tomorrow’s risks.

  • Regular reviews of AI risks, incidents, and performance.
  • Documented management reviews with action tracking.
  • Corrective actions logged and closed.
  • Commitment to continual improvement and adaptation.

From risk transfer to building trust

Insurers will expect companies to show that they are treating AI with the same care as any other business-critical system. Strong policies, structured risk management, lifecycle controls, and a culture of responsibility will be seen as the “locked doors and smoke alarms” of AI, essential for affordable and dependable coverage.

AI insurance is not just about transferring risk; it is about building trust in a technology that is both powerful and unpredictable. Just as locked doors and smoke alarms became shorthand for responsible home ownership, AI policies, impact assessments, and lifecycle controls will become the baseline for responsible AI adoption. Insurers, businesses, regulators, and society all stand to benefit if these foundations are laid early.