The comfortable myth of “the hack”

When that memo lands announcing an “unauthorised access incident”, most of us imagine a dramatic cyber-attack: dark figures in hoodies, exotic malware and sweeping system lock-downs. In reality, many so-called breaches look very different. Data is often exfiltrated slowly, using stolen or reused credentials rather than flashy zero-day exploits, and detection can lag for months while attackers lurk undetected in systems. Statistics from 2025 show that stolen credentials now account for a significant proportion of breaches, and that attackers using them can dwell deep inside networks for extended periods before discovery.

This article starts with a provocative premise: most corporate data theft is not a technical surprise, it is a managerial inevitability. It is not a manual on firewalls and patches, but a mirror held up to boards and executives.

Despite heavy regulatory pressure in the EU and UK under the UK/EU General Data Protection Regulation (GDPR) and the Network and Information Security Directive 2 (NIS2), and rising enforcement by the Information Commissioner’s Office (ICO), incident reports continue to rise and breaches are accelerating. This is no longer a comfortable myth; it is the new competitive risk reality.

Hack… or Management Decision?

When corporate data is stolen, calling it “a hack” often obscures the real story: a series of everyday leadership choices that created the conditions for the theft. Too often, decision-makers prioritise speed and convenience over disciplined access control. Rapid digital transformation and cloud migrations have become business imperatives, but without rethinking who needs access to what, and why, organisations can end up exposing their most sensitive information almost by design.

Take the 2022 LastPass breach, where a senior engineer’s personal device and linked credentials gave attackers access to backup vaults of user data. Investigations later concluded that organisational choices around account linking and device access were significant contributors to the breach. Similarly, the 2023 Capita breach, which saw personal data on millions of pension holders exfiltrated, highlighted long-standing weaknesses in security operations and oversight that had not been addressed despite prior warnings, failures in organisational and technical controls for which the ICO later imposed a substantial penalty.

These are not exotic exceptions. They are emblematic of a broader pattern where real-world business decisions, such as how access is granted to third parties, how sales incentives encourage data sharing, and how merger and acquisition (M&A) integrations inherit legacy systems, quietly accumulate “decision debt”. This debt creates vulnerabilities that only become visible when data slips out of the organisation.

What’s missing in many boardrooms is a fundamental shift in question-framing. We should ask not merely “Are we compliant?” but “Where could critical data leave without us noticing?” Addressing this cultural blind spot is the first step to moving beyond reactive security models toward resilient governance.

Your Biggest Cyber Risk Has a Staff Pass

Most organisations picture insider data theft as a morality play, with the disgruntled employee, the rogue contractor and the malicious mole. In reality, most data loss involving insiders is neither malicious nor dramatic. It is the predictable outcome of systems designed for speed, collaboration and trust, operating exactly as intended.

Consider the departing employee who still has access to shared drives, customer relationship management (CRM) systems and cloud storage for weeks after their notice period ends. Numerous breach investigations show that data is frequently copied or synchronised during this window, not out of criminal intent, but to “take work with them” or to ensure continuity. The Verizon Data Breach Investigations Report 2024 confirms that misuse of legitimate credentials remains one of the most common pathways for data exposure.

Contractors and consultants pose a similar challenge. Short-term commercial relationships often come with long-term access, especially where projects span multiple systems. The UK National Audit Office has repeatedly warned that third-party access is poorly monitored across both public and private sectors, creating accountability gaps that attackers later exploit.

Then there is shadow data. Collaboration platforms, shared links and personal cloud accounts quietly create parallel copies of sensitive information outside formal governance. Add generative AI tools and personal productivity software, and the problem accelerates. Microsoft’s Work Trend Index 2024 highlights a sharp rise in employees pasting corporate data into AI tools to boost productivity, often without understanding where that data is stored or reused.

This is what security professionals now call benign exfiltration: data leaving the organisation without malicious intent, but with potentially enormous strategic impact. Hybrid and remote work have normalised portability, while flexible labour markets across the EU and UK have weakened traditional assumptions about loyalty and long tenure.

Traditional insider-threat models, built around intent and detection, struggle to cope with this reality. The risk no longer wears dark glasses and a disguise, but carries a staff pass, logs in legitimately, and walks out the door one synchronised folder at a time.

Funding the Wrong Defences

In many organisations, cybersecurity investment is highly visible yet oddly ineffective. Budgets flow towards perimeter tools, glossy dashboards and annual awareness training, all designed to demonstrate action rather than deliver resilience. This is cybersecurity theatre: activity that reassures boards without materially reducing the likelihood or impact of data theft.

Consider how success is often measured. Boards are shown neat dashboards tracking patching rates, phishing simulations and compliance scores. These metrics create comfort, yet they say little about how quickly sensitive data could be copied, reused or exfiltrated once an attacker or a legitimate user is inside. The UK Cyber Security Breaches Survey 2024 notes that organisations reporting high confidence in their controls still experience significant incidents, suggesting a disconnect between reported maturity and real exposure.

Annual training is another ritual. Staff click through generic modules, pass a quiz and then return to unchanged behaviours. Evidence from the National Cyber Security Centre shows that awareness alone rarely translates into sustained behavioural change, especially when productivity pressures reward shortcuts.

Compliance plays a central role in sustaining this illusion. Regulatory alignment with GDPR or NIS2 is treated as reputational insurance and proof that “reasonable steps” were taken. Yet the European Union Agency for Cybersecurity (ENISA) has warned that maturity models and tick-box assessments can obscure systemic weaknesses, particularly around data flows and access sprawl. Vendor assurances compound the problem, offering confidence based on tool coverage rather than organisational reality.

At heart, this is a governance failure. Cyber risk is framed as an IT cost centre or a quarterly reporting item, delegated and deferred. Rarely is it treated as an organisational design issue, or as a leadership accountability question about how work actually gets done. Until boards shift their focus from visible defences to structural resilience, the theatre will continue and so will the breaches.

Reframing responsibility

Corporate data theft is rarely accidental. As this article has shown, it is usually the foreseeable outcome of organisational choices about access, incentives and oversight, not a bolt-from-the-blue technical failure. That is why responsibility cannot be delegated to IT or outsourced to vendors. Leadership creates the conditions in which data is either contained or quietly allowed to drift. Guidance from the National Cyber Security Centre and ENISA increasingly stresses this organisational dimension, yet practice still lags behind insight.

That’s worrying, but the next question is more uncomfortable still: what happens after data is stolen? Who ultimately benefits, and why is the damage so often invisible yet permanent? Article 2 in this two-part series on corporate data theft will widen the lens from how data is lost to what that loss really means in competitive, geopolitical and strategic terms.

And what about you…?

  • Which everyday business decisions we’ve made for speed, growth or convenience have quietly increased our exposure to data theft?
  • How confident are we that our assumptions about trust, loyalty and access still reflect how people actually work today?