2025-10-21
Default passwords and unmanaged routers remain weak points in modern networks. Locking them down protects the backbone of business operations.
Image credit: Created for TheCIO.uk by ChatGPT
Every sophisticated breach starts somewhere simple — a forgotten router, a shared admin password, or an unpatched firewall. Network hardware may not grab attention, but it’s where compromise begins.
Locking down network equipment is awareness in action. Change default credentials. Disable unused ports. Keep firmware current. Restrict management interfaces to known IPs.
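To make that review repeatable, turn the checklist into something a script can run on a schedule. The sketch below is a minimal Python example that walks a device inventory and flags the common gaps; the CSV columns are assumptions for illustration rather than any standard export, so map them to whatever your asset register actually holds.

```python
import csv
from datetime import date, datetime

# Assumed inventory columns: hostname, default_creds_changed (yes/no),
# firmware_last_updated (YYYY-MM-DD), mgmt_allowlist (semicolon-separated IPs)
FIRMWARE_MAX_AGE_DAYS = 180  # assumption: tune to your patch cadence

def audit(inventory_path: str) -> list[str]:
    findings = []
    with open(inventory_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["hostname"]
            if row["default_creds_changed"].strip().lower() != "yes":
                findings.append(f"{host}: default credentials still in place")
            updated = datetime.strptime(row["firmware_last_updated"], "%Y-%m-%d").date()
            if (date.today() - updated).days > FIRMWARE_MAX_AGE_DAYS:
                findings.append(f"{host}: firmware not updated since {updated}")
            if not row["mgmt_allowlist"].strip():
                findings.append(f"{host}: management interface not restricted to known IPs")
    return findings

if __name__ == "__main__":
    for finding in audit("network_devices.csv"):
        print(finding)
```

Run it weekly and route the output to whoever owns the estate; an empty report becomes the target.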
Awareness isn’t just about people; it’s about the systems they depend on. When the infrastructure is hardened, human mistakes carry less consequence.
Review network devices as seriously as you review policies. Each forgotten setting can become an open door. The difference between exposure and resilience often comes down to what’s running quietly in the corner.
What’s your take? How often does your team review network devices — and who owns that responsibility?
Let’s share the good, the bad and the messy middle of protecting the hardware behind the headlines.
2025-10-20
When an incident hits, clarity beats complexity. A one-page plan everyone can understand works better than a manual no one reads.
Image credit: Created for TheCIO.uk by ChatGPT
When something goes wrong, people reach for what they remember, not what’s buried in a policy document. A short, plain-language incident guide can save hours when clarity matters most.
The best plans fit on a single page. Who to call. What to contain. When to escalate. No jargon, no acronyms, no delay. It’s a tool for humans under pressure, not auditors after the fact.
Cyber incidents are messy. Plans shouldn’t be. Write them for the people who will actually use them. Strip away what’s nice to know and keep only what’s needed in the first thirty minutes.
When the guide is printed, pinned, and shared, it becomes part of the culture. Awareness stops being theoretical. It becomes a muscle memory that helps the organisation recover faster.
What’s your take? Does your team have a one-page response guide ready or a binder nobody opens until it’s too late?
Let’s share the good, the bad and the messy middle of preparing for real incidents.
2025-10-18
The second week of Cyber Awareness Month focused on secure design — making safety automatic and the right action the easy one. From MFA to password managers, the message was clear: defaults decide culture.
Image credit: Created for TheCIO.uk by ChatGPT
Week Two of Cyber Awareness Month was about defaults and design — the invisible choices that make safety easy or impossible. Where Week One focused on leadership and example, this week turned to the systems and settings that shape behaviour.
It began with MFA as the baseline, the simplest and most reliable control against account takeover. If multi-factor authentication isn’t everywhere, it’s not enough. Making it mandatory rather than optional closes one of the most common gaps.
Next came safer sharing and least privilege. Open-by-default tools make exposure inevitable. Tightening access controls and flipping defaults to private turns caution into normality rather than effort.
Midweek, we looked at email friction that protects. Simple design tweaks — external sender banners, delay-before-delivery, and visual warnings — create moments to pause. Awareness becomes a feature of the system, not just a state of mind.
Then came auto-update and secure browsers. Attackers exploit delay, not mystery. Systems that update automatically close windows of opportunity before exploits spread. Automation isn’t a luxury; it’s hygiene.
Finally, password managers by default wrapped up the week with a reminder that user experience and security aren’t opposites. When the password manager is built-in, people stop reusing passwords and start using the tools that protect them without needing to think about it.
Together, these stories show that awareness isn’t only about people paying attention, it’s about systems that support the right behaviour. Secure design is awareness embedded in workflow. Every safe default is one less decision that depends on memory or luck.
As we head into Week Three, "Building resilience", the focus shifts from prevention to preparation. Incidents will still happen. The question becomes: how quickly do we see them, how well do we respond, and how ready are we to recover?
What’s your take? Which small design change has made the biggest impact on your organisation’s security behaviour?
Let’s share the good, the bad and the messy middle of making safety the default setting.
2025-10-17
Standardising on a password manager removes friction and stops reuse. Make the secure choice the easiest one.
Image credit: Created for TheCIO.uk by ChatGPT
Password complexity rules don’t protect anyone. They frustrate users, encourage reuse and invite workarounds. A good password manager fixes that by design.
When organisations provide a managed password manager, they remove the biggest cause of weak security: human memory. It’s faster, safer and easier to audit. Most importantly, it builds consistency across systems and teams.
Making password managers the default shifts awareness from caution to confidence. People stop inventing passwords and start using strong, unique credentials automatically. The secure option becomes the natural one.
What’s your take? Has your organisation standardised password management — or are staff still left to figure it out alone?
Let’s share the good, the bad and the messy middle of making safety simple.
2025-10-16
Attackers exploit the lag between patch and deployment. Auto-update closes that window and keeps protection ahead of threats.
Image credit: Created for TheCIO.uk by ChatGPT
Every missed update is an open door. Attackers don’t find new exploits every day, they use the old ones that still work because updates were delayed or disabled.
Auto-update removes that choice. It ensures the latest security fixes land before attackers use them. Combined with managed browsers and password managers, it reduces human error and gives users fewer decisions to make.
If updates rely on people remembering to click “install”, you’re already behind. Build automation that patches quietly in the background. Every system that updates itself is one less risk you have to chase.
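Measuring the lag is the first step towards closing it. As a rough sketch, assuming a Debian or Ubuntu host, a few lines of Python can surface how many packages are still waiting so the window is visible rather than forgotten:

```python
import subprocess

def pending_upgrades() -> list[str]:
    """Count packages awaiting upgrade on a Debian/Ubuntu host."""
    out = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    # The first line is a "Listing..." header; package lines contain a "/".
    return [line.split("/")[0] for line in out[1:] if "/" in line]

if __name__ == "__main__":
    pending = pending_upgrades()
    if pending:
        print(f"{len(pending)} packages awaiting update, e.g. {', '.join(pending[:5])}")
    else:
        print("Fully patched.")
```

Feed the count into a dashboard and the trend tells you whether automation is actually working.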
What’s your take? Does your organisation enforce auto-update — or still rely on reminders and good intentions?
Let’s share the good, the bad and the messy middle of automating resilience.
2025-10-15
A little friction goes a long way. Banners, warnings and short delivery delays create a pause that stops costly mistakes.
Image credit: Created for TheCIO.uk by ChatGPT
Most email risks aren’t technical, they’re human. A message arrives, looks urgent, and gets actioned without pause. The fix is to make that pause impossible to skip.
Email friction helps. External sender banners, display-name checks and short delivery delays for unknown domains all add seconds that save hours. They break the automatic response loop that attackers rely on.
It’s not about slowing people down. It’s about building space to think. A small, deliberate delay before delivery stops a message from reaching the inbox at the worst possible moment, when pressure meets distraction.
Good design protects people from their instincts. Email friction isn’t an inconvenience; it’s insurance for your attention.
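For teams that run their own mail filtering, the logic is small enough to prototype before committing to gateway rules. This is a toy sketch only: the internal domain, the seeded allowlist and the banner mechanism are all assumptions, and a real deployment would use your mail gateway's native controls.

```python
from email.message import EmailMessage

INTERNAL_DOMAIN = "example.co.uk"                         # assumption: your mail domain
KNOWN_DOMAINS = {"example.co.uk", "trusted-partner.com"}  # assumption: seeded allowlist
HOLD_MINUTES = 5                                          # short pause for first contact

def apply_friction(msg: EmailMessage) -> tuple[EmailMessage, int]:
    """Tag external mail and return a delivery delay in minutes for unknown senders."""
    sender_domain = (msg["From"] or "").rsplit("@", 1)[-1].strip(">").lower()
    delay = 0
    if sender_domain and sender_domain != INTERNAL_DOMAIN:
        msg["X-External-Sender"] = "true"   # a client rule renders this as a banner
        subject = msg["Subject"] or ""
        del msg["Subject"]
        msg["Subject"] = "[EXTERNAL] " + subject
        if sender_domain not in KNOWN_DOMAINS:
            delay = HOLD_MINUTES            # first contact: hold briefly before delivery
            KNOWN_DOMAINS.add(sender_domain)
    return msg, delay
```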
What’s your take? Has your organisation built friction into communication, or left staff to rely on luck and caution?
Let’s share the good, the bad and the messy middle of designing safer workflows.
2025-10-14
The latest NCSC Annual Review makes one point clear: the threat picture is intensifying. Severity is rising, ransomware remains the top disruptor, and secure-by-default behaviour has never mattered more.
Image credit: National Cyber Security Centre / LinkedIn
Cyber is no longer a side issue. The NCSC’s Annual Review 2025 makes clear that the threat picture has intensified and that action must be both immediate and measurable. The report is practical: it shows where incidents are rising, which weaknesses are being exploited and what simple steps can protect the majority of organisations.
It also comes with a strong message from NCSC CEO Dr Richard Horne, shared on LinkedIn as he launched the new Cyber Action Toolkit:
“Cyber attacks aren’t just a matter of computers and data. They impact growth and prosperity. Safety and national security. Reputations, operations, bottom lines. Lives and livelihoods.”
Horne emphasised that every organisation, not just critical infrastructure, needs to act now. The Cyber Action Toolkit is designed for sole traders and small businesses, helping them take straightforward, effective steps against cybercrime.
Nearly half of all incidents handled by the NCSC last year were nationally significant, and 4 percent were highly significant, a 50 percent rise and the third consecutive annual increase.
The NCSC managed 429 incidents, up from 289 in 2024, with 204 nationally significant.
A small number of vulnerabilities created disproportionate damage. Three CVEs affecting Microsoft SharePoint, Ivanti Connect Secure, and Fortinet FortiGate were linked to 29 major incidents.
Action for CIOs
Treat vulnerability management as a strategic discipline. Report mean time to remediate at board level. Treat unpatched legacy systems as a resilience liability, not technical debt.
Ransomware remains the dominant and most disruptive threat across the UK economy.
The top reporting sectors were academia, finance, engineering, retail and manufacturing.
Retail incidents including Co-op and Marks & Spencer feature in the review’s timeline.
Action for CIOs
Only 14 percent of UK firms reviewed their immediate suppliers for cyber risk.
The NCSC, UK banks and government now urge supply chains to adopt Cyber Essentials and use the IASME bulk lookup to verify certification.
The review also calls for radical transparency: clear, factual information on software versions, update posture and internet exposure.
Action for CIOs
The review details sustained campaigns by China, Russia, Iran and North Korea.
A China-linked botnet of more than 260,000 devices was disrupted.
Russian GRU operations targeted Western tech firms; Iranian activity tracked regional conflict; DPRK actors continued revenue-driven attacks and crypto theft.
Action for CIOs
AI has accelerated attack tempo, not rewritten the rules. Adversaries are using it for reconnaissance, phishing and exploit discovery.
The UK responded with the AI Security Code of Practice and launched the Lab for AI Security Research (LASR), which has delivered its first full year of operational work.
Action for CIOs
The Cyber Essentials scheme is ten years old and still working. Certification volumes rose 17.5 percent for Cyber Essentials (CE) and 17.3 percent for CE Plus last year.
Evaluation shows higher senior engagement and customer trust. More than 850 organisations have been funded through the government support programme.
Action for CIOs
The NCSC is urging adoption of passkeys and phishing-resistant authentication, moving towards digital credentials and wallet-based identity.
Action for CIOs
Dr Richard Horne’s message is unambiguous: cyber risk is economic and social, not just technical. The NCSC’s goal is to make secure behaviour easy through free toolkits, standard frameworks and protective services.
For CIOs, this is the moment to align resilience with growth. The organisations that implement these steps will be faster, more reliable and easier to trust.
Sources: NCSC Annual Review 2025 (PDF); NCSC LinkedIn post, October 2025.
2025-10-14
Open-by-default systems make exposure inevitable. Restricting access and sharing to what’s needed reduces both risk and noise.
Image credit: Created for TheCIO.uk by ChatGPT
Open-by-default collaboration is a design flaw, not a feature. Most data leaks start with a well-intentioned share. Someone gives “everyone” access to a folder, sends a file to the wrong email address, or leaves a shared link active long after a project closes.
These aren’t malicious acts, they’re symptoms of systems that prioritise convenience over control. If it’s easier to make something public than to share it correctly, people will take the path of least resistance.
The modern workplace runs on collaboration. SharePoint sites, Teams channels, Google Drives, Slack links, all designed to make information flow. Yet every open folder, unrestricted link or inherited permission is a potential breach waiting to happen.
When an external consultant joins a project, how many shared folders do they automatically see? When someone changes role, how long before their old access is reviewed or removed? When teams grow fast, these questions often go unanswered until there’s an incident.
The problem isn’t bad people, it’s bad defaults.
Least privilege is one of the oldest principles in cyber security, but it’s often misunderstood as a blocker. In reality, it’s a productivity tool. The fewer distractions, duplicates and irrelevant folders people see, the easier it is to find what matters.
It’s also a form of digital hygiene. Every extra permission is a door left ajar. Each open link increases the chance that data will end up in the wrong hands. When defaults are private, people must make a conscious choice to share, and that moment of intent builds awareness.
Leaders can make this cultural. Model the behaviour by asking: Who really needs access to this? Encourage teams to review shared folders quarterly. Make it normal to remove old access, not awkward.
Technology can help, but only if configured with purpose.
Automation can take the pain out of good practice. Alerts when files are shared externally, dashboards showing over-shared content, or automatic expiry of guest accounts can all help maintain control without relying solely on user memory.
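If your collaboration platform exports sharing events, even a simple scan surfaces over-sharing before it becomes an incident. The sketch below assumes a hypothetical JSON-lines audit log; the field names are invented for illustration, so map them to whatever your platform actually emits.

```python
import json
from datetime import datetime, timedelta

GUEST_MAX_AGE_DAYS = 30  # assumption: guest access gets re-reviewed monthly

def review_sharing(audit_log_path: str) -> list[str]:
    """Scan a JSON-lines sharing log (hypothetical schema) for risky entries."""
    alerts = []
    cutoff = datetime.utcnow() - timedelta(days=GUEST_MAX_AGE_DAYS)
    with open(audit_log_path) as f:
        for line in f:
            event = json.loads(line)
            if event.get("scope") == "anyone_with_link":
                alerts.append(f"{event.get('item')}: open link created by {event.get('actor')}")
            if event.get("grantee_type") == "guest":
                granted = datetime.fromisoformat(event["granted_at"])  # naive UTC assumed
                if granted < cutoff:
                    alerts.append(f"{event.get('item')}: guest access older than {GUEST_MAX_AGE_DAYS} days")
    return alerts
```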
Executives set the tone. If leaders habitually request “open access so I can take a look,” others follow suit. When leaders instead ask, “Can you give me access to the part I need?”, it reframes the expectation entirely.
Least privilege isn’t about saying no, it’s about defining yes. It draws a line between transparency and exposure, between collaboration and chaos. It’s the difference between sharing and leaking.
The future workplace isn’t one where everything is open. It’s one where openness is intentional, controlled, and reversible.
What’s your take? Are your collaboration tools set to share safely, or still built for convenience first?
Let’s share the good, the bad and the messy middle of securing access without slowing teams down.
2025-10-13
Vodafone confirms a major UK outage as mobile and broadband users report widespread disruption across the country.
Image credit: Created for TheCIO.uk by ChatGPT
Vodafone has confirmed a major network outage affecting broadband and mobile data across the UK. The company says engineers are working urgently to restore service after reports spiked from around 14:30 BST.
More than 130,000 users reported problems within the first hour, according to outage trackers. Most reports relate to home broadband and mobile data, with coverage gaps seen across London, Birmingham, Manchester, Cardiff and Glasgow. Some users also noted intermittent call failures and access issues to Vodafone’s own website.
The disruption extends to VOXI, Lebara, Asda Mobile and Talkmobile — smaller mobile brands that rely on Vodafone’s core network.
While Vodafone has yet to confirm a cause, early data points to a network routing or DNS failure. Independent monitoring shows certain Vodafone DNS servers and peering routes dropping from public visibility, which would explain the widespread connectivity loss and unstable app access.
Under Ofcom’s automatic compensation scheme, fixed broadband customers may be eligible for payments if the outage extends beyond two full working days. Mobile services are not covered under this policy.
Vodafone said in a statement:
“We’re aware of an issue affecting some mobile and broadband customers. Our engineers are investigating as a priority and we’ll share updates as soon as possible.”
The outage remains active at the time of writing. This page will update as Vodafone provides further information.
Are your systems affected?
Share how your organisation is routing around the Vodafone outage — what’s working, and what lessons are emerging?
2025-10-13
Multi-factor authentication is the simplest, strongest defence against account compromise. If it’s not everywhere, it’s not enough.
Image credit: Created for TheCIO.uk by ChatGPT
Every breach story starts the same way: a stolen password, a reused credential or a missed warning. The simplest fix remains the most effective: multi-factor authentication (MFA).
If MFA isn’t everywhere, it’s not doing its job. It closes the easiest door attackers use and turns stolen passwords into useless data. Yet many organisations still treat MFA as optional or limited to high-risk systems.
This month is a reminder that MFA needs to be the baseline. Apply it across accounts, platforms and remote access tools. Make it default on every new system. Remove the option to skip setup. When it’s universal, it becomes invisible, just part of how work starts.
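Universal only counts if you can measure it. A minimal sketch, assuming your identity platform can export accounts to CSV with an mfa_enrolled column, is enough to put a number on the gap each week:

```python
import csv

def mfa_gaps(export_path: str) -> list[str]:
    """Return accounts without MFA from a hypothetical identity export."""
    gaps = []
    with open(export_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["mfa_enrolled"].strip().lower() != "yes":
                gaps.append(row["username"])
    return gaps

if __name__ == "__main__":
    missing = mfa_gaps("identity_export.csv")
    print(f"{len(missing)} accounts still without MFA")
    for name in missing:
        print(f" - {name}")
```

Track the count at each review; the goal is zero, with every exception named and owned.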
MFA doesn’t solve everything, but it forces attackers to work harder. It buys time. It stops opportunistic breaches before they start. In security terms, that’s a win you can measure.
What’s your take? Has your organisation made MFA universal yet — or are exceptions still the rule?
Let’s share the good, the bad and the messy middle of securing access by default.
2025-10-11
From CEO messages to visible habits, week one of Cyber Awareness Month showed that leadership sets the tone. Awareness starts with example, not instruction.
Image credit: Created for TheCIO.uk by ChatGPT
The first week of Cyber Awareness Month focused on leadership, visibility and tone from the top. Across organisations, IT and security teams reminded leaders that awareness isn’t a memo — it’s a behaviour.
It began with the CEO message, setting the agenda for the month ahead. The strongest examples weren’t about policy; they were about priorities. Clear direction from the top made it easier for teams to see where security fits into everyday work.
Next came modelling behaviour over instruction. Leaders who showed how they challenge a suspicious payment or verify a change request did more for culture than another training module. Demonstration beat direction. Staff watched, learned and copied.
Managers were also central. Line managers turned strategy into daily action. When they brought awareness topics into team huddles, culture started to embed itself. The missing link between policy and practice was finally visible.
Then came the lessons from real incidents. Teams shared anonymised examples of what nearly went wrong and how quick action prevented a breach. Real stories connected risk to reality and made policy personal.
The week closed by celebrating stories instead of slogans. Organisations recognised the people who noticed something, spoke up, or stopped a threat early. Recognition turned awareness into pride rather than pressure.
The lesson from Week One is simple: awareness grows where leadership attention goes. When executives model caution, managers repeat it, and teams follow. Culture starts at the top, but it spreads from the middle.
What’s your take? What worked best in your organisation’s first week of Cyber Awareness Month?
Let’s share the good, the bad and the messy middle of turning awareness into leadership practice.
2025-10-10
Nominet suspends a hijacked domain printed in Andrew Cope’s Spy Dog books after it began serving explicit content. Puffin pauses sales and schools pull copies.
A website address printed inside several editions of Andrew Cope’s Spy Dog, Spy Cat and Spy Pups children’s books has been suspended after it was found to host explicit material. UK registry operator Nominet confirmed the takedown, citing a breach of terms and a failure to implement suitable age verification required under the Online Safety Act. Puffin has paused sales and distribution of the affected books and schools have issued safeguarding alerts.
The link originally pointed to bonus content for readers but the underlying domain lapsed and was later acquired by an unrelated third party who replaced the content with adult material. Schools and local authorities urged parents to remove the books from homes and return borrowed copies, while Puffin and the author asked the public not to visit the site.
This is a classic domain expiration risk. The harm here was amplified by print. A printed URL has a long shelf life, is trusted by children and parents, and cannot be hotfixed. Legal and compliance dimensions are evolving too. Under the UK’s Online Safety Act, services that make pornographic content available to UK users must prevent children from accessing it, typically through effective age assurance. That obligation is now live and enforceable.
Printed links and QR codes are long lived. If you publish books, reports, packaging, manuals or classroom resources, a single expired domain can become a reputational and safeguarding incident years later. The same risk exists in corporate contexts where old campaign URLs are printed on product boxes or equipment labels. The regulatory bar on age assurance has also moved. If you operate or procure services that could be classified as adult or user generated content, you now have to evidence age checks and remove access if controls are inadequate.
Block and brief
Ensure filters block the suspended domain and obvious variants on school and corporate networks. Issue a parent safe notice that avoids clickable links.
Audit printed links
Create a register of all printed URLs and QR codes across books, worksheets, packaging, manuals and PDFs. Record owner, registrar and expiry date; a lightweight script like the sketch after this list can turn the register into renewal alerts.
Protect domains properly
Enable auto renew, extend registrations to 5 to 10 years for anything in print, turn on registry lock for flagship domains, and require MFA with role based contact emails.
Standardise on a controlled short domain
Publish all printed links through a short domain you own and manage so destinations can be updated or killed without changing the print.
Monitor and prepare
Set expiry and DNS change alerts, watch for lookalikes, and keep a takedown and comms playbook ready. Review any exposure that might require age assurance.
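To make the register actionable, a small script can raise renewal alerts long before a print-referenced domain lapses. This sketch assumes a CSV register with domain, owner, registrar and expiry columns and flags anything within a year of expiry; tune the window to how long your print actually stays in circulation.

```python
import csv
from datetime import date, datetime, timedelta

RENEW_WINDOW = timedelta(days=365)  # assumption: renew print-referenced domains a year out

def expiring_domains(register_path: str) -> list[str]:
    due = []
    today = date.today()
    with open(register_path, newline="") as f:
        for row in csv.DictReader(f):
            expiry = datetime.strptime(row["expiry"], "%Y-%m-%d").date()
            if expiry - today <= RENEW_WINDOW:
                due.append(f"{row['domain']} (owner: {row['owner']}) expires {expiry}")
    return due

if __name__ == "__main__":
    for alert in expiring_domains("printed_links.csv"):
        print("RENEW:", alert)
```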
This incident fits a wider pattern of opportunistic domain capture and link rot, now colliding with strengthened child safety rules. The combination raises the cost of letting domains lapse and increases the need for governance around how printed links are created, renewed and retired. In short, if it goes into print, it must be managed for as long as the print exists.
What is your take? Have you already moved printed links to a controlled short domain with auto renew and registry lock, or is this the nudge to do it now?
2025-10-10
Cyber awareness sticks when we celebrate the people who got it right. Real stories of quick thinking and early reporting shape culture more than any campaign slogan.
Image credit: Created for TheCIO.uk by ChatGPT
Every organisation has people who stop incidents before they start. The finance assistant who double-checked a change request. The engineer who spotted an unusual login. The manager who reminded their team to verify a link before clicking. Yet too often, those actions go unnoticed.
Cyber awareness culture grows faster when we celebrate those wins. Staff remember stories about real people, not slogans on posters. Recognition reinforces that good security behaviour is valued and visible.
When you highlight an example of quick reporting or careful action, you do two things at once. You thank the person who acted, and you show everyone else what “right” looks like. Culture moves where attention flows. What leaders praise, people repeat.
The best stories are specific and short. Focus on what happened, what was noticed, and what it prevented. Avoid technical detail that only a specialist understands. Instead, make it relatable: someone noticed something off, paused, and checked before it became a problem.
Share these stories across internal channels, team meetings and company updates. Treat them like success metrics, not anecdotes. Each one is a signal that awareness is working.
Campaign slogans fade. Stories stay. The more you celebrate behaviour that prevents risk, the more that behaviour spreads. Awareness becomes recognition, and recognition becomes routine.
What’s your take? Does your organisation celebrate the people who caught issues early — or only the ones who fix them later?
Let’s share the good, the bad and the messy middle of recognising the right security habits.
2025-10-09
The most effective awareness training doesn’t come from theory but from the real events that nearly went wrong, and the lessons they leave behind.
Image credit: Created for TheCIO.uk by ChatGPT
Most organisations have a list of near misses that never make the headlines. The payment change that was caught just in time. The lost laptop recovered the next morning. The phishing message reported before it spread. Each one contains a better lesson than any training slide.
Real incidents show how risk feels when it happens: not as a policy checklist, but as a chain of quick decisions, distractions and assumptions. They reveal where controls are strong, where pressure breaks process, and where communication gaps turn small errors into real exposure.
Telling these stories inside the organisation is one of the most powerful awareness tools available. When people see how colleagues spotted an attack or recovered from an error, they start to understand what matters and why. It builds shared ownership of security rather than fear of failure.
The key is how you tell it. Strip out blame. Focus on the decisions that made a difference. Use real timelines, real messages, and the exact moments where a pause or question changed the outcome. Authenticity drives retention. Staff remember what actually happened, not what could have.
Security teams often hesitate to share detail, but transparency pays back in better vigilance. The more real your examples, the faster behaviour improves. It’s not about pointing fingers, it’s about showing the system working.
Every incident is an unplanned fire drill. Treated well, it becomes part of your culture rather than a moment of embarrassment. That’s what resilience looks like in practice: learning before the next one arrives.
What’s your take? How does your organisation turn near misses into lessons worth sharing?
Let’s share the good, the bad and the messy middle of learning from what really happens.
2025-10-08
Asahi and Jaguar Land Rover are recovering from crippling cyberattacks. Their different paths back to production reveal shared lessons in resilience, leadership and the new realities of industrial cyber risk.
Image credit: Created for TheCIO.uk by ChatGPT
When Japan’s favourite beer vanished from shelves and Britain’s best-known luxury cars stopped rolling off production lines, it became clear that cyberattacks are no longer confined to data loss or office IT.
In late September, Asahi Group and Jaguar Land Rover were each forced to shut down core operations — one brewing, the other automotive — as ransomware groups disrupted systems that keep supply chains moving.
Now both are clawing their way back. Their experiences expose the fragility of industrial technology and the growing leadership challenge of turning cyber resilience into business survival.
On 29 September, Asahi Group Holdings confirmed a cyberattack that paralysed its internal networks, hitting order, shipment and customer support systems.
The impact was swift. Brewing operations halted at most of Asahi’s 30 plants. Supermarket shelves and vending machines across Japan began to run dry of Asahi Super Dry, the country’s top-selling beer.
By early October, the company began partial recovery — reverting to manual order processing and faxed invoices while engineers rebuilt damaged systems.
A ransomware group calling itself Qilin claimed responsibility, boasting of stealing 27 GB of data across 9,000 files. The company confirmed unauthorised access but has yet to verify what was taken.
For Japan’s hospitality sector, the outage was more than an inconvenience. A week without deliveries rippled through bars, restaurants and convenience stores, reminding executives that operational technology (OT) is every bit as exposed as corporate IT.
Across the globe, Jaguar Land Rover (JLR) faced a longer ordeal. A cyber incident reported at the end of August forced the company to halt production at key UK sites, including Solihull and Wolverhampton.
Engine manufacturing, body and paint shops, and logistics systems were all offline. The company’s carefully balanced just-in-time model turned fragile overnight.
Initial statements described the impact as “severe”. Thousands of employees were sent home, suppliers paused deliveries, and vehicle production stalled.
As investigations progressed, JLR confirmed that some data had been compromised but stopped short of attributing the attack to a specific group.
Restarting such a complex operation is far more difficult than rebooting servers. JLR had to verify each line of code and machine interface before allowing production to resume.
By 7 October, the company announced a phased relaunch of engine and parts production, with final assembly expected to follow within days. Around 33,000 UK staff are now gradually returning to work.
To support smaller suppliers strained by the shutdown, JLR introduced accelerated payment schemes, and the UK government approved a £1.5 billion loan guarantee to protect the wider supply chain.
While Asahi’s recovery took days and JLR’s spanned weeks, both cases underline similar truths: operational technology is as exposed as corporate IT, recovery is an organisational discipline, and disruption spreads quickly through the supply chain.
These patterns will define the next generation of industrial resilience strategies.
Beer is brewing again, and cars are once more rolling off the line. But beneath the optimism lies a deeper shift: two of the world’s most recognisable manufacturers are proving that recovery is an organisational skill, not just a technical one.
For Asahi, agility meant reverting to analogue systems while digital recovery took place. Orders were handwritten, deliveries coordinated by phone, and stock monitored manually.
For JLR, the priority was precision — bringing each plant back under a “controlled restart” to ensure safety, quality, and security before speed.
Both confronted the same balancing act: when to switch from containment to continuity.
Too soon, and attackers could re-enter. Too late, and the business bleeds cash, market share and trust.
Out of that struggle, a shared recovery blueprint emerges: contain the threat first, restore in controlled phases, and support staff and suppliers while production returns.
These incidents mark a turning point for industrial firms. Cyber resilience can no longer sit inside a security team’s risk register — it belongs at the centre of corporate strategy.
The Asahi and JLR attacks exposed how digital disruption instantly becomes a production, finance and reputation crisis.
Recovery at this scale depends on coordination between IT, operations, communications and finance.
That coordination only works when leaders understand how their technology stack supports — and can break — physical output.
As both firms return to full operation, they are likely to trigger wider change across their sectors.
The Asahi and JLR recoveries offer a preview of what resilience looks like under pressure.
For IT leaders across industries, the lessons stand out: rehearse recovery before you need it, know which suppliers and systems your output depends on, and treat resilience as a leadership duty rather than a purely technical one.
Both companies are now back in business — but the wake-up call remains. Industrial resilience is not built in crisis; it is built in preparation.
What’s your take?
Do you see your organisation rehearsing recovery as actively as it builds defence?
Let’s share the good, the bad and the messy middle of building operational resilience.
2025-10-08
Awareness spreads through line managers faster than corporate comms. They turn policy into practice and set the daily tone for how teams handle risk.
Image credit: Created for TheCIO.uk by ChatGPT
Most awareness campaigns aim straight at staff. Emails, posters and eLearning modules target the individual. But the real amplifier of behaviour sits between the message and the front line: line managers.
Managers translate strategy into daily action. They decide what gets rushed, what gets reviewed, and what gets rewarded. If they reinforce the right security habits in team meetings and model them themselves, culture moves quickly. If they ignore them, awareness fades by Friday.
Managers are the point where intentions meet workload. They know when a project deadline collides with a process requirement. They hear the excuses, see the shortcuts, and understand when policies are too heavy to follow. That perspective makes them the best advocates for secure-by-design workflows, if they are included and equipped.
The problem is that many awareness programmes bypass them. Security teams talk directly to all staff, but not through the people who shape team routines. That’s a missed opportunity. A short monthly manager briefing, with one real scenario and one clear message to reinforce, does more than a dozen email nudges.
Empowering managers turns awareness into something self-sustaining. It normalises a conversation about risk that fits the pace of real work. It means a new starter learns safe habits from their line manager on day one, not from a course they click through later.
If your organisation treats cyber awareness as a leadership responsibility, line managers are the missing link between vision and behaviour.
What’s your take? Do your line managers have the tools and confidence to reinforce cyber awareness in their teams?
Let’s share the good, the bad and the messy middle of building culture from the ground up.
2025-10-07
Cyber awareness works best when leaders show what good looks like. Demonstration beats direction, and example beats enforcement.
Image credit: Created for TheCIO.uk by ChatGPT
Cyber awareness training often assumes that knowledge drives change. Send a message, post a tip, and behaviour will follow. But real change rarely comes from instruction alone. It comes from imitation.
People copy what they see, not what they are told. When senior leaders show how they handle risk, the message lands faster and sticks longer than any campaign slogan. A short clip of a finance director challenging a fake payment request, or an operations lead verifying a suspicious email, does more for culture than another policy document. It turns the abstract into the practical.
Leaders also set the tone and tempo for response. If the CFO takes thirty seconds to double-check a bank change and explains that process out loud, others will copy it. If a manager reports a phishing email straight away, the team learns that quick reporting is valued. Culture moves by example, not decree.
Showing fallibility helps too. When executives talk openly about their own near misses, it builds psychological safety around speaking up early. Staff stop worrying about blame and start focusing on prevention.
The best awareness programmes aren’t communications projects, they’re leadership habits on display. Cyber resilience is contagious when people can see what good looks like.
What’s your take? Have you seen a leader model the right cyber behaviour in a way that changed your team’s habits?
Let’s share the good, the bad and the messy middle of leading by example.
2025-10-06
The most powerful awareness message comes not from posters or eLearning, but from the CEO speaking plainly about risk and what staff must do.
Image credit: Created for TheCIO.uk by ChatGPT
Cyber Awareness Month always risks becoming a theatre of posters and slogans. Yet the most powerful signal does not come from design work or eLearning modules. It comes from the chief executive.
When the CEO talks plainly about risk, people listen. Not because they are suddenly fascinated by phishing techniques, but because the message is tied to what matters most: protecting revenue, protecting customers, protecting jobs. The words of a leader set the tone for how seriously a company takes resilience.
That is why today's focus is not a technical control, but a leadership act. A short, specific message from the CEO should highlight the top three risks the business faces and the single action that staff should take in each case.
It does not need corporate language. In fact, the plainer the better. A direct note, signed by the CEO, shows that awareness is not just an IT exercise but a business priority. It is the difference between compliance theatre and cultural shift.
Cyber awareness begins at the top. If staff see the CEO modelling caution and giving attention to risk, they will understand it is part of how the business runs, not an optional add-on.
What’s your take? Does your CEO speak directly to staff about cyber risks, or is the message delegated down the chain?
Let’s share the good, the bad and the messy middle of leadership voice in awareness culture.
2025-10-05
A supplier breach that exposed customer details at Renault Group UK is a reminder that modern attacks often land one step removed from your own network. Here is how to measure, manage and reduce supply chain cyber risk in practical terms that boards, legal teams and engineers can act on today.
Image credit: Created for TheCIO.uk by ChatGPT
Context: This analysis follows our brief news report: Renault Group UK warns customers after third party cyber attack.
Renault Group UK warned customers that a third party data processor had been attacked, with some personal and vehicle details potentially affected. The company said its own systems were not compromised and that passwords and payment information were not believed to be involved. On the face of it that is a narrow incident. In reality it reads like a blueprint for how most modern breaches unfold.
This is not isolated. In June 2023 the MOVEit software flaw hit UK payroll provider Zellis, exposing staff data at brands including the BBC, British Airways and Boots. The mechanism was classic third party concentration. One supplier. Many household names downstream. The breach may sit outside your estate yet the public will hear it as your story because they know your brand and do not know your vendors.
Criminals go where the controls are lighter and the data is concentrated. Supplier platforms and brokers often hold data for many clients at once. A single compromise can create a long tail of exposure across brands that consumers know, trust and contact directly. That is why incidents that begin outside your perimeter become your problem within hours. Your logo will be on the customer email. Your call centre will carry the load. Your regulator will expect a joined up response.
This piece uses the Renault notification as a jumping off point to examine supply chain security for UK organisations. The goal is not to point fingers. The goal is to help leaders translate a vendor incident into concrete action on governance, contracts, architecture and day to day operations.
Traditional risk thinking is still shaped by neat boundaries. Our network. Our devices. Our data centre. In practice, most organisations now run a web of shared services that moves data across legal entities and jurisdictions. That web changes risk in three ways.
First, concentration. Data processors, marketing platforms, payment gateways, vehicle connectivity platforms, telematics and finance brokers aggregate many clients into one technical estate. A single foothold can yield very broad pickings.
Second, opacity. You can audit your own servers and code. You have less visibility into a supplier’s architecture, patching cadence and detection capability. You have even less visibility into their own sub processors. The attack surface becomes nested and the chain of custody gets murky when the clock is ticking.
Third, accountability. Customers, journalists and regulators will come to you first. The attacker will use your brand details to craft convincing lures. The contract will matter, but reputational gravity means you carry the narrative.
The Renault notification described a data processor breach that touched personal and vehicle details, while stating that Renault’s own systems were not compromised. That split matters. It is helpful to separate two broad outcomes when you design controls and exercises.
Data exposure drives privacy risk, fraud and a long period of phishing and social engineering. The immediate response is about containment, customer communication, and hardening of fraud controls. The long tail is about credit file monitoring, minimisation of exposed data fields and anything that closes the door on re use of leaked details.
Operational disruption is different. In that world you are dealing with ransomware on a supplier’s platform that your business depends on for daily service. Your ordering system stalls. Your dealer network cannot access parts. Your finance portal cannot create agreements. The focus is continuity, manual workarounds, alternative routes and a plan to restart safely without re importing the attacker’s foothold.
UK healthcare offered a stark example in June 2024. The Synnovis ransomware attack on a pathology supplier forced London trusts to cancel operations and appointments while mutual aid was stood up. That was a supplier outage that became a patient care problem. In the automotive world, CDK Global’s ransomware incident in June 2024 forced thousands of North American dealers to fall back to paper based processes for days. Different sector. Same pattern. Provider fails. Retailers and customers feel it.
Most incidents show a blend of both. Your playbooks and contracts need to recognise the difference and bridge the gap.
The perimeter is no longer a meaningful boundary. Even where you have a strong internal control set, the supplier that processes your customer journeys or your fleet data may run a very different stack. You may have no multi factor on their support portal. You may have no conditional access rules for their contractors. They may have a different approach to patching or a backlog that gives attackers time.
An identity provider case study underlines the point. In 2023 Okta confirmed that its customer support system had been compromised. Attackers accessed support files that in some cases included tokens which were then reused. Okta later said all support system customers were affected by the data exposure. This was not a breach of customer tenants, but it shows how a supplier side system can create risk for many clients in one move.
None of this is written to excuse or shame. It is written to focus investment where it moves the dial. The fastest way to reduce external risk is to minimise what a supplier can see and to limit how far an attacker can pivot if that supplier is breached.
If a vendor is compromised tonight, what can be taken or disrupted in the first hour? In the first day? In the first week? This is the discipline of blast radius mapping. It is unglamorous and it is priceless when the phone rings.
Work from your customer journeys and critical processes backwards. For each supplier, write down in plain English what data they hold, what access they have into your estate, and what stops working if they go down.
The output is not a spreadsheet for the drawer. It is a short dossier per supplier that your legal team can tie into the contract, your technical teams can enforce in configuration, and your communications team can use when they get their first media query.
Production incidents prove the value. In February 2022 Toyota suspended production across all 14 plants in Japan after supplier Kojima Industries suffered a suspected cyber attack. One supplier failure stalled a national manufacturing footprint. That is the definition of blast radius.
Procurement has long lists. Legal has long clauses. What matters in a breach is surprisingly simple.
Keep the language crisp. Avoid optimism in place of guarantees. If a supplier cannot agree the basics, you have learned something valuable before the crisis arrives.
Contracts are the start. Architecture is the finish. Here are controls that reduce risk without waiting for a supplier to change their stack.
Least privilege for data feeds
Trim feeds to the minimum fields needed for the job. Replace full postcodes with outward codes where possible. Replace dates of birth with age bands where a marketing service only needs a segment. Tokenise vehicle identifiers where possible and keep the token map inside your boundary.
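Tokenisation of this kind is straightforward to sketch. The Python below derives a stable token from a VIN with an HMAC so the supplier never sees the real identifier, while the lookup map stays inside your boundary; the key handling is simplified for illustration and belongs in a vault in practice.

```python
import hashlib
import hmac

SECRET_KEY = b"keep-this-inside-your-boundary"  # assumption: really lives in your vault
TOKEN_MAP: dict[str, str] = {}                  # token -> real VIN, never shared

def tokenise_vin(vin: str) -> str:
    """Derive a stable token the supplier cannot reverse; keep the map internally."""
    token = hmac.new(SECRET_KEY, vin.encode(), hashlib.sha256).hexdigest()[:16]
    TOKEN_MAP[token] = vin
    return token

def resolve(token: str) -> str:
    """Look up the real VIN when a supplier response comes back inside your boundary."""
    return TOKEN_MAP[token]
```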
Broker data with controlled interfaces
Stand a gateway between your core systems and third parties. Issue expiring credentials. Put anomaly detection on outbound data volumes. Enforce schema validation to reduce the chance that excess data leaks into a supplier by mistake.
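At its simplest, the gateway's minimum-fields rule is an allowlist applied to every outbound record. A sketch, with the per-supplier field sets as assumptions you would agree in the contract:

```python
ALLOWED_FIELDS = {
    "marketing-platform": {"customer_id", "age_band", "outward_postcode"},
}  # assumption: per-supplier allowlists agreed with legal and the supplier

def filter_outbound(supplier: str, record: dict) -> dict:
    """Strip a record to the agreed fields and flag anything extra that tried to leave."""
    allowed = ALLOWED_FIELDS[supplier]
    extra = set(record) - allowed
    if extra:
        print(f"ALERT: {supplier} feed attempted to send {sorted(extra)}")  # route to your SIEM
    return {key: value for key, value in record.items() if key in allowed}
```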
Network segregation and brokered access
If a supplier has remote access into your estate, terminate that access in a segregated zone with recorded sessions, command filtering and just in time elevation. Disable always on admin accounts. Turn access into a ticketed workflow that expires.
Secrets hygiene
Rotate API keys on a schedule you control. Avoid long lived credentials. Use scopes that expose only a narrow slice of functionality. Monitor for key abuse and spike detection rather than waiting for daily reports.
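The same idea fits in a few lines: issue credentials that expire on your schedule and carry only narrow scopes. This sketch keeps state in memory for illustration; a real service would back it with your secrets store.

```python
import secrets
import time

ISSUED: dict[str, tuple[float, set[str]]] = {}  # token -> (expiry, scopes)
TTL_SECONDS = 3600                              # assumption: hourly rotation

def issue_token(scopes: set[str]) -> str:
    token = secrets.token_urlsafe(32)
    ISSUED[token] = (time.time() + TTL_SECONDS, scopes)
    return token

def authorise(token: str, scope: str) -> bool:
    """A key is only good if it is current and covers the narrow slice requested."""
    expiry, scopes = ISSUED.get(token, (0.0, set()))
    return time.time() < expiry and scope in scopes
```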
Cloud concentration and contractor risk
The 2024 Snowflake customer incidents highlighted how stolen credentials from contractor or employee endpoints can be reused at scale when multi factor controls are missing. Mandiant reported hundreds of exposed customer credentials harvested by infostealer malware, with multiple well known brands impacted. Controls like mandatory MFA on third party access, short lived tokens and contractor device hygiene blunt this class of attack.
Fraud and comms readiness
If customer data fields are exposed, the first follow on risk is targeted fraud. Pre wire your comms and CRM to flag contacts who were in the affected dataset. Add banners to their records for a set period. Challenge changes to contact details or bank details with out of band verification. Train support teams on the scenario script in advance.
Immutable backups for supplier hosted workloads
Where suppliers run workloads on your behalf within your cloud, enforce immutable snapshots and separate credentials for restore. Ransomware thrives on shared admin roles. Separation denies that path.
When a third party calls you with bad news, your first instinct will be to start an internal hunt. Do it, and do not stop there. You need joint analysis.
Set up a secure channel to share indicators in both directions. Agree a reference clock for timeline building. Clarify who is driving the forensic investigation for each affected system. Assign one person to document decisions. Assign one person to manage stakeholder updates. The biggest mistake in supplier incidents is to let ten smart people tackle the same problem while no one owns the narrative.
Run daily stand ups with technical, legal and communications in the room. Publish a single source of truth for internal teams. Give your customer facing staff a clear holding line that sets expectations without promising facts you do not yet have. People forgive a careful statement. They do not forgive a confident statement that turns out to be wrong.
In the UK your obligations depend on the nature of the data, your role as controller, and the risk to individuals. You will work with your legal team on breach assessment and notifications. What helps legal the most is timely facts. Which fields were in the affected feed. Which date ranges are in scope. How many individuals are involved. Whether you have evidence of misuse. That is why the blast radius dossier matters. It gives your lawyers the building blocks to make good calls on notification and customer messaging.
If you operate across the UK and EU you may find overlapping duties. Keep your regulatory communication simple, factual and consistent. Avoid speculation. Commit to updates at a predictable cadence.
Every sector has its quirks. Automotive is complex because the customer experience crosses brands, dealers, finance partners, telematics and insurers. The dataset is unusually rich. Vehicle registration, VIN, service history, warranty status, finance status and contact details can be combined to create very convincing lures.
A vendor data exposure at scale illustrates the point. In 2021 Volkswagen Group of America disclosed that a vendor’s unsecured database exposed information on more than 3.3 million people, the majority Audi customers or prospects. The breach originated at the supplier, but the communications burden and reputational risk sat with the marques consumers recognise.
That means the practical defences carry extra weight. If you run an automotive business, keep a standing watch for scams that reference real vehicles, real service events and real finance products. Teach frontline teams to treat those specifics as a risk signal, not as a sign of legitimacy.
Myth one: we are fine because our systems are secure.
Supply chain incidents start outside your systems. Your controls still matter, but they are not the whole story.
Myth two: our contract will save us.
Contracts are necessary. They do not handle a live press inquiry, a customer queue or a determined attacker. Practise the human parts.
Myth three: we cannot influence a supplier’s security.
You can. Limit the data you send. Limit the access they hold. Make good security a condition of doing business. Choose the right partners.
Myth four: if passwords and cards are safe, the risk is low.
Names, dates of birth, contact details, registrations and VINs are powerful ingredients for fraud. Treat them seriously.
This is a plan you can run without new headcount. It will not solve everything. It will measurably lower risk and raise your readiness.
Days 1 to 30
List your top twenty suppliers by operational impact and by personal data volume. For each one, complete the blast radius dossier. Trim data feeds to the minimum required fields. Put time boxed credentials behind your supplier access. Stand up a joint incident channel template and test it with a tabletop exercise that includes legal and comms.
Days 31 to 60
Update contracts for those top suppliers with clear notification and cooperation clauses. Move supplier credentials to just in time workflows. Add anomaly alerts to outbound data volumes. Bake a fraud response playbook into CRM with flags and scripts for exposed cohorts.
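The outbound anomaly alert can start as a simple baseline comparison before you reach for a full analytics platform. A sketch, with the window and threshold as assumptions to tune against your own feeds:

```python
from collections import deque

WINDOW = 7        # days of history to average over
THRESHOLD = 3.0   # assumption: alert when volume triples against the recent norm

history: deque[int] = deque(maxlen=WINDOW)

def check_outbound_volume(records_sent_today: int) -> bool:
    """Flag today's feed volume if it spikes against the rolling average."""
    anomalous = False
    if history:
        baseline = sum(history) / len(history)
        anomalous = baseline > 0 and records_sent_today > THRESHOLD * baseline
    history.append(records_sent_today)
    return anomalous
```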
Days 61 to 90
Run a second exercise that starts at the supplier and lands on your customers. Publish a short internal guide on how to recognise and report third party incidents. Create a management dashboard with three measures that matter: number of high impact suppliers with complete dossiers, percentage of feeds that carry only minimum fields, and percentage of live supplier access routes that are time bound.
If a supplier called you right now to say they had an incident, you would want a short, concrete list of questions and actions ready. Build that list now and adapt it to your world.
Boards hear about supply chain risk all the time. What they need from you is clarity that feels actionable.
If you frame the conversation in terms of concrete exposure and measurable readiness, investment questions become easier. You can propose a small budget for data brokering, access controls and exercises, and tie it directly to reduced exposure.
The Renault alert is the pattern, not the outlier. A supplier holds a slice of customer data. An attacker finds a route in. The brand never loses control of its core estate, yet it still faces a customer impact. That is the normal shape of cyber risk now. You cannot remove it. You can make it smaller. You can make it shorter lived. You can be ready to explain it with plain facts.
The real differentiator is speed and tone. Customers accept that attacks happen. They expect early notice, straight language and clear steps to protect themselves. They expect the brand they deal with to take responsibility for the relationship, even when a third party sits in the middle. If you get that right, an incident does not have to become a crisis.
What is your take? Where have third party risks surprised you this year?
Let’s share the good, the bad and the messy middle. The comments will help others avoid dead ends and discover what works.
2025-10-05
AI is speeding up social engineering, deepfakes are turning controls into losses, and quantum migration has moved from theory to timetable. Here is what has changed, what matters, and what to do now.
Image credit: Created for TheCIO.uk by ChatGPT
The cyber threat picture has changed again. Not because criminals found a new class of zero day across every platform, but because they supercharged what already works. Generative AI lowers the cost of persuasion at scale. Deepfakes have moved from headline novelty to ledger entries. A very different risk develops in the background. Quantum computing has not broken today’s cryptography yet, but the replacement standards are final and migration is now a programme, not a thought exercise. None of this is hype. All of it is playing out in British networks and boardrooms.
This article sets out what is new, what matters, and what to do next. It draws on guidance from the National Cyber Security Centre, Ofcom’s action on spoofed calls, UK Finance fraud data, public service announcements from US law enforcement, ENISA’s threat landscape, and NIST’s post quantum standards. The goal is practical steps for leaders, not slogans.
AI does not conjure new magic. It reduces time and skill needed to run proven crimes. The NCSC’s assessment of AI to 2027 makes the point in plain language. Expect faster, more convincing phishing, quicker exploitation of known flaws, and cheaper impersonation in voice and video. Lower barriers to entry and higher throughput change the economics in favour of criminals.
You see the same message in the NCSC’s guidance for non technical leaders. Generative models improve the quality and volume of persuasion. The copy reads like a native speaker. The voice on the phone sounds like your finance director. A video meeting can show what looks like colleagues who nod along to an urgent request. Traditional checks suffer when the signals that people trust are easy to fake.
Fraud remains a stubborn constant. Losses to unauthorised and authorised scams in the UK have hovered around the billion pound mark for two years. Even as banks improve prevention, case volumes keep rising. Most authorised push payment cases begin online. A significant share begins on telecoms. This is the surface where AI persuasion is most effective.
If you still file deepfakes under future risk, revisit the well publicised case of a multinational engineering firm that was duped by a deepfake video call purporting to be senior colleagues. Multiple transfers went through. Losses ran to tens of millions in local currency. This is not a tabletop exercise. It is a completed theft at a reputable firm, and it has reset assumptions for finance and treasury teams around the world.
Public guidance has kept pace. Law enforcement has warned that criminals use generative AI to expand fraud at scale and to impersonate senior figures through text, voice and video. The message to boards is simple. Treat voice and video as untrusted inputs unless you add independent verification.
British regulators have moved on caller ID spoofing. Ofcom now expects providers to block more international calls that present as UK numbers. That reduces background noise for call handling teams and raises the cost for criminals, although it does not remove the need for strong identity and challenge procedures inside your organisation.
The countermeasure is not a single tool. It is a redesign of everyday decisions that move money or expose data.
Make payments boring again. Require dual control for new payees, revived dormant payees and any change to bank details. Build a timed delay for approvals so no one is forced into a snap decision. Enforce an out of band check on a number taken from your directory, not the email. These are simple habits that break the deepfake kill chain and align with bank expectations for authorised push payments. Measure how often staff challenge and block. Then celebrate the pause.
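Those habits can be encoded so the system, not the individual, enforces the pause. A minimal sketch of the release conditions, with the four-hour delay as an assumption to set against your own risk appetite:

```python
import time
from dataclasses import dataclass, field

APPROVAL_DELAY_SECONDS = 4 * 3600  # assumption: a four-hour cooling-off period

@dataclass
class PaymentChange:
    payee: str
    new_account: str
    requested_at: float = field(default_factory=time.time)
    approvers: set[str] = field(default_factory=set)
    callback_verified: bool = False  # out of band check on a directory number

    def approve(self, approver: str) -> None:
        self.approvers.add(approver)

    def releasable(self) -> bool:
        """Two distinct approvers, a verified call back, and the delay elapsed."""
        return (
            len(self.approvers) >= 2
            and self.callback_verified
            and time.time() - self.requested_at >= APPROVAL_DELAY_SECONDS
        )
```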
Move verification off channels that are easy to fake. Voice is not proof. Video is not proof. Use a separate contact route and a code word for sensitive approvals. Use hardware backed factors for account recovery and remote access, so a cloned voice cannot reset an identity. This is a mindset shift more than a technology project.
Treat executive calendars and meeting links as part of your attack surface. If a criminal knows when your CFO is in a board meeting, they can time a fake call that looks plausible to an assistant. Reduce public calendar detail, review who can see meeting links, and brief assistants on the call back habit before money moves. That is operations security for the age of synthetic media.
There is a second class of risk inside our own systems. Many organisations are piloting or deploying AI assistants and copilots. The attack surface moves from phishing in and malware out to content in and actions out. That brings prompt injection, insecure output handling, model denial of service and supply chain risks into everyday engineering. The OWASP Top 10 for LLM applications sets out these issues and gives a shared language for due diligence. If you build or buy AI enabled products, ask vendors how they mitigate these specific risks and insist on evidence.
Public bodies have converged on secure deployment patterns. Cyber security agencies have published guidance for deploying externally developed AI systems, and best practice on AI data security. The message is straightforward. Restrict what models can see and do. Validate outputs before any action. Protect training, fine tuning and retrieval data with the same care you apply to code. Log prompts and responses so you can investigate abuse.
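In sketch form, that guidance reduces to a wrapper that logs every exchange and refuses to pass on suspect output. The `model_call` argument stands in for whatever client you use, and the validation rules and intranet domain are illustrative assumptions.

```python
# Minimal sketch: log prompts and responses, validate output before use.
import json
import logging
import re
from datetime import datetime, timezone

audit = logging.getLogger("llm_audit")

# Crude checks for demonstration; real deployments need richer validation.
SUSPICIOUS = [
    re.compile(r"(?i)ignore previous instructions"),
    re.compile(r"https?://(?!intranet\.example\.com)"),  # unexpected outbound links
]

def guarded_call(model_call, prompt: str, user: str) -> str:
    response = model_call(prompt)
    audit.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": prompt,
        "response": response,
    }))  # a replayable trail makes abuse investigable
    if any(p.search(response) for p in SUSPICIOUS):
        raise ValueError("Output failed validation; route to human review")
    return response
```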
Why focus on data? Because poisoned or tampered knowledge creates unsafe behaviour. If your assistant learns from your wiki or a shared drive, a single malicious page or file can change how it answers. Treat prompts, policies and knowledge bases as change controlled assets. Require review for updates. Make rollback easy.
Keep an eye on cost of failure. Even a benign prompt injection can drive uncontrolled token use that creates a surprise bill. Model denial of service is a real risk. Add spend limits and alerts now. It is cheaper than explaining an invoice.
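A spend limit does not need a platform. A counter with an alert threshold is enough to start; the limits and the alert hook below are illustrative.

```python
# Minimal sketch: a daily token budget with an early-warning alert.
def alert(message: str) -> None:
    print(f"ALERT: {message}")  # stand-in for a real paging or chat hook

class TokenBudget:
    def __init__(self, daily_limit: int = 2_000_000, alert_at: float = 0.8):
        self.daily_limit = daily_limit  # illustrative ceiling
        self.alert_at = alert_at        # warn at 80 percent of budget
        self.used = 0

    def spend(self, tokens: int) -> None:
        self.used += tokens
        if self.used >= self.daily_limit:
            raise RuntimeError("Daily token budget exhausted; refusing further calls")
        if self.used >= self.alert_at * self.daily_limit:
            alert(f"Token spend at {self.used / self.daily_limit:.0%} of daily budget")
```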
Industry data shows that criminals continue to steal roughly the same total sums even as institutions block more attempts. The tactic shifts towards high volume and low value attacks that target e commerce flows and one time passcodes. That aligns with what many organisations are seeing in service desks and finance teams: more noise, better grammar, stronger social context, and less of the broken English that once served as a reliable tell.
Ofcom’s strengthened guidance on caller ID improves the baseline, but your own processes must carry the load. Assume the caller ID can be spoofed and the voice can be cloned. Build challenge response checks into service desk scripts. Train to them. Test them.
Europe’s network of incident responders sees a converging threat landscape. Groups reuse proven tooling and playbooks. ENISA’s threat landscape publications highlight the growth of AI supported social engineering and continued focus on ransomware, data theft and availability attacks. The pattern is familiar. The same criminal groups, a little faster and a little slicker, applying the same pressure across multiple sectors.
The practical read across is simple. Invest in controls that work at scale and under pressure. That means strong identity for staff and suppliers, robust patch and configuration hygiene at the edge, and a payment process that cannot be rushed by a video call. It also means supplier assurance that asks hard questions about AI attack surface, not just a generic security checklist. OWASP’s Top 10 for LLM applications is the right reference point for those conversations.
Quantum computing remains a research field today. It has not broken TLS or VPNs on the open internet. That is often used to justify waiting. It should not be. In 2024 the first post quantum cryptography standards were finalised for key establishment and signatures. Those standards are now shaping vendor roadmaps for identity, network and device platforms.
The NCSC has published a timetable for migration with clear expectations. Identify where you use public key cryptography and plan now. Prepare pilots and hybrid modes this decade. Complete migration by the mid 2030s. The important idea is crypto agility. You want to be able to change algorithms and parameters without ripping out whole systems.
There is another reason to start. Adversaries can steal encrypted data today and read it later. This is harvest now and decrypt later. If your secrets must stay secret for a decade or more, treat them as at risk now and design controls accordingly. That may include using post quantum and classical algorithms in hybrid, tightening data retention, and reducing where you store highly sensitive archives.
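The hybrid idea can be shown in a few lines: derive one session key from a classical exchange plus a post quantum secret, so an attacker must break both. The sketch below uses X25519 and HKDF from the widely available `cryptography` package; the post quantum shared secret is passed in as bytes because mainstream ML-KEM support is still arriving in common libraries.

```python
# Minimal sketch: combine classical and post quantum secrets into one key.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey,
    X25519PublicKey,
)
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def hybrid_session_key(peer_public: X25519PublicKey, pq_shared_secret: bytes) -> bytes:
    # Classical half: an ephemeral X25519 exchange with the peer.
    classical_secret = X25519PrivateKey.generate().exchange(peer_public)
    # Concatenate both secrets and derive one key; recovering it requires
    # breaking the classical scheme and the post quantum scheme.
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=b"hybrid-kem-sketch",
    ).derive(classical_secret + pq_shared_secret)
```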
Run a deepfake drill in finance. Simulate a video call from a senior executive that requests an urgent transfer or a change to bank details. Watch how staff apply dual control, out of band checks and timed delays. Fix gaps. Tie any changes to the expectations your bank will have if you need to report an authorised push payment case. This builds muscle memory and a clear message. There is no such thing as an emergency that bypasses controls.
Review caller authentication across service desks and contact centres. Assume the voice may be fake and the caller ID may be spoofed. Use challenge response checks that rely on account data points or registered device possession, not tone of voice or job title. Embed this in scripts and coach to it. Regulatory guidance supports the direction of travel but does not remove the need for internal challenge.
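In a service desk tool the check can be as plain as the sketch below, with illustrative field names. The point is that proof comes from account data and registered device possession, never from how the caller sounds.

```python
# Minimal sketch: challenge response checks that ignore voice and title.
def matches(field: str, account: dict, answers: dict) -> bool:
    expected = account.get(field)
    return expected is not None and answers.get(field) == expected

def verify_caller(account: dict, answers: dict, device_push_confirmed: bool) -> bool:
    checks = [
        matches("memorable_word", account, answers),
        matches("last_ticket_ref", account, answers),
        device_push_confirmed,  # possession of the registered device
    ]
    # Require at least two independent proofs before any sensitive action.
    return sum(checks) >= 2
```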
Put guardrails around every AI assistant that can see sensitive data or use tools. Grant least privilege. Restrict training and retrieval sources. Force human approval before any live write action. Instrument and log prompts and outputs. Align your internal standards and supplier assurance with the OWASP Top 10 for LLM applications so everyone speaks the same language.
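The same principle in miniature, with illustrative tool names: reads run freely, writes queue for a human, and anything off the allowlist is refused.

```python
# Minimal sketch: least privilege tool dispatch with human sign-off on writes.
READ_ONLY_TOOLS = {"search_docs", "summarise_ticket"}
WRITE_TOOLS = {"send_email", "update_record"}

approval_queue: list[dict] = []

def run_tool(tool: str, args: dict) -> str:
    return f"ran {tool}"  # placeholder for the real tool call

def dispatch(tool: str, args: dict) -> str:
    if tool in READ_ONLY_TOOLS:
        return run_tool(tool, args)         # safe to execute directly
    if tool in WRITE_TOOLS:
        approval_queue.append({"tool": tool, "args": args})
        return "Queued for human approval"  # no live write without sign-off
    raise PermissionError(f"Tool {tool!r} is not on the allowlist")
```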
Start a post quantum working group. Inventory where you use public key cryptography across identity, TLS termination, VPNs, device management, code signing and payments. Engage suppliers about roadmaps for post quantum key establishment and signatures. Set up a small test lane to evaluate hybrid modes as vendors ship support. This is discovery and design, not a big bang change. The national guidance provides the frame.
Make social engineering controls visible to the board. Report on how many payment requests were challenged and how many were stopped or corrected by dual control. Provide industry fraud data as context for why this matters and how the organisation compares. The aim is to normalise the pause. It shows that people follow process under pressure.
Expand incident response plans to include synthetic media. Your communications team needs prepared language that explains what happened, how deception worked and what will change. Your legal team needs a playbook for when manipulated clips of staff appear online. Your HR team needs guidance for supporting colleagues who become the face or the voice of a scam they did not commit.
Bake AI security into procurement. For every AI enabled product, ask vendors to evidence defences against the OWASP risks. Ask how they restrict model privileges. Ask how they validate outputs before actions. Ask how they monitor for prompt injection. Ask how they secure the data sources that shape responses. Require specifics and test them in proof of concept.
Move quantum readiness from theory to pilots. Work with identity and network vendors on post quantum roadmaps. Begin code signing or firmware signing pilots where modern signature schemes are supported. Prepare for larger key and signature sizes in tooling, storage and network paths. These are practical engineering tasks that reduce risk later.
Government and regulators continue to push providers and platforms to do more at source, from telecoms measures against spoofed calls to secure by design guidance for AI. Those moves help. They do not remove the need for strong internal controls and a culture that backs staff who slow things down. Fraudsters ask for speed and secrecy. Your controls must ask for time and verification. Your people must feel safe when they pause.
Boards should track two timelines at once. The first is the monthly reality of AI enhanced fraud and social engineering. Measure challenged payments, blocked logins and near misses. The second is the multi year programme to reach post quantum readiness. Measure inventories complete, pilots run and suppliers aligned with the new standards. Both are leadership work. Both are measurable.
The threat has not become magical. It has become faster, more convincing and cheaper to scale. The defence remains the same mix that works elsewhere in technology. Clear standards. Boring controls. Visible metrics. The discipline to start long projects while the roof is not on fire.
If you take only three actions, take these. Harden the payment process with dual control, timed delays and out of band checks. Put guardrails, least privilege and logging around every AI assistant that can see data or use tools. Start the post quantum inventory and the supplier conversations now.
What is your take? Where are you seeing AI persuasion or deepfake attempts in the real world? What would help your teams slow things down when it matters?
Let us share the good, the bad and the messy middle. If this helped, pass it to a colleague who approves payments or runs a service desk. They are the front line.
2025-10-04
Renault and Dacia have warned UK customers after a third party data processor was hit by a cyber attack. Personal and vehicle details may be affected. No passwords or payment data reported. No Renault systems compromised.
Image credit: Created for TheCIO.uk by ChatGPT
Renault Group UK has warned customers that a data processor it uses was hit by a cyber attack that led to theft of certain personal and vehicle details. The company says its own systems were not compromised. Passwords and payment information are not believed to be involved.
Renault and Dacia say the breach stems from a supplier system rather than their own networks. Early reporting indicates that data fields may include names, postal addresses, dates of birth, phone numbers, gender and vehicle details such as registration numbers and VINs. Officials have been notified and affected customers are being contacted.
Be cautious with unsolicited messages that reference your vehicle or service plan. Go directly to official channels rather than using links in emails or texts. Treat any request to change banking details as high risk and verify using a saved number. Watch for scams that reference your registration or VIN to build credibility. Consider setting credit alerts with UK agencies if you are concerned.
The automotive sector remains a prime target for criminals who exploit data rich supplier ecosystems. This incident appears focused on data theft rather than operations. The main risk for drivers is fraud and phishing that uses verified personal and vehicle details to create convincing lures.
2025-10-04
Cyber Awareness Month should be the start of a year of better habits, simpler processes and measurable risk reduction. This feature sets out a practical four week plan, the behaviours to model, and the metrics that prove impact.
Image credit: Created for TheCIO.uk by ChatGPT
If your awareness campaign still relies on stock posters and a phishing quiz, you are missing the point. October is a useful rallying point, but real resilience comes from habits, leadership attention and decisions that stack up across the year.
Cyber Awareness Month lands every October with good intentions. Many organisations schedule an email from the Chief Executive, a refreshed set of posters, and a compulsory eLearning module that everyone clicks through between meetings. The facts are familiar. Human decisions still sit at the centre of most incidents. Attackers need you to act quickly and thoughtlessly. Yet many campaigns are noisy, not effective. They start and finish in October, and they rarely change what people do when it matters.
This piece is for IT leaders who want Cyber Awareness Month to mean something. Not as a compliance exercise, but as a lever to build a practical security culture. That means making better choices easier. It means shifting from slogans to systems, from training events to redesigned workflows, and from vanity metrics to measures that tell you if risk is actually reducing.
Awareness months are tempting. They create a deadline, they gather attention, and they give you a platform. They also encourage a burst of activity that fades as soon as the calendar turns. The risk is that your organisation treats cyber awareness like fire drill day. People take part, they tick the box, and they go back to their normal habits unchanged.
The trap has three parts. First, campaigns over index on messages that tell people to be careful, rather than making it easier to be careful. Second, they rely on one off training that does not stick. Third, they measure participation, not outcomes. None of that reduces the chance of a payment being misdirected on a Friday afternoon or an attachment being opened in a hurry.
Compliance has its place. Policies matter. Standards force consistency. Audits reveal gaps. But compliance is not culture. Culture is what people do when the policy is not on their desk. Culture is the shared understanding of how we handle risk when time is tight and the stakes are high. If you want people to pause before they act, then you need to build that pause into how work is done and how success is judged.
A useful test is this. If your awareness month disappeared, what behaviours would continue anyway because they are built into your tools, your processes and your leadership routines? If the answer is very few, then you have a communications programme, not a culture.
The most effective campaigns begin with a simple map of likely harms. Pick the short list of real scenarios that hurt your organisation. Payment diversion after a convincing invoice. Account takeover after a password reuse incident. Sensitive data emailed to the wrong recipient. A contractor laptop lost on a train. Now work backwards. What decisions lead to those harms? Who makes those decisions? In what systems? Under what forms of pressure?
Once you have these paths in view, design your awareness effort to interrupt them. If payment diversion is a key risk, the priority is not another poster about phishing. It is a plain language policy for how bank details are changed. It is a mandatory pause in the finance system that asks for a second check on any change to payee details. It is a simple checklist for teams who speak to suppliers. It is a micro learning clip that shows a believable example of a fake change request and what a good challenge sounds like.
Security behaviour follows the leader. Staff pay attention to what leaders do, not only what they say. If executives do not use multi factor authentication, do not complete their own training, or insist that workarounds are fine when a deadline looms, then the culture learns a clear lesson about priorities.
During October, get senior leaders to model the habits you want. Ask them to narrate their own practice in a short video that everyone can see. How do they deal with a suspicious message? How do they handle sensitive documents while travelling? How do they hold their teams to account for risky shortcuts? Keep it specific. Keep it short. Behaviour spreads when people can see it.
Habits beat memory. You can improve habits by making the right action easier than the wrong one. That is the essence of secure by design. The following are practical ways to convert awareness into practice.
People rarely decide to be reckless. They get rushed. They are helpful. They are tired. Design deliberate pauses into the moments that matter. A second approval in the finance system for new payee details. A warning banner on external email. A short delay before messages from new domains are delivered. These are not silver bullets, but they catch a percentage of issues and they teach people what to notice.
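One of those pauses in sketch form: hold mail from first-contact domains for a short window before delivery. The fifteen minute hold and in-memory store are illustrative assumptions.

```python
# Minimal sketch: delay-before-delivery for domains never seen before.
from datetime import datetime, timedelta

seen_domains: dict[str, datetime] = {}  # domain -> first time observed
HOLD_PERIOD = timedelta(minutes=15)

def release_decision(sender: str, now: datetime) -> str:
    domain = sender.rsplit("@", 1)[-1].lower()
    first_seen = seen_domains.setdefault(domain, now)
    if now - first_seen < HOLD_PERIOD:
        return "hold"     # a brief quarantine creates the protective pause
    return "deliver"
```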
Default settings are decisions. If your collaboration platform defaults to open sharing, expect accidental exposure. If it defaults to private team spaces with explicit sharing, exposure is less likely. If every new user is enrolled in multi factor authentication by default, adoption is near total. Awareness that rides on secure defaults becomes reinforcement rather than the only line of defence.
You want people to report suspicious activity quickly. That only happens if the reporting process is faster than ignoring the problem. Add a report button in email. Accept incomplete reports. Thank people. Share the outcome. When staff see that reporting helps colleagues and that it leads to real action, participation increases.
Clear, practical rules beat dense policy text. Provide a single, easy to find guide about what to do if a device is lost, what to do if data is sent to the wrong person, and who to call if an account looks compromised. Print it on one side of paper. Publish it in the intranet. The moment an incident begins is not the time to search for policy documents.
Most staff are not fascinated by cyber security. They want to do the right thing, but they are busy. Traditional training often treats attention as unlimited. Long modules. Repetitive content. Generic scenarios. The result is fatigue and low retention.
Switch to content that fits the reality of a working day. Ten minute learning paths. Two minute clips that model a real conversation with a fraudster. Single question nudges inside tools. Quarterly refreshers that focus on new techniques attackers are using. Invite teams to send in real examples. Use those examples, with details removed, to teach the company how to respond.
Threats evolve. Awareness must keep up. Three areas deserve space in any modern programme.
Criminals can now produce convincing voice clones and video forgeries that mimic senior leaders. Teach people a simple rule for high risk requests. If the request asks for money movement, access changes or data disclosure, then confirm it on a second channel that you already trust. A voice call that you initiate. A message in the corporate chat that you start. Do not trust the channel that made the request.
Incidents in partners and suppliers become your incidents quickly. Your awareness culture should extend into contracts and onboarding. Do your vendors know your rules on payment changes? Do they understand your reporting route if something goes wrong? Share your one page incident guide with them. Invite their teams to your short awareness sessions. Culture spreads along the supply chain when you send it there deliberately.
Privacy law is often framed as compliance. The better framing for awareness is care and consequence. Staff who handle personal data should be taught to ask two questions every time. Do I need this data? Who will see it? That reflex leads to less collection, cleaner retention, and fewer incidents where people learn lessons the hard way.
You cannot manage what you do not measure, but not all measures are equal. Participation rates in training tell you something. They do not tell you if the organisation is safer. Better measures look at behaviour change and incident outcomes.
Useful measures include time to report suspicious messages, the proportion of reports that turn out to be genuine, how often policy exceptions are requested, and how quickly incidents are contained. Watch for leading indicators. Are staff challenging unusual payment requests more often. Are false positives reducing as people learn. Are departments with clearer processes suffering fewer near misses.
A strong metric for finance processes is the rate of rejected payment change requests because the second channel confirmation failed. That number should rise after you introduce the control, then fall as attackers redirect effort away from your organisation.
Smaller organisations sometimes feel locked out of good awareness practice because they lack budget. The basics are achievable with limited tools. Use public guidance from reputable sources and adapt it to your context. Record short explainer videos on a laptop. Hold short town hall sessions where you review a real incident from the news and discuss how your organisation would have handled it. Focus on the two or three risk scenarios that matter most for your revenue and relationships. The goal is not a glossy programme. It is fewer mistakes in the moments that count.
For very small teams, set up a monthly rhythm. Ten minutes in a team meeting to look at a fresh example. One control improvement per month, like enforcing multi factor authentication on one more system, or tightening sharing defaults on your document platform. Over a year those steps add up.
If October is your launch pad, the real test comes in November and beyond. Treat Cyber Awareness Month as the start of a ninety day push, not an isolated event. Set three objectives for the quarter that follows.
First, redesign one high risk workflow. Pick a process where mistakes are likely and consequences are serious. Payment changes, privileged access, or customer data handling are candidates. Map it, simplify it, and add guardrails.
Second, upgrade your reporting loop. Make reporting easier, shorten the response time, and publicise the wins. Staff need to see that speaking up makes a difference.
Third, secure your supply chain touchpoints. Update contract templates to include your awareness expectations. Run a shared session with your top suppliers on the risks you are seeing and the controls you expect when money or data is at stake.
Many awareness programmes talk past the people who shape day to day behaviour. Line managers decide what gets rewarded, what gets rushed, and what gets reviewed. Give managers simple tools. Provide a deck they can use in team meetings with two slides per month. One real scenario. One practice to reinforce. Give them a channel to escalate concerns from their teams. Recognise managers who improve their team metrics. Culture moves through managers faster than it moves through corporate comms.
Stories move people more than policy pages. Tell the stories of staff who stopped an incident by challenging a request, or who reported a suspicious message quickly. Share the lessons from incidents without blame. If people only see consequences when things go wrong, they will hide mistakes. If they see that reporting is valued and that lessons are shared, they will act sooner next time.
Think of your awareness programme as a product that serves the organisation. It has users with needs, pain points and jobs to be done. It has a roadmap. It has feedback loops. It has performance goals. Run it with the same discipline you would apply to a customer facing service.
That mindset shifts the conversation. You are not shipping content for the sake of it. You are improving outcomes. You are removing friction where it does not help and introducing friction where it protects. You are prioritising features, not producing collateral.
If you need proof that awareness can reduce risk quickly, start in finance. The scams are common and the decisions are consequential. Run a short workshop with finance leaders and administrators. Map the payment change process. Identify the points where fraudsters insert themselves. Add a rule that any change to bank details must be confirmed on a second channel using a number from your system of record, not from the email that made the request. Configure the finance system to require the second check and to record it. Inform your suppliers that this is your process. Then measure.
Follow up with a practice drill. Send a simulated change request that is good enough to fool a careful person. See how the team responds. Debrief what worked and what did not. This is awareness as practice, not content as output.
The right tools amplify awareness. Email security that flags known impersonation techniques. Identity platforms that make strong authentication painless. Document platforms that default to private and make sharing explicit. Device management that reduces the burden on staff while keeping assets patched and recoverable.
Invest with a clear principle. Tools should remove routine decisions from people and reserve human judgement for the cases where it adds value. If a system can make it impossible to send sensitive data to the wrong domain by default, do that. If a tool can quarantine a suspicious login while you check it, use it. Awareness then becomes the story you tell about why the tool behaves as it does, not a plea to be careful despite poor design.
Boards want assurance. They need to know that risk is understood and managed. Awareness reporting should avoid theatre. Instead of slides full of courses completed, present a simple picture of behaviour and outcomes. How quickly do staff report suspicious messages? What proportion of high risk requests are confirmed on a second channel? How many policy exceptions were sought and why? What changed in the last quarter as a result of what you learned?
When boards see that awareness is connected to real risk reduction, funding follows. When they see that your programme is changing how people work, they will champion it in their own areas.
Customers and partners draw conclusions from how you talk about security in public. Use October to publish a short note on your website that explains your approach. Describe the controls you expect on payment changes. Explain how to report suspicious communications that claim to be from your organisation. Share how you protect customer data. You are not revealing secrets. You are setting expectations and making it harder for attackers to imitate you.
For sectors that serve vulnerable people, such as education or healthcare, go further. Communicate in plain language with the families or patients you serve. Explain how you will contact them and what you will never ask them to do. Invite them to report suspicious messages. Awareness then becomes part of your brand promise.
If you need a starting plan for this month, use this as a blueprint.
Week one. Publish a simple message from the Chief Executive that describes the top three risks in your context and what staff should do in each case. Short and specific. In the same week, release a two minute video from a senior leader modelling how to challenge a payment change request.
Week two. Run a finance drill and a privileged access drill. For finance, simulate a bank detail change. For privileged access, simulate an urgent request to grant access out of hours. Measure response time and quality of challenge. Debrief openly. Fix gaps quickly.
Week three. Launch improvements to your defaults. Enrol remaining users in multi factor authentication. Tighten external sharing defaults in your document platform. Add an email warning banner for external senders if you do not already have one. Announce the changes with short, plain guidance about why they help.
Week four. Hold a short town hall. Share wins, lessons and the plan for the next quarter. Recognise colleagues who reported issues early or who improved a process. Publish your one page incident guide and your payment change rules in a place everyone can find.
The effectiveness of Cyber Awareness Month is judged in December, not at the end of the month. The real prize is a culture where awareness is obvious in the way work feels. Processes with sensible pauses. Tools that remove risky choices. Leaders who model the basics without drama. Metrics that tell a story of fewer near misses and faster recovery when something goes wrong.
If your organisation can say in December that payment change fraud attempts failed because people knew what to do, that incident reports arrived faster and more often, and that one risky workflow is now simpler and safer, then October did its job.
Cyber awareness is not a campaign. It is a set of design choices that make the safe path the easiest path. Use October to start a year of changes that matter. Pick the risks that actually threaten your organisation. Put leadership attention where it changes behaviour. Redesign the processes where errors happen. Measure the outcomes that show progress. Do those things and the posters become reminders, not the main act.
What is your plan for October? Which single workflow will you redesign first to reduce real risk?
2025-10-03
October is Cyber Security Awareness Month. The NCSC turns nine this year, and its guidance has never been more relevant.
Image credit: Created for TheCIO.uk by ChatGPT
October is Cyber Security Awareness Month, and it also marks the ninth birthday of the UK’s National Cyber Security Centre (NCSC). Since its launch in 2016, the NCSC has become central to the UK’s digital resilience, providing threat intelligence, guidance and practical tools for organisations of every size.
Its mission is simple but ambitious: to make the UK the safest place to live and work online. That means defending national infrastructure from advanced attacks while also equipping smaller firms and charities with practical protections. The NCSC’s Small Business Guide is a good place to start, with advice on passwords, two-factor authentication, software updates and secure backups. These are low-cost, high-impact steps that reduce everyday risks.
For larger organisations, the NCSC offers detailed guidance covering governance, risk management and supply chain resilience. Boards are encouraged to treat cyber security as a core business risk, not just a technical issue. With complex systems and wider attack surfaces, large organisations are also urged to test incident response plans and strengthen assurance across partners and suppliers.
As the NCSC turns nine, Cyber Security Awareness Month is the ideal moment for every organisation, large or small, to revisit its cyber priorities.
Read more for small businesses
Read more for large organisations
What’s your take? Is your organisation doing enough to apply both the basics and the board-level practices?
Let’s share the good, the bad and the messy middle of cyber resilience.
2025-10-01
Barclays’ Business Prosperity Index shows technology leaders now see Britain as the world’s most attractive place for growth, with AI investment surging and financial resilience strengthening, but ongoing government support still essential.
Image credit: Created for TheCIO.uk by ChatGPT
The United Kingdom’s technology sector has entered 2025 with a striking vote of confidence from industry leaders. Barclays’ latest Business Prosperity Index reveals that nearly two thirds of technology executives believe Britain offers a more attractive landscape for growth than the United States, Europe or Asia-Pacific. That finding may surprise those who expected post-Brexit uncertainty, global economic turbulence and geopolitical instability to blunt the UK’s appeal. Instead, it suggests that the country has reached a pivotal moment.
This is Britain’s tech moment. A convergence of factors is creating conditions that make the UK not just competitive, but magnetic to investors, entrepreneurs and global technology leaders. The reasons range from the depth of the customer base and the diversity of the talent pool to the rapid adoption of new technologies, especially artificial intelligence. Financial resilience is also bolstering confidence, with firms reporting stronger cash flow and reduced reliance on overdrafts.
Yet the Index also highlights a conditional note. Leaders remain clear that government support through targeted funding, fiscal incentives and grants will be essential to sustain momentum. The story of Britain’s tech sector is one of opportunity, but also responsibility — a chance to cement a position on the world stage, if both industry and policymakers can deliver.
The headline statistic is unambiguous. Sixty two percent of technology leaders surveyed believe the UK is the most attractive place for their company to grow. That figure surpasses confidence in the United States, traditionally seen as the beating heart of the global technology industry. It also exceeds expectations of growth potential in the powerhouse economies of Asia-Pacific and the large but fragmented market of continental Europe.
The UK’s allure is not new, but the scale of the shift is noteworthy. Just a decade ago many British firms looked to expand overseas in search of growth opportunities. The gravitational pull of Silicon Valley was particularly strong, and investors often urged start-ups to relocate or establish significant operations in California. Today the tide appears to be turning. Britain is not just retaining homegrown talent and investment but attracting global interest as a place to scale technology ventures.
Industry leaders point to several factors. The customer base in the UK is broad and digitally savvy, creating fertile ground for testing and scaling new products. The talent pool remains diverse and internationally connected, with universities and research centres producing skilled graduates in computer science, engineering and data analysis. The country also ranks highly in technology adoption, with businesses and consumers alike embracing innovations ranging from digital banking to healthtech applications at pace.
Artificial intelligence is the lightning rod for both investment and demand. Half of the companies surveyed by Barclays said they plan to increase their AI investment by at least twenty percent this year. This is not simply a case of experimenting with generative models or automating back-office processes. It reflects a strategic decision to embed AI deeply across products, services and operations.
The report highlights that ninety five percent of firms are seeing rising client demand for AI-enabled offerings. That figure is remarkable in its breadth. It suggests that AI has shifted from a niche capability to a mainstream requirement across multiple sectors. Financial services firms are deploying AI to enhance fraud detection and personalise customer experiences. Retailers are using it to optimise supply chains and predict consumer preferences. Healthcare providers are integrating machine learning into diagnostics and patient care.
The trajectory is clear. AI is not an optional add-on for businesses seeking to modernise. It has become a central component of competitive strategy. For technology companies based in the UK, this demand translates into significant opportunity to grow revenues, attract investment and expand internationally.
Confidence in growth is reinforced by a strong financial backdrop. The Business Prosperity Index shows that technology firms have improved their financial resilience over the past year. Cash flow positions are stronger, company savings have increased, and overdraft use has declined. These metrics indicate that firms are not just optimistic but are underpinned by tangible financial health.
In an era where economic uncertainty has become the norm, such resilience matters. Many technology businesses had to weather the dual challenges of inflationary pressures and fluctuating investor sentiment. That they are emerging with healthier balance sheets reflects prudent management and, in some cases, strategic refocusing. For scale-ups and mid-sized firms, stronger cash reserves provide the flexibility to invest in innovation and talent without overreliance on external capital.
The result is a sector that is not merely hopeful but demonstrably capable of sustaining growth.
Yet the report also underscores that optimism is conditional. Industry leaders repeatedly stress the need for continued government support. While private capital and entrepreneurial energy are essential, the framework provided by public policy can either accelerate or impede growth.
Leaders point to funding programmes, fiscal incentives and grants as pivotal to sustaining the UK’s competitiveness. Research and development tax credits remain particularly valued, especially for firms investing heavily in AI and advanced technologies. Targeted grants for innovation clusters in regions outside London are also seen as critical to ensuring that growth is not confined to the capital.
There is also a call for clarity. Shifting policies and inconsistent support risk undermining confidence. Companies need predictability to plan multi-year investments. They also need assurances that government will continue to back key infrastructure projects, from digital connectivity to skills development initiatives.
In short, the private sector is willing and able to drive growth, but leaders believe government partnership is essential to maintain momentum.
The UK’s diverse talent base is a core strength but also a potential bottleneck. The country benefits from world-class universities and a steady influx of international talent. London, Cambridge, Manchester and Edinburgh have become hubs of research and innovation, producing graduates with the skills needed to power technology companies.
However, the pace of technological change is relentless. As AI, quantum computing and cybersecurity evolve, the demand for specialist skills intensifies. Leaders warn that without sustained investment in training and upskilling, the advantage could erode. Programmes designed to reskill workers in digital competencies, coding and data literacy are vital. So too is maintaining an immigration framework that allows firms to access international expertise without excessive bureaucracy.
The sector’s future competitiveness depends on ensuring that the supply of talent keeps pace with demand. That means a focus not only on elite research but also on building a digitally confident workforce across industries.
Another striking theme is the growing importance of regional technology hubs. While London remains a global financial and digital centre, cities such as Manchester, Leeds, Birmingham and Bristol are establishing strong reputations in software development, fintech and creative industries. Scotland is gaining traction in data science and cybersecurity, with Edinburgh leading the way.
This regional diversification matters. It spreads economic growth more evenly and taps into local strengths. It also makes the UK more resilient by avoiding over-concentration of investment and talent in the capital. Government incentives and local partnerships are helping to fuel this trend, but sustained support will be necessary to ensure it continues.
Britain’s appeal must also be considered in the context of global competition. The United States still commands immense resources and a vast domestic market. Asia-Pacific, led by China, South Korea and Singapore, continues to push aggressively in technology investment and adoption. Continental Europe is striving to build its own digital sovereignty, particularly in areas such as data protection and AI regulation.
Against this backdrop, the UK cannot afford complacency. Its current momentum is significant, but it will be tested by both external competition and internal challenges. Regulatory clarity, international trade agreements and access to capital will all shape whether Britain consolidates its position as a global tech growth magnet or risks losing ground.
While much of the current optimism centres on AI, it also brings ethical and regulatory challenges. Firms are aware that rapid adoption without adequate safeguards risks public trust. Issues around bias, transparency and accountability in AI systems remain unresolved.
Leaders recognise that the UK has an opportunity to differentiate itself by setting robust but business-friendly standards for AI governance. Striking the right balance between innovation and regulation could position Britain as a leader in responsible AI. That would not only attract investment but also reassure clients and consumers that technology is being deployed ethically.
Beyond talent and finance, digital infrastructure will be a defining factor in sustaining growth. Leaders emphasise the importance of reliable connectivity, from 5G and fibre networks to emerging technologies such as edge computing. Businesses cannot build cutting-edge AI solutions or cloud-based platforms if the underlying infrastructure lags.
Investment in cyber resilience is also critical. As reliance on digital systems deepens, the cost of disruption rises. Firms want assurances that national cyber strategy is aligned with industry needs and that collaboration between government, regulators and private companies will continue to strengthen.
The technology sector’s growth is not just an industry story. It has wider implications for the UK economy. Technology firms are significant employers, taxpayers and exporters. Their products and services drive efficiency across multiple industries, from manufacturing to healthcare. The sector’s dynamism has a multiplier effect, stimulating demand for professional services, real estate and education.
If Britain succeeds in consolidating its position as a global tech growth hub, the benefits will ripple across the economy. Conversely, if momentum falters, the consequences will be felt far beyond the sector itself.
Despite the optimism, leaders are not blind to risks. Global geopolitical tensions could disrupt supply chains and dampen investor sentiment. Inflationary pressures remain, and access to venture capital can shift rapidly with market conditions. Talent shortages could intensify if training and immigration policies do not keep pace.
There is also the risk of over-reliance on AI as a panacea. While investment in artificial intelligence is crucial, firms caution against neglecting other areas of technology innovation, such as green tech, robotics and biotechnology. A balanced portfolio of investment will be key to long-term resilience.
Britain’s technology sector is entering a defining moment. Technology leaders see the UK as uniquely attractive for growth, citing strong customer demand, diverse talent and rapid adoption of innovation. They are investing heavily in artificial intelligence, buoyed by rising client demand and underpinned by solid financial health.
Yet this moment must be seized with care. Sustained government support, predictable policies, continued investment in talent and infrastructure, and a focus on ethical deployment of AI are all essential. Without these, the UK risks losing its edge in an intensely competitive global landscape.
For now, Britain’s tech moment is real. The question is whether it can translate confidence into lasting global leadership.
What’s your take? Do you see Britain’s tech moment as a lasting shift, or a temporary surge of confidence?
Let’s share the good, the bad and the messy middle.
2025-09-30
Hackers are no longer just battering firewalls. They are reaching out to employees directly with promises of life changing wealth. The insider threat has become the exploit of choice, and IT leaders must treat it as a frontline risk.
Image credit: Created for TheCIO.uk by ChatGPT
On the surface, the pitch was simple: hand over login details, collect a life changing sum of money, and walk away.
The reality was darker, more sophisticated, and closer to home than most organisations are willing to admit.
In late September, BBC cyber correspondent Joe Tidy revealed how he had been propositioned by a ransomware gang. Their offer was blunt. In exchange for handing over credentials to his BBC laptop, he would receive up to a quarter of any ransom collected. They assured him the payout would run into the millions. Their words were chilling in their confidence: “You would not need to work ever again.”
This was no phishing email blasted to thousands. It was a targeted, personalised approach. The criminal, going by the name “Syndicate” or “Syn”, claimed to represent Medusa, one of the most active ransomware as a service groups. Their strategy was not to batter the BBC’s digital defences but to quietly unlock the front door by persuading an insider to look the other way.
For organisations that have spent decades building stronger firewalls, multi factor authentication, and layered defences, the message could not be clearer. Hackers are shifting their attention to the weakest link of all: people inside the business.
The past decade of cyber security has been defined by arms races. As defenders built stronger tools such as intrusion detection systems, endpoint protection, and AI driven anomaly detection, attackers responded with more advanced malware, zero day exploits, and supply chain compromises.
But each new technical defence adds cost and complexity for attackers. Convincing an employee to open the door, by contrast, is relatively cheap. A one off conversation over Signal or Telegram can yield the same access as months of probing a network perimeter.
This shift is not hypothetical. Insider threats are becoming central to ransomware playbooks. In Brazil, just days before Tidy’s encounter, an IT worker was arrested for selling his credentials to hackers, costing his employer one hundred million dollars in losses. Other high profile attacks have followed similar paths.
For groups like Medusa, the logic is obvious. Why waste time coding exploits when employees can be persuaded to part with logins for a cut of the payout?
Medusa is not a lone hacker group but a service platform. Its operators provide the ransomware infrastructure such as encryption tools, negotiation channels, and leak sites, while affiliates carry out the attacks. In that sense, Medusa resembles a franchised business.
According to Check Point research, Medusa avoids Russian targets and operates heavily on Russian speaking forums. Its darknet site lists dozens of victims, from healthcare to emergency services. Affiliates can sign up, recruit insiders, and run operations almost like a start up.
For companies on the receiving end, this model creates unpredictability. The group may be highly professional in one case and reckless in another. Victims cannot rely on consistent behaviour. In Tidy’s case, the professional tone of early messages shifted to crude pressure tactics like multi factor authentication bombing, where the target is flooded with login requests.
That volatility is itself a danger. It demonstrates how gangs blend credible offers with intimidation, keeping targets off balance.
The anatomy of the attack against the BBC journalist follows a pattern that IT leaders should study closely.
Initial Outreach
The hacker made contact via Signal, suggesting reconnaissance had already taken place. They assumed Tidy had privileged access to BBC systems.
Financial Incentive
The initial offer was fifteen percent of ransom payments, quickly raised to twenty five percent of projected millions. Criminals use money not just as an incentive but as a way of inflating urgency.
Normalisation of Treachery
Syn claimed insider deals were routine, citing unnamed healthcare and emergency services breaches. The goal was to make betrayal feel less exceptional.
Escalation and Pressure
When persuasion stalled, the hackers shifted to coercion through multi factor authentication bombing, disrupting Tidy’s phone and daily work.
Deposit Guarantee
A promise of a half bitcoin “guarantee” was meant to lend credibility. In reality it was smoke and mirrors, since no such guarantee could ever be enforced in these arrangements.
Exit or Ghosting
When persuasion failed, the group withdrew, deleting accounts to cover their tracks.
For IT leaders, this sequence illustrates how employees may be targeted over days or weeks. The blend of financial carrot and technical stick makes insider recruitment both dangerous and difficult to detect.
Cyber defences tend to focus on technology. Yet insider threats are human by definition. To understand them, we must understand the psychology at play.
Insider recruitment thrives on dissatisfaction. An underpaid employee with financial stress, a contractor with no loyalty to the brand, or a disillusioned worker feeling overlooked may all be more open to persuasion. The promise of never needing to work again plays directly into these vulnerabilities.
Criminals also exploit isolation. Reaching out on encrypted apps makes the conversation feel secret and detached from the professional environment. Employees may rationalise their behaviour as a victimless crime, especially when the attacker insists: “We do this all the time.”
For leaders, the lesson is clear. Cyber resilience is not just about patching servers but about creating cultures of loyalty, inclusion, and vigilance.
The attack Tidy experienced shows how criminals mix old and new tactics. Multi factor authentication was once heralded as a silver bullet against credential theft. Yet attackers now exploit its weakest feature: human fatigue.
By flooding a phone with push requests, criminals rely on mistakes, annoyance, or complacency. Uber’s 2022 breach followed this very route. Tidy’s experience shows it remains a favoured tool, capable of bypassing even hardened environments if vigilance slips.
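Defences can start with a simple rate check: too many prompts in a short window means stop prompting and escalate rather than send another push. The window and threshold below are illustrative.

```python
# Minimal sketch: detect push fatigue and refuse to keep prompting.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)
MAX_PROMPTS = 5
recent_prompts: dict[str, deque] = defaultdict(deque)

def should_send_push(user: str, now: datetime) -> bool:
    prompts = recent_prompts[user]
    while prompts and now - prompts[0] > WINDOW:
        prompts.popleft()                # drop prompts outside the window
    if len(prompts) >= MAX_PROMPTS:
        return False                     # likely bombing: block and escalate
    prompts.append(now)
    return True
```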
That is the paradox of security. Each defence eventually becomes an attack vector in its own right.
The BBC case is high profile, but the vulnerabilities it reveals are universal.
Large, complex networks mean few leaders can map who has what access. Attackers exploit this ambiguity.
Hybrid working blurs the boundary between professional and personal devices, creating more attack surfaces.
Economic uncertainty makes insider payouts more attractive than ever.
Global ransomware operations now run like businesses, professionalising outreach to insiders.
Even companies with strong technical defences cannot assume immunity.
For IT leaders, the takeaway is urgent. Insider threats are no longer fringe concerns. They must be treated as primary risks alongside phishing, supply chain compromise, and zero day vulnerabilities.
That requires action on several fronts.
Cultural Resilience
Build trust and loyalty. Employees who feel valued are less susceptible to betrayal. Regular communication from leadership on security values reinforces this.
Technical Controls
Implement least privilege access. Audit accounts regularly. Monitor for unusual behaviour such as login attempts from unusual locations or devices.
Awareness Training
Teach staff that insider recruitment is real, not hypothetical. Role play scenarios to help them recognise approaches.
Incident Response
Have clear playbooks for suspected insider compromise, including rapid isolation of accounts as the BBC did with Tidy.
Multi Factor Authentication Hardening
Move from push based multi factor authentication to phishing resistant methods like hardware tokens or passkeys where possible.
Too many organisations treat insider risk as a compliance tick box. A policy document exists, so the issue is considered addressed. The Medusa case shows this is insufficient.
Real resilience means moving beyond compliance to culture. Employees should understand not just the rules but the purpose behind them. They should feel ownership of protecting their organisation’s reputation and mission.
Joe Tidy’s experience is more than an unusual journalistic anecdote. It is a warning sign. Hackers are openly, confidently, and aggressively recruiting insiders. They see this as the next frontier in cyber crime.
What makes the tactic so dangerous is its simplicity. No advanced exploit is needed, no months of reconnaissance. A Signal message, a promise of millions, and a few nudges may be enough.
For IT leaders and boards, the conclusion is unavoidable. Insider risk is no longer an abstract threat to be mentioned in the footnotes of risk registers. It is a frontline exploit, actively pursued by some of the world’s most prolific ransomware groups.
The question is not whether criminals will continue to make these approaches. They will. The question is whether employees are prepared to respond in the right way, and whether leadership has built cultures strong enough to withstand the most seductive of pitches: “You would not need to work again.”
What is your take? Should boards be treating insider threats as the number one cyber risk of the next decade? Or is this another scare story that will fade as defences evolve?
Let us share the good, the bad, and the messy middle.
2025-09-28
The hack of the Kido nursery chain, with criminals publishing children’s profiles and even calling parents, exposes a brutal new frontier in cyber extortion. It also shows why childcare providers and IT leaders must build data protection on empathy and discipline.
Image credit: Created for TheCIO.uk by ChatGPT
The attack on the Kido nursery chain is a line in the sand for the early years sector. A criminal group that calls itself Radiant claims to hold pictures and private data of thousands of children and their families. It has already posted profiles of children online and has released staff records that include home addresses and National Insurance numbers. Parents report that the criminals have telephoned them and demanded they pressure the nursery chain to pay. For a sector that runs on trust and care, this is a brutal shock.
BBC reporting by cyber correspondent Joe Tidy set out the facts. Twenty child profiles appeared on the criminals’ site within two days. The gallery included nursery photographs, dates of birth, birthplaces and details about household composition and contact points. Kido told parents that criminals accessed data that was hosted by a widely used early years software service called Famly. The company behind the software has condemned the attack in strong terms and has said it has found no breach of its own infrastructure. The Metropolitan Police is investigating. Ciaran Martin, former head of the National Cyber Security Centre, called the criminals’ behaviour absolutely horrible while urging calm, noting that the risk of direct physical harm to children is extremely low.
Alongside the main story, two short updates on LinkedIn added an unsettling twist. Tidy shared that the criminals told him they would blur the faces of children on their leak site after seeing the public reaction. They were still publishing data and still extorting, but now claimed to see that full images crossed a moral line. That change of presentation does not reduce the risk. It reveals a different point. The group is tracking the public response and shaping its tactics to maximise pressure and attention. That is exactly why leaders cannot feed the drama and why the response must place families first.
Radiant says it will publish more profiles unless it is paid. The group admits it is motivated by money. In messages to the BBC it even claimed to have hired people to make the threatening calls. That is unusual in data extortion. Pressure is usually applied to the institution rather than to individual families. It suggests the nursery chain is not complying and the criminals have chosen to step over another line.
The facts matter, but the context matters more. Early years providers collect and hold sensitive data about children and their families as part of daily care and learning. That includes images of activities, observations about development, names of relatives, addresses and emergency contacts. It can include health notes, allergy details and safeguarding records. In most settings, this information is entered by staff into a cloud platform that promises to make record keeping easier and to keep parents in the loop. Done well, this supports learning and strengthens the relationship between home and nursery. Done carelessly, it creates a single point of failure that can be abused for extortion and harassment.
The Kido incident matters because it shows three risks converging. The first is a supply chain dependency on software providers that are outside the direct control of the setting. The second is a habit across the sector of collecting more information and storing it for longer than is strictly required. The third is an escalation in criminal behaviour that deliberately seeks to frighten parents. The combination turns a technical breach into a community crisis.
Ciaran Martin’s assessment that the attack is absolutely horrible captures the public mood. Parents are angry and anxious. Staff are upset and defensive. It is a shock to see the faces of children dragged into a criminal spectacle. Yet it is also important to hold on to two truths. The first is that criminals are using fear to gain leverage. The second is that most of the harm we face here is emotional, social and reputational rather than physical. That does not make it trivial. Emotional harm can be profound and lasting. It does shape the response, because the right approach is to protect families, reduce attention for the criminals and restore trust through steady action.
Nursery data is particularly sensitive because it is an intimate record of daily life. A photograph of a play session that feels harmless in a private gallery takes on a different meaning when copied to a criminal site or shared out of context. A simple profile with a name, a date of birth and an address becomes a vector for fraud or harassment. That is why early years providers must treat digital records with the same seriousness as they treat physical safeguarding.
The LinkedIn screenshots are instructive. The criminals contacted a journalist to signal that images would be blurred. They wanted that change to be noticed. It reads like a performance. They still hold the data. They still publish personal details. They still call parents. The shift is about optics. The lesson for leaders is straightforward. The information environment around a live extortion attempt can be manipulated. Media teams and nursery leaders must avoid language or actions that confer legitimacy or feed the drama. Clear, factual updates to families and staff are necessary. Running commentary on criminal tactics is not.
Kido told parents that criminals accessed data hosted by a software service. Famly has said it has found no breach of its own systems and that no other customers were affected. Only the investigation can determine the full path the attackers took. Regardless of the technical findings, the big question for the sector does not change. How should a nursery select and govern a platform that holds the personal data of children and families every day?
There are several practical issues to consider. The first is identity control. A modern service should support single sign on for staff, strong authentication for all users and automatic removal of access when people leave. The second is visibility. A provider should give the nursery access to detailed audit logs that record logins, changes to permissions and bulk downloads. The third is retention. The platform should allow the customer to define how long images and observations are kept and to enforce automatic deletion. A fourth is export and deletion on exit. Nurseries deserve the guarantee that they can retrieve all records in a standard format and verify that copies are deleted when they leave a platform. These are not luxury features. They are essentials for a sector that holds sensitive records about children.
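To make that list usable in procurement, it can be written down as data and scored against each candidate platform. A minimal sketch in Python, with hypothetical feature keys standing in for whatever a supplier actually calls these capabilities:

```python
# A minimal sketch of a procurement scorecard; the criteria mirror the
# essentials above, and the feature keys are hypothetical.
ESSENTIALS = {
    "single_sign_on": "single sign on for staff",
    "strong_auth": "strong authentication for all users",
    "auto_deprovision": "automatic removal of access when people leave",
    "audit_logs": "logs of logins, permission changes and bulk downloads",
    "custom_retention": "customer-defined retention with automatic deletion",
    "export_on_exit": "full export in a standard format on exit",
    "deletion_on_exit": "verifiable deletion of copies after exit",
}

def gaps(provider_features: set) -> list:
    """Return the essentials a candidate platform is missing."""
    return [label for key, label in ESSENTIALS.items()
            if key not in provider_features]

# Example: a provider offering logs and export, but no single sign on.
print(gaps({"audit_logs", "export_on_exit"}))
```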
Parents quoted by the BBC describe concern and anger, but also sympathy for staff. That last point is important. Early years teams care deeply about the families they serve. They feel a personal responsibility when trust is shaken. Leaders should communicate with that in mind. Staff need clear guidance, reassurance and support. They also need practical steps that reduce the chance of future harm. Training should be short, regular and specific to daily tasks. The right camera app to use. The correct way to save images. The steps to take when a strange call or message arrives. The goal is to make safe behaviour the easy behaviour.
Families need clarity, not technical language. A parent who receives a threatening call needs to know three things. Do not engage. Capture details. Tell the nursery. The setting needs a process to collect those reports and pass them to the police with timestamps and any screenshots. It should also publish a simple explanation of what data is held, why it is held, how long it is kept and how parents can raise concerns. That transparency builds trust. It also creates permission to delete more and to hold less.
Data minimisation sounds like a policy slogan. In a nursery it is a set of practical choices. Start with a list of every field your platform collects about a child and a family. Mark which ones are essential to care and safety. Mark which ones are optional. Remove the optional ones. Next, separate sensitive notes from routine observations. Restrict access to the sensitive notes to a smaller group with enhanced logging. Then set retention rules that match real needs. A daily photograph of a craft table does not need to live for years. If it brings joy at the end of the week, that is enough. Delete by default unless there is a clear reason to keep.
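Those retention choices are simple enough to encode, which is what makes delete-by-default enforceable rather than aspirational. A minimal sketch, assuming hypothetical record types and periods:

```python
from datetime import date, timedelta

# Hypothetical record types and retention periods. None marks records
# governed by separate safeguarding rules rather than routine deletion.
RETENTION_DAYS = {
    "gallery_photo": 7,
    "routine_observation": 365,
    "safeguarding_note": None,
}

def due_for_deletion(record_type: str, created: date) -> bool:
    # Delete by default: unknown record types get zero-day retention.
    days = RETENTION_DAYS.get(record_type, 0)
    if days is None:
        return False
    return created + timedelta(days=days) < date.today()
```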
There is also a place for simple technical tweaks that reduce risk without changing the experience. Use a workflow that stores original images in a restricted vault and serves a blurred or cropped version in the parent gallery for routine updates. Strip metadata such as precise location from all media. Add a small watermark. Use expiring links for shares. Limit downloads and encourage viewing in the portal. None of these steps remove the need for strong security. They do change the value of the data to criminals and limit how far it can spread if copied.
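The vault-and-gallery workflow can be prototyped in a few lines. A sketch using the Pillow imaging library; the paths are illustrative, and rebuilding the image from pixel data is what discards metadata such as location:

```python
from PIL import Image, ImageFilter  # pip install Pillow

def gallery_copy(original_path: str, gallery_path: str, blur_radius: int = 6):
    """Derive a metadata-free, blurred copy for the parent gallery.

    The original stays in the restricted vault; only this derived
    version is shared day to day.
    """
    img = Image.open(original_path).convert("RGB")
    clean = Image.new("RGB", img.size)   # a fresh image carries no EXIF/GPS
    clean.putdata(list(img.getdata()))
    clean = clean.filter(ImageFilter.GaussianBlur(blur_radius))
    clean.save(gallery_path)

gallery_copy("vault/craft_table.jpg", "gallery/craft_table.jpg")
```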
Many nurseries are small charities or small businesses. Budgets are tight. There is no dedicated security team. That does not mean improvement is out of reach. Priorities matter. Focus on the two or three controls that cut the most risk in the shortest time. Strong authentication based on an app or passkeys for staff and administrators is one. Removal of shared logins is another. Managed devices for staff who take and upload photographs are a third. Add regular backups with a tested restore, and insist on audit logs from your software provider. That short list will move you a long way.
The next layer is preparation. Create a one page plan for incidents that names your safeguarding lead, your nursery manager, your data protection lead and the trustee or owner who will act as a decision maker. Write three messages in plain language. Suspected incident. Confirmed incident. Recovery and support. Test the plan twice a year with a short tabletop exercise. Include a scenario where a parent receives a threatening call. This is not a bureaucratic ritual. It is rehearsal for a stressful day. Practice makes calm possible.
Criminals thrive on attention. A measured communication plan can deny them that attention while still keeping families fully informed. Avoid dramatic language. Stick to evidence. Share what you know, what you are doing and what you will do next. Explain how parents can help and how to get support. Provide a dedicated mailbox and phone number for queries during the incident so classroom staff can focus on care. Keep a record of all communications in case insurers or regulators need a timeline. Resist the temptation to speculate about the attackers, their nationality or their motives. Speculation almost always helps them more than it helps you.
Police advice remains the same in every major extortion case. Do not pay. Payment fuels the criminal ecosystem, brings no guarantee of deletion and often leads to further demands. Work with the police service to collect evidence. Keep copies of criminal messages and screenshots of any websites or social media posts. Preserve system logs and keep a note of times and actions. If the risk to the rights and freedoms of children or staff is high you must notify the Information Commissioner’s Office within seventy two hours of becoming aware of a breach. You must also communicate with those affected without undue delay if they face a high risk.
Insurers can provide practical help, but policies vary. Early years leaders should review cover and understand what is included, what triggers apply and which experts are available during a live incident. Clarify whether the policy provides an incident coach, legal counsel and forensic support. Ask how the insurer coordinates with the police. Preparation pays off here as well. On the day of an incident you will not want to read policy documents.
Responsible reporting is important. The BBC has stated it will not provide a running commentary on the criminals’ actions. That is a sensible stance. It reduces the oxygen that criminals seek while allowing the public to stay informed. Settings should follow the same principle. Share necessary facts and support, not blow by blow updates on criminal boasting. Where coverage draws public attention, use that moment to explain good practice and to offer guidance to parents about staying safe online. Do not engage with anyone claiming to represent the criminals. Pass information to the police and stick to your communications plan.
A nursery is not a bank. The controls must be proportionate and usable. There are still a few essentials. Every device used by staff for work should be encrypted and protected by a screen lock with a short timeout. The use of personal devices for photographs or records should stop and should be replaced by managed phones or tablets that route images straight to a secure cloud folder. That folder should be in a major cloud service with versioning and immutable backups. Access should be controlled by roles and groups rather than by ad hoc sharing. The parent portal should offer modern authentication options and should allow the setting to restrict downloads and set retention automatically. Networking kit should be kept simple but effective, with separate networks for staff, parents and any internet connected cameras or door entry systems. Default credentials on cameras and other devices should be changed and firmware updates applied on a set schedule.
There is also value in simple detection. Turn on audit logs in every service you use. Forward them to a low cost log service. Set a few alerts for suspicious behaviour such as repeated failed logins or unusually high volumes of downloads from a single device. Create a handful of dummy records that act as canaries and alert if they surface outside the expected environment. If budget allows, consider a basic dark web monitoring service run by a trusted partner. These steps do not replace prevention. They give you an early signal that something is wrong.
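None of this requires a security product to start. A short script run daily against exported audit logs can raise the alerts described above; the event field names here are assumptions about whatever your platform exports:

```python
from collections import Counter

CANARY_IDS = {"child-canary-001"}  # dummy records no one should ever open

def scan(events: list, fail_threshold: int = 5, bulk_threshold: int = 200):
    """Flag repeated failed logins, bulk downloads and canary access."""
    alerts = []
    failures = Counter(e["user"] for e in events if e["action"] == "login_failed")
    downloads = Counter(e["device"] for e in events if e["action"] == "download")
    for user, n in failures.items():
        if n >= fail_threshold:
            alerts.append(f"{n} failed logins for {user}")
    for device, n in downloads.items():
        if n >= bulk_threshold:
            alerts.append(f"{n} downloads from {device}")
    alerts += [f"canary record opened by {e['user']}"
               for e in events if e.get("record_id") in CANARY_IDS]
    return alerts
```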
Early years leaders often feel they have little leverage with software firms. That is understandable, but there is still room to insist on essentials. Ask for a short security overview that covers how the provider tests its software, how it stores and encrypts data, where its data centres are located and how it will notify you of incidents. Ask to enable single sign on through your identity provider. Ask for audit log export. Ask for a clear exit process that includes deletion of your data when you leave. If a provider cannot offer these basics, consider whether the convenience it offers is worth the long term risk.
A parent who receives a criminal call or sees their child’s details on a leak site needs human support as well as procedural guidance. Settings can prepare for that reality. Nominate a small team to handle parent conversations during an incident. Give them a short script that explains what the nursery is doing, what the police are doing, and what steps the family can take to reduce risk. That might include watching for suspicious emails or calls, changing passwords for any shared services, and speaking to older siblings about not amplifying content online. Provide links to independent advice from trusted bodies. Above all, listen. Parents want to be heard. Anger is a rational response. Treat it with respect.
The internet has a long memory. Even if criminals remove content, copies can persist. That is another reason to minimise what is stored and to limit where images appear. It is also a reason to avoid public speculation about the identity of any particular child in leaked data. Do not inadvertently confirm details that will live forever in search results. Think about the future child who grows up and searches their own name. Choices today affect that experience.
Leadership teams always ask what can be done in the next few weeks that will make a real difference. There is a sensible path. In the first week, enforce stronger sign in for staff and administrators, remove shared accounts and test a restore from backup. In the second week, map the data you hold and set clear retention rules for photographs and observations, then deploy a managed camera workflow that stores originals securely and serves safer versions in daily galleries. In the third week, review your contracts with your main software provider and insist on audit logs and a clear incident process, then publish a plain language notice to parents that explains what you hold and why. In the fourth week, run a short tabletop exercise with the leadership team, the safeguarding lead and the office team, and set up a simple process to collect and handle any threatening calls or messages. Four weeks of steady work will move your posture from hope to practice.
This incident is a warning to every organisation that works with children and families. Digital convenience has reshaped the classroom and the nursery. It brings real benefits. It also concentrates risk. When institutions depend on a few large platforms that store sensitive data for many customers, criminals have an incentive to probe and to extort. The answer is not to retreat to paper and polaroids. The answer is to combine careful procurement, strict data minimisation, simple technical controls and a culture that aims to do the right thing when under pressure.
The early years community is resilient. Parents and staff make trade offs every day in the interests of children. That same spirit can guide the digital response. Collect less. Share less. Keep less. Secure everything you keep. Practise your response. When something goes wrong, place children and families at the centre of every decision. Speak clearly. Support those who need help. Work with the authorities. Do not pay.
Protecting nurseries is about empathy and discipline. Empathy for families who trust the setting with their children’s images and stories. Discipline to minimise what the setting collects, to secure who can see it, and to practise what the setting will do on its hardest day. Childcare settings need to cut the risk sharply and put the needs of children and parents at the centre of their cyber posture.
What’s your take? How should nurseries balance the benefits of digital platforms with the risks of storing sensitive child data?
Let’s share the good, the bad and the messy middle.
2025-09-24
Airlines and airports face a sharp escalation in cyberattacks, shifting from data theft to operational disruption that strands passengers and dents trust.
Image credit: Created for TheCIO.uk by ChatGPT
The aviation industry is battling a new kind of turbulence, one not caused by weather or mechanical faults but by cyberattacks. From data breaches to large scale operational disruption, airlines and airports are facing an escalating wave of digital threats that are grounding flights and exposing millions of passengers to risk.
Between January 2024 and April 2025 the sector endured 27 ransomware attacks, a 600 per cent increase on the previous year. Already in 2025, at least ten major incidents have been reported, including the breach at Qantas which exposed records of up to six million passengers and the Collins Aerospace attack that took down check in systems at Heathrow, Brussels and Berlin.
The British Airways breach in 2018 was an early sign of aviation’s exposure. Almost 400,000 customers had their personal data compromised, and BA was fined £20 million by UK regulators. Back then, the prize for attackers was data. Now the priority has shifted.
Recent incidents reveal a preference for disruption. The Swissport ransomware attack in 2022 delayed flights across Europe, while SpiceJet in India suffered grounded services after its systems were hit. The Collins Aerospace outage this year underscored how fragile the sector’s reliance on third party systems has become. One supplier’s compromise rippled across multiple airlines and airports, leaving passengers stranded.
These attacks are not just about theft or extortion. They are a stress test for aviation’s resilience. British Airways, for example, was able to minimise disruption during the Collins Aerospace outage thanks to backup systems. Other airlines without such safeguards faced significant delays. The difference was preparation.
Regulators have acted on data protection, handing out fines for breaches, but operational resilience is harder to enforce. It demands investment in redundancy, better incident planning, and the recognition that cyber is no longer a back office issue. It belongs in the boardroom alongside safety and compliance.
Aviation’s reputation has always rested on its safety record. Passengers expect aircraft to be airworthy and airports to be secure. But safety is no longer defined solely by engines and runways. If the systems that plan flights or manage boarding are compromised, aircraft stay on the ground.
Cybersecurity has become as central to aviation as aircraft maintenance. With the pace of attacks accelerating, the industry must act decisively. Cyber resilience is not optional. It is now part of the licence to operate.
What’s your take? Where should aviation leaders focus first to build resilience without slowing operations?
2025-09-22
A cyber attack on Collins Aerospace software left Heathrow and other European airports struggling with manual check ins. The incident reveals how fragile aviation’s digital backbone has become and the wider lessons for IT leaders.
Image credit: Created for TheCIO.uk by ChatGPT
When Heathrow announced on Saturday 20 September that a technical issue had delayed flights across its terminals, the official language gave little away. Within hours it became clear that the world’s busiest international airport was caught up in a cyber incident that had rippled across Europe. The disruption was not limited to Heathrow. Brussels, Berlin and Dublin all reported knock on effects as airlines reverted to manual check ins and baggage handling.
The source of the problem lay not with the airports themselves but with Collins Aerospace, a division of RTX, whose MUSE platform is widely used to manage shared check in desks and boarding gates across multiple carriers. The company confirmed a cyber related disruption that affected electronic check in and baggage drop, and said the impact could be mitigated with manual procedures. For passengers stranded in long queues or sitting on aircraft without information, the distinction between technical outage and cyber attack mattered little. For IT leaders, however, the nuance is critical.
Modern aviation is less about planes and runways than it is about complex, interconnected systems. Reservation platforms, crew scheduling tools, baggage routing databases and departure control systems must all interoperate with almost no margin for error. MUSE, the multi user shared environment used by many airlines, is designed to streamline the customer journey by allowing carriers to share infrastructure within a terminal. When it fails, the efficiencies it creates turn instantly into vulnerabilities.
Overnight Heathrow said work continues to resolve and recover from the outage at the Collins Aerospace platform that underpins airline check in. The airport apologised for delays, said the vast majority of flights had continued to operate, and advised passengers to check flight status with their airline and not arrive earlier than three hours for long haul or two hours for short haul. That guidance helped reduce overcrowding in terminals while manual processes were in place.
"We are continuing to resolve and recover from the outage. We apologise to passengers who have faced delays. The vast majority of flights have continued to operate. Please check flight status with your airline before travelling and do not arrive earlier than three hours for long haul or two hours for short haul flights," Heathrow said in a public update.
Heathrow has also stressed that it does not own or operate the affected system, and that responsibility lies with Collins Aerospace.
Heathrow’s ability to continue operating, albeit with delays, reflected the resilience of having manual fallback procedures and some airlines maintaining their own contingency platforms. British Airways switched to a backup system that kept its flights running more smoothly than others. That capacity to adapt is the difference between crisis and catastrophe. But the scale of disruption, with hundreds of flights delayed across the continent, showed how dependent the sector has become on a handful of digital providers.
Behind every technical failure is a human story. Passengers at Heathrow spoke of hours queuing to check in, staff tagging luggage by hand, and digital boarding passes failing at the gate. Families missed connections, travellers sat on tarmacs without information, and some never reached urgent destinations. Airports added staff to manage queues and tried to prioritise certain flights, but the lack of clarity left many exhausted and angry.
For businesses, the financial toll of such disruption is severe. Airlines must handle compensation claims, reschedule flights and cover costs for food and accommodation. Airports lose revenue and suffer reputational damage. For governments, the sight of queues stretching through terminals is a reminder that cyber security is no longer an abstract IT concern but a national infrastructure priority.
By Sunday 21 September and into Monday 22 September, disruption was still being felt. Brussels Airport said 86 per cent of flights were delayed by mid afternoon on Sunday, and requested airlines cancel around half of departures scheduled for Monday. More than 600 flights from Heathrow were disrupted on Saturday, though by Monday most flights there were operating close to normal with longer check in and boarding times. Dublin and Berlin continued to report knock on delays, and Cork saw a minor impact. Passengers at Brussels and Berlin in particular faced ongoing uncertainty as airlines sought to manage schedules with manual processes.
Aviation is one of the clearest examples of critical national infrastructure that now operates as a digital ecosystem. Cyber disruption at an airport is not merely an inconvenience for travellers. It has knock on effects for trade, supply chains and international relations. When ministers receive security briefings on cyber threats, aviation sits alongside energy grids, health systems and financial markets.
The National Cyber Security Centre said it was working with Collins Aerospace and affected UK airports, alongside the Department for Transport and law enforcement, to understand the impact of the incident. The NCSC also urged organisations to make use of its free guidance, tools and services to reduce cyber risk and strengthen resilience.
"We are working with Collins Aerospace, affected UK airports, the Department for Transport and law enforcement to fully understand the impact of this incident. We encourage all organisations to make use of the NCSC’s free guidance, tools and services to reduce cyber risk and strengthen resilience," an NCSC spokesperson said.
The European Commission said it was monitoring closely, while noting there was no indication that the incident was widespread or severe. That assessment may comfort officials, but the fact remains. A single compromise at a supplier cascaded into delays across multiple sovereign states within hours.
Collins Aerospace has said it is in the final stages of delivering secure software updates to restore full functionality of MUSE, though airlines have been warned disruption could continue for days as the updates are rolled out and tested.
Speculation quickly turned to who might be behind the attack. Some voices suggested Russian involvement, citing broader tensions. Yet most large scale cyber incidents of the past few years have been the work of organised criminal gangs, many of which operate from Russia or other former Soviet states. These groups are motivated by profit, using ransomware and extortion tactics to force victims to pay in cryptocurrency.
For now, Collins Aerospace has not confirmed whether ransomware was involved. Cyber experts note that such disruption can be caused by both criminal gangs and state sponsored actors. It is also possible that probing by hostile groups had unintended consequences. The uncertainty is itself damaging, fuelling rumours and undermining trust.
The aviation sector has recent experience of digital disruption. In July a faulty software update from a widely used security platform caused global IT crashes that grounded flights in the United States and delayed travel worldwide. That event was not malicious, but it demonstrated how fragile aviation’s digital backbone can be. The Heathrow incident, by contrast, appears to be a deliberate cyber attack. Taken together, the two episodes highlight the same point. Resilience is as much about anticipating digital fragility as it is about preventing hostile intrusions.
For IT leaders across sectors, the message is clear. Contingency planning cannot remain a compliance exercise. It has to become part of organisational culture. Having manual workarounds, tested regularly, ensures that when digital systems fail the business does not collapse. Heathrow, Brussels and Berlin all kept passengers moving, slowly, because staff could revert to phones and paper based methods. It was inefficient, frustrating and costly, but it worked.
The danger is that once the crisis passes, organisations slip back into complacency. Executives congratulate teams for getting through the disruption and carry on as before. True resilience requires institutional memory. It means treating every disruption as an opportunity to strengthen procedures, rehearse backup plans and invest in more robust architectures.
Aviation’s reliance on third party suppliers mirrors challenges faced across industries. From cloud computing to payment processing, organisations entrust critical functions to external vendors. The Heathrow incident underscores the importance of understanding those dependencies in detail. Leaders need to know which systems are run by suppliers, where those suppliers host their infrastructure, and what alternative providers or in house contingencies exist if one fails.
Too often boards are reassured by contracts and service level agreements without asking the harder questions about resilience. As the July crash showed, even reputable providers can cause global outages. As the Collins Aerospace disruption showed, a failure in a relatively narrow layer can ripple into chaos.
This incident arrives at a time when digital resilience is high on the political agenda. Proposals to tighten obligations on operators of essential services have focused attention on how sectors such as aviation, health and energy prove that they can withstand attacks and recover quickly. Saturday’s events and the extended disruption into the week will add urgency to calls for tougher rules on supply chain security, more rigorous stress testing and clearer accountability when things go wrong.
The Transport Secretary said she was receiving regular updates and monitoring the situation. That vigilance is welcome, but the public will expect more than oversight. They will want assurances that aviation can withstand the next attack. The business community will be asking similar questions about their own dependencies.
IT leaders and their teams must deliver seamless digital services to customers while preparing for the moment those systems fail. They must persuade boards to invest in resilience, even when budgets are tight and the immediate return is hard to quantify. They must engage staff in practising manual procedures without appearing to undermine confidence in technology.
Saturday’s disruption shows that resilience is not about perfection. It is about agility, communication and preparation. Passengers tolerated long queues and manual processes because they understood the scale of the problem. They were less forgiving about the lack of information, inconsistent updates and poor support for vulnerable travellers. For IT leaders, the lesson is that communication strategies are as important as technical fixes.
The most useful way to read the events at Heathrow is as a rehearsal. Disruption arrived suddenly, crossed borders in minutes and turned a narrow technical problem into a full service challenge. The correct response is not to look for a single tool that fixes everything. The answer is a set of layers that degrade gracefully when something fails.
Start with an operational map that shows the systems that truly run the service. Keep a living inventory of the platforms that sit inside the end to end passenger journey. Include airline reservation and departure control, identity and security screening, baggage sortation and the shared use platforms at desks and gates. Set out the owners, the hosting locations, the failover paths, the data flows and the change authority. Keep it concise and written in plain language so that a duty manager can act on it in the middle of the night.
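That map need not live in specialist tooling. A structured file that a duty manager can read, and a script can query, is enough to start; the entries below are illustrative:

```python
# A minimal sketch of a living operational map; entries are illustrative.
INVENTORY = [
    {"system": "shared check-in platform", "owner": "external supplier",
     "hosting": "supplier cloud", "failover": "manual check-in, rehearsed",
     "change_authority": "IT duty manager"},
    {"system": "baggage sortation", "owner": "airport operations",
     "hosting": "on site", "failover": None,
     "change_authority": "head of baggage operations"},
]

def failover_gaps(inventory: list) -> list:
    """Systems with no documented failover path: fix these first."""
    return [s["system"] for s in inventory if not s["failover"]]

print(failover_gaps(INVENTORY))  # ['baggage sortation']
```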
Assume supplier failure and plan for it. Where the model allows, create a second route for critical functions such as check in and boarding. The second route can be a parallel provider, a local instance that can be isolated from the network, or a simple but documented manual mode that has been rehearsed. The goal is to keep passengers flowing and staff productive while the primary system is restored.
Separate what must never fail from what can pause. Segment networks so that check in workstations, kiosks, boarding gates and baggage control are on well governed segments with strict rules about which systems may talk to which. Supplier access should be just in time, time bound and recorded. Administrative accounts should use strong physical tokens. Remote management should be possible when needed and impossible when not.
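The rules about which systems may talk to which are most useful when written down as data, so that firewall changes and audits can be checked against the same source of truth. A sketch with hypothetical segment names:

```python
# Hypothetical allowlist of permitted flows between network segments.
ALLOWED_FLOWS = {
    ("checkin_workstation", "departure_control"),
    ("boarding_gate", "departure_control"),
    ("baggage_control", "sortation_plc"),
}

def audit(observed: set) -> set:
    """Return observed flows that no rule permits."""
    return observed - ALLOWED_FLOWS

seen = {("checkin_workstation", "departure_control"),
        ("boarding_gate", "baggage_control")}  # the second should alarm
print(audit(seen))
```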
Build a way to run without the network for a limited period. Airlines can maintain offline passenger lists that refresh frequently, issue boarding documents from local caches and use mobile scanners that sync when links return. Airports can print pre numbered bag tags and provide a simple path to reconcile tags with flights once the systems are back. These are not elegant steps. They shorten queues and prevent missed connections, which is what matters during an incident.
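The pattern behind these workarounds is a local spool that keeps accepting work while the link is down and replays it when connectivity returns. A minimal sketch, assuming a hypothetical collection endpoint:

```python
import json, time, urllib.request
from pathlib import Path

SPOOL = Path("boarding_spool.jsonl")  # local file that survives the outage

def record_scan(passenger_ref: str, flight: str) -> None:
    """Gates keep scanning even while the departure control link is down."""
    event = {"passenger": passenger_ref, "flight": flight, "ts": time.time()}
    with SPOOL.open("a") as f:
        f.write(json.dumps(event) + "\n")

def replay(endpoint: str) -> int:
    """Push spooled scans once connectivity returns; keep any that fail."""
    if not SPOOL.exists():
        return 0
    sent, kept = 0, []
    for line in SPOOL.read_text().splitlines():
        try:
            req = urllib.request.Request(endpoint, data=line.encode(),
                                         headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req, timeout=5)
            sent += 1
        except OSError:
            kept.append(line)
    SPOOL.write_text("".join(k + "\n" for k in kept))
    return sent
```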
Treat restoration as a discipline. Keep golden images for check in workstations and kiosks. Keep known good configurations in escrow. Practise clean rebuilds of the platforms that matter. Establish clear recovery time and recovery point objectives for the systems that most affect the passenger journey. Measure performance against those targets during exercises and live incidents.
Rehearse together, not in silos. Run tabletops that bring airports, airlines, ground handlers, police, border officers and the supplier into the same room. Run at least one live exercise each year that turns a terminal to manual for a short window during a quiet period. Measure the time to degrade, the time to restore and the time to communicate. Reward teams for finding failure modes rather than hiding them.
Fix the contract before you need it. Supplier agreements should state how often failover will be tested, which logs will be retained, how escalation will work through the night and how configuration will be handed back if the relationship ends. Contracts should include security obligations that match the sensitivity of the service. They should contain practical service credits that reflect the real cost of disruption on a per terminal, per hour basis.
Communicate as you would in a safety incident. Create plain language templates for airline agents, social posts and recorded announcements. Maintain a single public status page that can be updated by an authorised manager from a handheld device. Explain what has happened, what passengers should do and when the next update will arrive. Provide specific support for vulnerable travellers. The quality of information is as important as the speed of restoration.
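Pre-agreed templates take drafting out of the worst moment. A sketch of a status update generator that always commits to a next update time; the wording is illustrative, not a communications policy:

```python
from datetime import datetime, timedelta

def status_update(summary: str, passenger_action: str,
                  minutes_to_next: int = 30) -> str:
    """Fill a pre-agreed template so no one drafts copy under pressure."""
    now = datetime.now()
    nxt = now + timedelta(minutes=minutes_to_next)
    return (f"{now:%H:%M} update: {summary} "
            f"Passengers should {passenger_action}. "
            f"Next update by {nxt:%H:%M}.")

print(status_update(
    "Check-in is running in manual mode; most flights continue to operate.",
    "check flight status with their airline before travelling"))
```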
Design for forensic readiness and privacy. Keep logs that will allow investigators to see what happened without shutting the airport for days. Collect only the personal data that is needed for passenger processing, retain it only as long as required, and segment it from operational metadata so that a disruption to one store does not create a wider privacy problem. Prepare the material you will need if you must notify a regulator and rehearse the approval path for that notification.
Use this as the push to make resilience a real line in the budget. Create a permanent resilience programme that reports to the executive and tie incentives to measured improvements in recovery time, manual capacity and cross provider failover. Publish a short, honest post incident review within two weeks of any material outage. The public has a long memory for queues and an even longer memory for organisations that evade responsibility.
For IT leaders outside aviation, apply the same lens to your own dependency stack. A retailer that cannot take payment, a hospital that cannot move patients from assessment to ward, a logistics business that cannot allocate drivers. All depend on shared platforms and third party providers. The steps above read the same even if the acronyms change.
As of Monday 22 September, Heathrow was working to restore normal service. Most flights had operated, though many were delayed. Brussels faced the largest impact with wholesale cancellations into the week. Collins Aerospace said it was in the final stages of pushing secure updates for MUSE. Investigations into the source of the attack will take time, and attribution may not be straightforward. Passengers will remember the hours lost, the connections missed and the frustration endured.
For IT leaders, the memory should be longer lasting. This was not just a day of delays. It was a demonstration of how a single cyber incident can ripple across borders, strand travellers and expose weaknesses in systems we take for granted. The next disruption could be bigger. The next one might not be recoverable with manual check ins and paper tags.
The Heathrow disruption should not be dismissed as an unfortunate glitch. It was a glimpse into the vulnerabilities of a sector, and by extension a society, that depends on digital infrastructure as much as on concrete runways. For IT leaders, the imperative is clear. Cyber resilience must be treated not as a technical problem to be solved, but as a cultural principle to be lived.
What is your take? Was Saturday’s disruption an isolated incident, or a sign that aviation’s digital dependencies are now too brittle to ignore?
2025-09-14
With Windows 10 support ending in October 2025, UK IT leaders face difficult choices over budgets, security and user readiness. The clock is almost out, and hesitation equals risk.
Image credit: Created for TheCIO.uk
Windows 7 showed the cost of clinging to an operating system beyond its supported life. Unpatched machines fuelled the spread of WannaCry in 2017, crippling NHS services and exposing how dangerous legacy technology can be. Now, history risks repeating itself.
On 14 October 2025, Microsoft will end support for Windows 10. After almost a decade as the backbone of enterprise IT, the platform will fall silent. No more patches. No more updates. No more protection. Microsoft Support confirms the cut-off.
The numbers are stark. As of July 2025, Windows 10 still runs on around 43 per cent of desktops worldwide, according to StatCounter Global Stats. Despite Windows 11 being available for four years, industry research in late 2024 found that over two-thirds of enterprises were still relying on Windows 10 (IT Brew). Hardware restrictions, legacy applications and user familiarity have slowed adoption of its successor.
This means that in the UK, thousands of organisations are about to find themselves with estates of unsupported machines. Some of those devices cannot even run Windows 11 due to Microsoft’s hardware requirements. Others are tied to applications that remain untested on the new platform. For IT leaders, the challenge is not just technical. It is operational and strategic.
The most pressing concern is security. Once support ends, every new vulnerability will remain open, permanently. Attackers know this. Unsupported systems are easy targets for ransomware and phishing-led intrusions.
The NHS learned this the hard way. WannaCry exploited outdated Windows systems in 2017 and forced operations to be cancelled across the country. With Windows 10 still deeply embedded across enterprises, the risks this time are even greater.
Some organisations may look to Microsoft’s Extended Security Updates (ESU) programme, which offers up to three additional years of patches. But ESU is not a strategy. It is expensive, the price increases each year, and it merely buys time rather than solving the problem (Microsoft ESU details).
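The arithmetic is worth putting in front of a board. A sketch assuming the widely reported commercial list price of roughly 61 US dollars per device in the first year, doubling annually; actual pricing varies by agreement and volume:

```python
def esu_total(devices: int, year_one_price: float = 61.0, years: int = 3) -> float:
    """Cumulative ESU spend, assuming the per-device price doubles annually."""
    total, price = 0.0, year_one_price
    for _ in range(years):
        total += devices * price
        price *= 2
    return total

# 5,000 devices: 61 + 122 + 244 = 427 dollars each, roughly 2.1m in total.
print(f"${esu_total(5_000):,.0f}")
```

Three years of ESU on a mid-sized estate can approach the cost of the migration it postpones, which is the point.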
This is no longer a technology upgrade to be handled quietly by IT teams. It is a board-level decision. Unsupported operating systems represent not just a vulnerability but a failure of governance. Regulators and insurers will not look kindly on breaches caused by systems that were knowingly left exposed after years of warning.
Yet the budget challenge is real. Replacing functioning machines looks like cost, not investment. Many CIOs will find themselves arguing against sceptical boards who see no immediate benefit in refreshing thousands of desktops. But delaying carries greater risks: data loss, fines and reputational damage that far outweigh the price of migration.
Technology is only part of the story. Employees are comfortable with Windows 10. They know how it works, and they trust its stability. Windows 11, with its redesigned interface, will not be universally welcomed. Poorly planned migrations will trigger frustration, escalate service desk demand and erode trust in IT.
Communication and preparation will be critical. Pilot programmes, clear messaging and targeted training can make the transition smoother. Without them, the project risks being remembered not as a security safeguard but as an unnecessary disruption.
There is also a wider conversation. The push to move from Windows 10 is already fuelling debate about e-waste and the environmental cost of forced hardware refreshes. Machines that are otherwise operational may be discarded simply because they cannot meet Windows 11’s requirements. For enterprises, this adds an ethical and sustainability dimension to what might otherwise be seen as a technical decision.
The end of Windows 10 is not just another software milestone. It is a test of readiness, governance and leadership. Organisations that act now — modernising hardware, validating applications and preparing their workforce — will emerge stronger. Those that delay will be gambling against history and inviting the same kind of disruption that once paralysed the NHS.
The last month before end of support is not a grace period. It is the final call. By the time 14 October arrives, enterprises must already be in motion. Anything less is a risk that no board should accept.
What’s your take? With just weeks left on the clock, is your organisation ready to move beyond Windows 10?
2025-09-04
Jaguar Land Rover’s recent IT outage exposed the fragility of modern automotive manufacturing. For IT leaders across all industries, it underlines the urgency of building true cyber resilience that bridges IT, operations and supply chains.
Image credit: Created for TheCIO.uk by ChatGPT
When production at Jaguar Land Rover ground to a halt following a cyber incident, the immediate headlines focused on the cars that did not roll off the line. Yet for IT leaders, the deeper story lies in how an organisation of such scale and heritage can still find its operations disrupted by an unseen digital adversary. Manufacturing resilience has long been tested by supply chain delays, labour challenges and economic headwinds, but cyber risk now sits at the top of that list. The JLR disruption is far from an isolated case. It represents a wider truth: almost every sector, from automotive to healthcare to finance, is grappling with a rising frequency of cyber incidents that threaten the very continuity of business.
Over the past decade, the frequency and scale of attacks have surged. Ransomware groups no longer limit themselves to banks or tech firms. They target industries with real operational dependencies, knowing that downtime translates quickly into financial loss. For IT leaders, the lesson is stark. Cyber resilience is not an optional technical upgrade. It is a business survival strategy.
This article examines the lessons of JLR’s recent disruption, placing it in the wider context of global cyber risk, and setting out what IT leaders must prioritise if they are to protect not just systems but the continuity of their organisations.
Details remain under investigation, but reports confirm that Jaguar Land Rover experienced an IT systems outage that directly impacted production. Assembly plants saw operations slowed or suspended. Suppliers were left waiting on instructions. Dealers and customers faced uncertainty over delivery schedules.
For an automotive giant, the costs are eye watering. Industry analysts estimate that every lost hour on the production line in a major car plant costs millions in foregone revenue. With JLR producing close to half a million vehicles a year, even a short stoppage ripples outward. Suppliers lose revenue. Logistics networks face congestion. Customers may turn to competitors.
The incident is part of a growing pattern. In recent years, Toyota, Renault, Honda and other carmakers have suffered similar disruptions linked to cyber issues. The lesson is not that any one company has weaker defences, but that the operating model of modern automotive manufacturing is highly vulnerable.
To understand why attackers are increasingly drawn to manufacturers, one must look at the structural nature of the industry. Production lines are driven by operational technology systems, often decades old, designed for uptime not for security. These industrial control systems are often connected to corporate IT networks to enable efficiency, monitoring and predictive maintenance. The result is a fragile bridge between two very different worlds.
Attackers know that breaching IT can give them a path into OT, and once inside, the pressure on a manufacturer to restore operations is immense. Unlike a law firm or a retailer, where staff can revert to manual processes for days or weeks, a car plant without functioning systems is silent. Every hour lost brings not only costs but reputational damage.
Beyond technology, there is the complexity of supply chains. Automotive production is famously just-in-time. Components arrive at the line hours before being assembled into vehicles. A cyber incident that disrupts supplier communications or logistics systems can paralyse production as effectively as a ransomware lockout. The interdependence means that a weakness in one vendor can cascade across the entire ecosystem.
While the JLR case draws attention because of its scale, the trend is visible everywhere. Hospitals diverted patients after ransomware crippled electronic health records. Shipping companies have had vessels stranded in port due to cyber attacks on logistics systems. Food producers have lost entire harvests in storage because cooling systems were locked down.
The latest data from insurance firms and national security agencies confirms the direction. Attack frequency is up year on year. Ransom demands have escalated. The industrial sector has become a primary target because criminals know that executives cannot afford downtime.
IT leaders across industries must internalise this reality. Cyber incidents are not abstract risks confined to the IT department. They are operational threats that can close plants, halt services and cost lives. The rising tide means the question is not if an organisation will be tested but when.
The JLR incident illustrates that resilience must be led from the top, and IT leaders are central to that task. The role is no longer about keeping networks patched or upgrading hardware. It is about translating cyber risk into operational risk, and ensuring boards understand the stakes.
Resilience demands investment in three core areas.
First, Information Technology (IT) and Operational Technology (OT) integration must be secured. This means proper network segmentation, monitoring of gateways, and a clear understanding of what connects to what. Too often, organisations do not have an accurate map of their own dependencies. Without that, response is guesswork.
Second, incident response must be broadened. Traditional playbooks assume a breach of data or loss of applications. In manufacturing, the incident response must also include steps to safely shut down production, to recover machines, and to bring plants back online. This requires rehearsal with operations teams, not just IT staff; a sketch of such a runbook appears after the third area below.
Third, supply chain security must be treated as a first-order concern. Standards such as ISO 27001 and TISAX in the automotive sector offer frameworks, but compliance alone is not enough. Continuous monitoring of vendors, contractual obligations for security, and real-time communication channels are needed. The weakest supplier can be the vector for attack.
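As noted under the second area, playbooks are easier to rehearse when they exist as ordered steps with named owners rather than as prose. A minimal sketch; the steps, owners and sequencing are illustrative, not any manufacturer’s actual procedures:

```python
# A minimal sketch of a shutdown-and-recovery runbook as ordered data;
# the steps, owners and sequencing are illustrative.
RUNBOOK = [
    ("isolate IT/OT gateways", "network on-call"),
    ("place lines into a safe hold state", "plant shift manager"),
    ("preserve logs from affected segments", "security on-call"),
    ("notify suppliers of paused ordering", "supply chain lead"),
    ("restore lines from known good images", "OT engineering"),
]

def next_action(done: set) -> str:
    """Name the next step and its owner; rehearsals walk this in order."""
    for step, owner in RUNBOOK:
        if step not in done:
            return f"NEXT: {step} -> {owner}"
    return "Runbook complete; begin the post-incident review."

print(next_action({"isolate IT/OT gateways"}))
```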
Technology alone will not secure the line. People are both the first line of defence and the key to recovery. Social engineering remains a favoured route for attackers. The recent Salesforce breach affecting Gmail users demonstrated how convincing phishing can bypass technical barriers. In a manufacturing context, a single compromised login could give an attacker access to plant scheduling or supplier ordering systems.
Training must therefore be practical and continuous. Staff at every level need to know what suspicious activity looks like and how to escalate it quickly. At the same time, boards and senior executives must be rehearsed in crisis communication. The hours following a breach are when trust can be lost. Clear, confident communication, backed by evidence of preparation, can prevent panic among suppliers and customers.
It is tempting to see cyber resilience as a cost centre, but the JLR case shows the opposite. The real cost lies in downtime. Industry studies suggest that in automotive, each lost hour can cost up to £10 million in revenue. That figure does not include reputational harm or the cost of restoring systems. Nor does it capture the opportunity cost of delayed product launches or missed seasonal demand.
For IT leaders seeking board investment, these numbers are compelling. The business case for resilience is not theoretical. It is grounded in hard financial impact. Every pound spent on resilience reduces the risk of losses that are orders of magnitude larger.
Although this analysis focuses on automotive, the lessons are transferable. Hospitals, airlines, logistics providers, utilities and even education institutions rely on continuous operation. In each case, IT leaders must ask what would happen if systems were offline for a day or more.
The uncomfortable truth is that many organisations still have no workable manual fallback. They assume resilience but have never tested it. The JLR incident should prompt every IT leader to run a tabletop exercise with their operations colleagues and ask the hard questions. If an attack hit tomorrow, how would we continue to deliver our core service? Who would we call? What systems could we do without?
Too many organisations still treat cyber as a compliance box to tick. Policies are written, certifications are gained, and the topic is then sidelined. Real resilience requires culture. It requires boards to see cyber not as an IT expense but as a strategic necessity.
Culture is built when IT leaders demonstrate how resilience supports growth. A manufacturer that can demonstrate robust resilience is more attractive to suppliers, partners and customers. It can win contracts that rivals lose. It can expand into new markets with confidence. In that sense, cyber resilience is not defensive but enabling.
The Jaguar Land Rover disruption is not a story of one company’s weakness. It is a wake-up call for every IT leader in every sector. The frequency of attacks is rising. The cost of downtime is climbing. The complexity of supply chains and the fragility of legacy systems make resilience harder than ever.
But resilience is achievable. With investment in secure IT-OT integration, comprehensive incident response, and supply chain vigilance, organisations can withstand attacks and recover quickly. With cultural change, they can turn resilience into a competitive advantage.
The line at JLR stopped. For other organisations, the question is whether they will act before the same happens to them. For IT leaders, the responsibility is clear. Resilience is no longer just about protecting data. It is about protecting the business itself.
What’s your take? How should IT leaders balance the need for resilience with the pressure to deliver innovation and cost savings? Let’s share the good, the bad and the messy middle.
2025-08-29
A newly revealed court filing shows the UK government sought sweeping access to Apple customer data, including non-UK users, through a Technical Capability Notice. The move raises serious privacy, security and accountability questions.
Image credit: Created for TheCIO.uk by ChatGPT
The UK government has quietly stepped into treacherous territory by seeking expansive access to Apple customer data including users outside its jurisdiction. A newly revealed court filing and expert commentary reveal a saga that has captured attention across the globe. This is more than a domestic push; it is a powerful test of the balance between national security and personal privacy, of legal secrecy and public accountability.
On 29 August 2025 the Financial Times disclosed that the UK Home Office issued a Technical Capability Notice under the Investigatory Powers Act that extended beyond UK borders. That revelation landed like a thunderclap. It confirmed that the notice demanded access to Apple’s standard iCloud service, not merely its optional Advanced Data Protection feature that offers end-to-end encryption. Moreover the order appeared to oblige Apple to provide data drawn from any user of iCloud globally including messaging content and saved passwords.
The Investigatory Powers Act, known colloquially as the “Snoopers’ Charter”, grants Britain sweeping surveillance powers. Section 253, invoked here, permits the issuance of Technical Capability Notices that compel companies to adjust their products or infrastructure to enable government access.
This is not Apple’s first run-in with the Home Office. Reports indicate that earlier in 2025 the Home Office moved to issue a TCN specifically targeting Apple’s Advanced Data Protection system. That demand prompted Apple to withdraw the option altogether for UK users in February.
Until the FT court filing, much about the precise scope of the TCN remained shrouded in secrecy. Apple cannot publicly discuss the notice under the secrecy provisions of the Act. The Investigatory Powers Tribunal accordingly treated key facts in the case as assumed for the purposes of hearing the challenge, allowing the case to proceed without confirming or denying sensitive details.
Privacy advocates did not wait for leaks to act. In March 2025, Liberty and Privacy International teamed up with two individuals to challenge the TCN itself and the closed nature of the legal hearing. They demanded that the hearing be opened to public scrutiny and that the tribunal refrain from operating under a cover of secrecy.
Their plea found traction. By 7 April 2025 the Home Office had lost its bid to suppress even the bare details of the Apple case from public view. Judges ruled that the identity of the parties and basic facts could be disclosed, rejecting the argument that such disclosure would harm national security.
Next, the tribunal set a case management order for a seven-day hearing in early 2026 to proceed largely in public under assumed facts. Other parties, including WhatsApp, moved to intervene.
The global dimensions of the notice sparked explosive reaction abroad. Last week, U.S. Director of National Intelligence Tulsi Gabbard confirmed that, following extensive consultations including with President Trump and Vice-President JD Vance, the UK had decided to withdraw its demand for an encryption back door into Apple systems.
The decision was reported widely in the U.S. press as a triumph for civilian rights and transatlantic diplomacy. The Washington Post noted that the UK had pulled back in the face of criticism over civil liberties and concerns under the CLOUD Act. Privacy advocates, while welcoming the reversal, emphasised that the underlying legal authority to compel breaches of encryption remains intact.
Digital rights advocates are not celebrating just yet. The Investigatory Powers Act and associated regulations still allow for broad demands to be issued. Experts caution that without legislative reform the door remains ajar for government intrusion into encrypted systems.
Moreover the mechanics of the legal challenge rest on assumed facts rather than full disclosure. That raises concerns that even if Apple prevails the specific details may never fully emerge.
This episode resonates on multiple levels. It exposes a profound tension between government efforts to strengthen national security and the fundamental rights to privacy and encryption. Apple’s decision to cut ADP in the UK reinforced its commitment to security, but still left the possibility of compelled back doors looming for ordinary iCloud users globally.
It also underscores how secrecy provisions in law can be weaponised to shield state activity from democratic oversight. That shadowy axis of security legislation runs counter to the principles of open justice.
Internationally it triggered reaction. U.S. officials, civil society, and media framed this as a potential transgression on American citizens’ rights. The prospect of state-mandated vulnerabilities in encryption alarmed even moderate figures in Washington. Gabbard and others warned that compliance with such demands could violate US law and undermine both constitutional rights and trust in technology.
Apple’s case is set to be heard in early 2026. The tribunal will be working with “assumed facts” designed to protect official secrets while allowing public debate. Judges have set a timeline: Apple and the Home Office must agree the scope of those facts by 1 November 2025.
Civil society and industry observers will be paying close attention. If Apple succeeds, there may be precedent to limit future notices. If not, the legal threshold for government-mandated access to encrypted data might be lower than many think.
In parallel, experts are calling for legislative change. Without revision, the IPA continues to expose all users, inside the UK and overseas, to potential forced weakening of encryption.
This confrontation between Apple and the UK government may represent a turning point. It highlights three enduring truths.
First, in the face of official secrecy and sweeping laws, sunlight remains the best disinfectant. Transparency and open judicial scrutiny are essential to preserving essential liberties.
Second, encryption is not a glitch or luxury. It is a cornerstone of digital trust, privacy, and security. Undermining it undercuts not just individual safety, but the integrity of digital economies and democratic life.
Third, surveillance capabilities must always be balanced against civil liberties. Without firm guardrails and democratic visibility, the law becomes a lever for unchecked intrusion.
As Apple and human rights groups push back, they do more than defend a corporation; they defend the principle that some doors must remain locked, from governments as well as criminals.
The hearing in 2026 may deliver clarity. Until then the world watches as the legal frameworks and fundamental values of privacy, security, and surveillance collide in open court.
What’s your take? Should governments ever compel back-door access to encrypted data, or is this a line that must never be crossed?
2025-08-28
A cyber attack on APCS and its software supplier has left thousands of people vulnerable to identity theft. With sensitive data exposed across sectors, the breach highlights the fragility of supply chains, fragmented accountability, and the collapse of trust in systems designed to safeguard.
Image credit: Created for TheCIO.uk by ChatGPT
A cyber attack on the software system used by Access Personal Checking Services (APCS) has placed thousands at risk of identity theft. The gravity lies not only in the type of data exposed, but in the purpose of the service itself. Background checks through the Disclosure and Barring Service (DBS) exist to protect children and vulnerable adults. To find that the systems designed to safeguard instead became a liability raises profound questions about governance, resilience and trust.
APCS is the UK’s self-described fastest DBS checking service (APCS official site), working with more than nineteen thousand organisations across healthcare, education, charities, finance and religious institutions. While much of the early reporting focused on dioceses, the exposure stretches far wider. This was not a niche church systems failure. It was a supply chain breach affecting an umbrella body relied upon across multiple regulated industries.
The breach originated with APCS’s external software developer, Intradev, based in Hull. Certified under the UK National Cyber Security Centre’s Cyber Essentials programme, Intradev detected unauthorised malicious activity in its systems on 4 August 2025. Managing director Steve Cheetham described it as a “significant IT incident”, without confirming whether ransomware was involved.
Containment measures were put in place and the incident was reported to the Information Commissioner’s Office (ICO) and Action Fraud. Crucially, APCS’s own production systems were not directly compromised, but the developer’s environment appears to have contained sensitive records. This raises questions about segmentation between development, test and live systems — and whether principles such as least privilege and encryption were adequately enforced.
APCS has stated that it does not hold card details or criminal conviction data. But the personal identifiers at risk are still highly sensitive. Records include names, dates and places of birth, addresses, gender, National Insurance numbers, passport details and driving licence numbers.
Winchester Diocese clarified that compromised data consisted of text-based fields rather than scanned images (Winchester update), a detail that may reduce the risk of document forgery but does nothing to mitigate the fraud potential of raw identifiers.
The confirmed breach window stretches from December 2024 to May 2025, though Worcester indicated exposure may have started as early as November 2024 (Worcester statement). That represents months of DBS applications potentially exposed, and even if only a fraction of records were taken, the scale is significant.
Further reporting has shown that the breach extends beyond church use into education. Schools Week highlighted that school staff records stored in single central record systems were potentially exposed (Schools Week), broadening the scope of risk into the education sector. Legal guidance for schools and data controllers quickly followed, including recommendations from Browne Jacobson on regulatory reporting and safeguarding obligations (Browne Jacobson).
Once notified, APCS alerted its client organisations — who are themselves the data controllers under UK GDPR. Here the fragmentation became visible. Some institutions urged affected individuals to sign up for identity monitoring services. Others paused all DBS checks through APCS. A few insisted parishes or branches handle communication independently.
For volunteers and employees, the result was confusion. Should they expect direct contact from APCS, their employer, or a third-party service? For IT leaders, the lesson is stark: inconsistent messaging compounds harm. Crisis communication must be centralised, clear and coordinated.
From a compliance perspective, reports were filed to the ICO, Action Fraud and in some cases the Charity Commission. That demonstrates baseline regulatory diligence, but the divergence in organisational responses may invite further scrutiny. The ICO has repeatedly signalled that accountability cannot be outsourced — even if the immediate failure is a supplier’s.
The breach illustrates a familiar pattern of technical fragility across software supply chains. Developers sometimes use live production data in test environments without anonymisation, creating unnecessary exposure if those environments are compromised. Segmentation between development and production can also be weak, allowing intruders to pivot across systems. References to “text-based” data point to storage choices that may not have included encryption at rest. And where vendors retain broad access privileges without granular controls, a compromise of one environment can cascade into multiple clients.
These are not unique failings of APCS or Intradev. They are endemic across supplier ecosystems where speed and cost efficiency are prioritised over resilience.
For individuals, the risks are direct. With National Insurance numbers, passport and driving licence details, criminals can attempt impersonation, credit fraud or targeted phishing. Services such as Experian’s Identity Plus, offered to affected individuals by some dioceses (Southwark statement), provide a layer of protection, but only for a limited period. The shelf life of stolen data is long, and fraud attempts can surface years after the monitoring stops.
For organisations, the reputational damage can be severe. APCS marketed speed as its differentiator. Yet when “fastest” becomes synonymous with weakest, the long-term cost to trust can outweigh any operational benefit. For clients in healthcare, finance or education, continuing to rely on a provider now publicly associated with a breach carries its own risks.
The APCS breach underscores why supplier oversight cannot be reduced to certification logos. For IT leaders and boards, resilience depends on more than internal controls. It requires interrogation of suppliers’ data handling practices, segregation of environments, use of anonymised test data, encryption at rest and in transit, and clear contractual obligations around incident response. Leaders should insist on verifiable evidence, not marketing claims, and demand assurance through regular independent testing and reporting.
“If your vendors fail, you fail — in the eyes of regulators, the public and those you serve. Supply chain resilience is not optional. It is the frontline of trust.”
Certification such as Cyber Essentials signals a baseline commitment, but it is not a guarantee of resilience. A logo on a tender document is no substitute for visibility into how a vendor actually manages and protects sensitive data.
This breach sits within a wider pattern of institutional exposure. The British Library ransomware attack in 2023 saw 600GB of data leaked online. The Legal Aid Agency incident in early 2025 exposed millions of records. Each case involved trusted institutions where sensitive information is central to public service. The APCS breach adds a further dimension by showing how attackers can target the supply chain to reach data indirectly.
This was not just a data breach. It was a breach of confidence in the very systems intended to protect. When background checks become an attack surface, safeguarding collapses into liability.
For IT leaders, the lesson is clear. Resilience depends on the strength of every link in the supply chain. If your vendors fail, you fail — in the eyes of regulators, the public and those you serve. Operational efficiency must never come at the cost of resilience.
The APCS breach is a frontline reminder that data protection is not an IT back-office issue. It is a leadership responsibility, tied to safeguarding, trust and legitimacy. Unless supplier resilience is treated with the same seriousness as in-house controls, incidents like this will continue to erode confidence in the institutions people rely on most.
In the end, the question every IT leader must ask is simple: if your supplier was breached tomorrow, would you still be trusted the day after?
What’s your take? Do you believe organisations are taking third-party risk seriously enough, or will incidents like this keep repeating?
Let’s share the good, the bad and the messy middle of managing trust in our supply chains.
2025-08-28
A four-year cybercrime campaign targeting Mexican banks reveals just how resilient, regional and relevant financially motivated threat actors remain – and why the UK financial sector cannot treat it as someone else’s problem.
Image credit: Created for TheCIO.uk
For almost four years, a small, disciplined group of criminals has taken aim at Mexican banks, retailers and public bodies, exfiltrating credentials and emptying accounts. Researchers who finally stitched the evidence together call the gang Greedy Sponge, a name borrowed from a SpongeBob meme once spotted on their command-and-control server.
The criminals’ latest campaign, revealed this week, shows a sharp uptick in capability. Instead of the vanilla remote-access tools that first drew attention in 2021, Greedy Sponge now delivers a heavily customised variant of AllaKore RAT alongside the multi-platform proxy malware SystemBC. Together, the pair gives attackers persistent footholds, covert tunnels and a menu of plug-ins to siphon money at will.
Greedy Sponge may feel distant, confined to Mexican institutions. Yet its tools, patience and operational discipline send a warning that extends far beyond Latin America. For British financial leaders, the lesson is blunt: geography is no longer a firewall.
Initial access still begins with people. Victims receive zipped installers purporting to be routine software updates. Inside sits a legitimate Chrome proxy executable and a trojanised Microsoft Installer.
Run the file and a .NET downloader named Gadget.exe quietly reaches out to Hostwinds infrastructure in Dallas, pulls down the modified AllaKore payload and moves it into place. The loader even cleans its own tracks with a PowerShell script so nothing obvious remains in %APPDATA%. It is careful, boring, and effective — the kind of intrusion that does not light up a SIEM dashboard until money is already moving.
Greedy Sponge was once content to geofence victims client-side, checking IP addresses before releasing the final stage. The group has now shifted that logic server-side, a subtle change that blinds many sandboxes and threat hunters.
By handing the decision to the server, the criminals limit forensic artefacts and make it harder for defenders outside Mexico to replicate the kill chain. The network map is small but resilient: phishing domains, RAT control servers and SystemBC proxies all sit in neat clusters, registered through offshore companies and hosted in the same American data centre.
It is a reminder that scale is not always the objective. A tight, disciplined infrastructure can evade takedowns and stay online far longer than sprawling botnets.
AllaKore is open-source, written in Delphi and first surfaced back in 2015. Open-source malware often ends up discarded or replaced, yet Greedy Sponge has treated it as a living project.
Their fork now grabs browser tokens, one-time passwords and banking session cookies, wrapping the loot in structured strings for easy ingestion at the back end. Once entrenched, the RAT fetches fresh copies from z1.txt and drops secondary payloads via SystemBC proxies. The operation looks methodical, suggesting a tiered workforce: entry-level operators handle phishing while more skilled colleagues sift stolen data and run fraud at scale.
In cyber crime, longevity is often underestimated. What defenders dismiss as “old” can still bleed institutions dry when packaged with new tricks.
Three traits stand out:
Operational patience. Four years is an eternity in cyber crime circles. This crew has not chased quick ransomware payouts; it has refined tooling until the infection chain is almost mundane.
Regional intimacy. Spanish strings inside binaries, lures themed on the Mexican Social Security Institute and netflow showing remote desktop traffic from Mexican IPs point to local knowledge and comfort operating near home turf.
Incremental upgrades. Moving geofencing server-side, bolting in SystemBC, adding UAC bypasses via CMSTP — each tweak raises the bar without triggering a brand-new hunting signature.
This is not smash-and-grab. It is slow cooking, with every change carefully tasted before it is served to victims.
Greedy Sponge is not the first financially motivated crew to grow from local to global impact. Carbanak began with targeted intrusions against Eastern European banks in 2013 before spilling into Western institutions, with estimated thefts exceeding one billion US dollars. TrickBot evolved from a small banking trojan into a modular platform rented out to ransomware gangs worldwide.
Even Lazarus, the North Korean-linked group behind the Bangladesh Bank heist in 2016, showed how a crime born of local compromise could ripple across the global financial system.
These precedents underline the risk: tools refined in Mexico today can be franchised or sold into Europe tomorrow.
The International Monetary Fund has linked nearly one-fifth of total financial losses worldwide to cyber incidents. In 2024 alone, destructive attacks against banks rose thirteen per cent, according to multiple threat intelligence reports.
Financial crime is a marketplace. Malware, access and stolen credentials circulate like commodities. Greedy Sponge may have begun in Mexico, but its harvest can feed fraud operations anywhere.
The geography of compromise no longer dictates the geography of loss.
British lenders have weathered recent storms better than many peers. Freedom of Information data shows the FCA logged 53 per cent fewer cyber notifications from regulated firms in 2024 than the year before, crediting tighter operational resilience rules for the fall.
Yet the same dataset confirms that vendor incidents and data-exfiltration events remain stubborn risks. Greedy Sponge’s knack for secondary infections and geofenced payloads speaks directly to that threat: if a UK supplier with operations in Latin America is compromised, credentials harvested abroad can still unlock systems in London.
A call centre in Monterrey, a development team in Guadalajara or a shared service hub in Mexico City can all act as stepping stones into the UK core banking estate.
Chaucer Group’s analysis of 2023 breaches put the number of UK citizens affected by attacks on financial services at more than twenty million, a rise of 143 per cent year-on-year. Those figures reflect an ecosystem in which stolen data moves fast.
A credential skimmed from a Mexican multinational with a London subsidiary is just as valid on a British banking portal. A cookie stolen from a contractor’s remote session can be replayed against an FCA-regulated payment switch.
The sponge analogy is apt. Quiet absorption in one region eventually drains customers half a world away.
Greedy Sponge reinforces a simple mantra: controls must travel with data, not with office locations.
If your firm operates call centres, development shops or outsourced back-office teams in Latin America, credential harvesting there becomes a direct threat to UK core banking. Zero-trust principles, privileged access management and mandatory hardware tokens are the modern seat belts.
They are the difference between a phish leading to an isolated workstation rebuild and an attacker replaying session cookies against the production payment switch.
Indicators tied to this campaign include the PowerShell filename file_deleter.ps1, the .NET user-agent string mimicking Internet Explorer 6 and the Hostwinds IP range 142.11.199.*.
Blocking those artefacts buys time, but reliance on static indicators of compromise is a losing race. The smarter route is behavioural: alert on unsigned MSI executions that spawn PowerShell, on any network request with the vintage MSIE 6 user-agent and on outbound connections to port 4404.
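Where process-creation telemetry such as Sysmon event ID 1 is already collected, the first of those behavioural signals can be prototyped in a few lines. The sketch below assumes a standard Sysmon deployment and is illustrative rather than production detection content; the user-agent and port 4404 signals belong in proxy or IDS rules rather than endpoint queries.

```powershell
# Illustrative hunt, assuming Sysmon process-creation logging (event ID 1).
# Flags PowerShell sessions spawned by the Windows Installer, one of the
# behaviours described above. Tune the volume limit to your estate.
$events = Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-Sysmon/Operational'
    Id      = 1
} -MaxEvents 5000

foreach ($event in $events) {
    $xml  = [xml]$event.ToXml()
    $data = @{}
    foreach ($field in $xml.Event.EventData.Data) { $data[$field.Name] = $field.'#text' }

    if ($data['ParentImage'] -like '*\msiexec.exe' -and
        $data['Image']       -like '*\powershell.exe') {
        Write-Warning "msiexec spawned PowerShell: $($data['CommandLine'])"
    }
}
```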
Criminals evolve fast. Behavioural signals evolve slower, and defenders can use that inertia to their advantage.
Every UK lender now embeds suppliers deep inside payments, analytics and customer service flows. A pre-production environment in Monterrey running on a contractor’s laptop can bridge, via VPN, into a London data centre.
Greedy Sponge already exploited that scenario domestically by moving laterally from retail to banking networks. The same tactic, exported, would let criminals bypass hardened internet perimeters and walk in through trusted third-party tunnels.
Controlling and segmenting supplier access is no longer a compliance hygiene task. It is a front-line defence.
The Bank of England and the FCA are finalising rules that label certain cloud and IT suppliers “critical”. Under the proposals, outages or compromises at those providers could trigger direct intervention by supervisors.
Boards tempted to treat geofenced Latin-American malware as someone else’s problem will find less room to hide. Regulators increasingly expect firms to model and test cross-border attack paths, just as they rehearse liquidity stress scenarios.
Ignoring regional campaigns is no longer an option when supervisors demand proof that attack paths have been mapped, tested and mitigated.
It is tempting to dismiss AllaKore and SystemBC as yesterday’s malware. Yet the persistence of such tools reveals uncomfortable truths. Old codebases offer reliability. Open-source means multiple groups can fork and improve them. And familiarity makes detection harder, as defenders may downgrade alerts on “known” malware families.
Greedy Sponge’s success with AllaKore is proof that novelty is overrated. Steady refinement often beats innovation in the criminal toolkit.
Defenders rarely need silver bullets. They need consistency. Small, boring controls applied daily matter more than headline-grabbing solutions.
Teach staff to doubt unexpected installers. Instrument networks to recognise odd user-agents. Enforce multi-factor authentication even on staging environments.
These steps are not glamorous, but neither is Greedy Sponge. Both attacker and defender win through relentless repetition.
Greedy Sponge did not invent zero-day exploits or novel encryption. They packaged known tools, tuned them carefully and taught staff to follow a script.
Defenders can mirror that discipline. Cyber resilience is rarely heroic; it is the accumulation of small steps taken every single day.
The sponge analogy holds. Slow, quiet absorption eventually drains the victim. The antidote is equally unglamorous: keep wringing out the risk before it saturates your estate.
2025-08-27
The discovery of PromptLock – the first AI-powered ransomware – signals a new era in cyber threats. By leveraging local large language models, this proof of concept marks a turning point in how ransomware can adapt, evade, and scale beyond traditional defences.
Image credit: Created for TheCIO.uk
In a development that reads like a page from tomorrow’s tech thriller yet remains very much rooted in today’s threat landscape, cybersecurity researchers have uncovered what appears to be the first instance of ransomware built with genuine AI capability. Dubbed PromptLock, this malware represents a new frontier in how attackers might weaponise artificial intelligence. Far from theoretical musings, PromptLock signals a tangible shift, with criminals crafting malware that not only encrypts and steals data but does so by leveraging local large language models to generate malicious code dynamically.
This breakthrough was reported by ESET researchers, who analysed malware samples uploaded to VirusTotal and determined that PromptLock uses a local AI model to drive its operations. Its discovery raises profound concerns about how quickly threat actors could employ AI to scale threat sophistication and evade detection.
PromptLock is written in Go and targets Windows, Linux, and macOS environments. What sets it apart is the integration of AI directly into its attack chain rather than relying on static payloads or precomposed scripts. The malware makes use of gpt-oss-20b, an open-weight large language model developed by OpenAI. By running the model locally via the Ollama API, ransomware architects avoid making outbound requests to commercial AI providers, effectively evading scrutiny and attribution.
The sequence of operations unfolds like this: inside the compromised system the malware triggers a local instance of gpt-oss-20b, supplying it with hard-coded prompts to produce Lua scripts. Those scripts perform a range of malicious activities: enumerating the file system, inspecting and exfiltrating files, and applying encryption using the NSA-developed SPECK 128-bit algorithm. In essence, the AI model composes payloads on the fly, swapping static code for responsive, bespoke instructions based on the environment it inhabits.
Strikingly, ESET also found that whilst PromptLock does contain code suggesting destructive capabilities, such as file deletion, those routines appear to be unfinished or inactive at this stage. That, combined with other contextual evidence, suggests that what we are seeing is likely a proof of concept, still under development rather than an actively deployed malicious tool.
Traditional ransomware relies on predefined code and behaviour. Analysts can trace signatures, predict threat patterns, or contain outbreaks using known indicators of compromise. PromptLock disrupts that model in two critical ways.
Firstly, it introduces non-determinism. Since AI models generate outputs that vary, even when given the same prompt, each execution of the malware could look different. This variability hampers signature-based detection. As one researcher explained, "indicators of compromise may vary from one execution to another," making defences far more complex.
Secondly, by processing AI locally, the malware obviates the need for external communication with AI service providers. That shields attackers from potential exposure and intrusion detection that might occur when connecting to cloud services.
Beyond its novelty, the very concept of malware adapting in real time to its environment, composing tailored commands based on local data, marks a new class of threat: one that combines adaptability with anonymity, speed and technical sophistication.
PromptLock arrives at a time when AI is already disrupting cyber offence and defence dynamics. Organisations, particularly in the UK, must anticipate the arrival of smarter, more flexible malware.
Endpoint defences need to monitor for anomalies such as unexpected executions of Lua, Go-based binaries and local AI processes. Behavioural analysis must evolve to detect unexpected contexts.
Network monitoring should flag suspicious tunnelling to local AI APIs, especially Ollama-like infrastructure or traffic patterns moving data from endpoints to internal AI servers; a simple host-level starting point is sketched below.
Threat intelligence frameworks must shift from relying solely on static signatures to context and behaviour. PromptLock variants may evade detection unless defences adapt to recognise AI-generated sequence patterns.
Policy enforcement needs updating. If organisations adopt AI agents for automation or analysis, they must ensure those agents operate in secure, compartmentalised environments. Without proper safeguards, such systems can be hijacked or turned inward.
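On the network monitoring point above, detection can start simply. The snippet below is a hedged, illustrative check of a single Windows endpoint for Ollama's documented default port (11434) and a running ollama process; beyond that port number, treat it as a sketch rather than a detection product.

```powershell
# Rough host-level check for a local LLM runtime. Ollama listens on
# 11434 by default; a hit is a prompt for investigation, not proof of abuse.
$listeners = Get-NetTCPConnection -State Listen -LocalPort 11434 -ErrorAction SilentlyContinue
foreach ($listener in $listeners) {
    $proc = Get-Process -Id $listener.OwningProcess -ErrorAction SilentlyContinue
    Write-Warning "Port 11434 is open: PID $($listener.OwningProcess) ($($proc.ProcessName))"
}

Get-Process -Name 'ollama' -ErrorAction SilentlyContinue |
    ForEach-Object { Write-Warning "Ollama process running: PID $($_.Id)" }
```

A hit is a conversation starter rather than an alarm: local LLM runtimes have many legitimate uses, so the value is in knowing where they run.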
In short, PromptLock is not just another malware; it is a harbinger. Security teams need to prepare for active AI agents as adversaries, not merely static code.
While PromptLock appears to be the first AI-powered ransomware detected in the wild or near the wild, it is not the only project in the space. Researchers had previously explored AI-guided ransomware in academic contexts.
For instance, RansomAI, a reinforcement-learning framework developed in mid-2023 and described in an arXiv paper (also covered by itnews.com.au), shows how ransomware could adapt its encryption behaviour to evade detection while maximising damage, though it was experimental and targeted hardware such as the Raspberry Pi.
Similarly, EGAN, a generative adversarial setup from May 2024, focused on producing ransomware mutations that evade modern antivirus solutions using AI-enhanced mutation strategies.
Though both are theoretical exercises, they underscore that the concept of “intelligent malware” is not science fiction; it is a subject of active research. PromptLock brings us closer to that unsettling reality.
Leading cybersecurity voices warn that PromptLock’s emergence is the tip of the iceberg. As one expert put it on X:
“We are in the earliest days of regular threat actors leveraging local / private AI. And we are unprepared”.
ESET themselves emphasised the significance of the discovery on their official research channel:
“ESET Research discovered PromptLock, the first known AI-powered ransomware. Written in Go and using gpt-oss-20b through Ollama, it demonstrates how threat actors could use local LLMs to generate malicious payloads and evade traditional detection”.
These warnings reinforce the gravity of the moment. While PromptLock may still be embryonic, the blueprint is out in the open.
What does PromptLock’s discovery mean for the near future of cyber threats and defences?
Rapid Evolution of Malware
If attackers can deploy AI models, whether open-weight or proprietary, within their malicious infrastructure, malware becomes not only more flexible but easier to adapt and harder to predict.
Proliferation of AI Toolkits
As models like gpt-oss-20b and frameworks like Ollama gain popularity, attackers lose barriers to entry. Open-source AI reduces costs and raises the threat ceiling quickly.
Arms Race in Detection Tools
Defenders must invest in AI-powered detection themselves. These systems must be capable of recognising dynamic, generative attacks that adapt in real time. New defences may include AI-based anomaly detection, deep behavioural monitoring, and AI sandboxing.
Policy and Regulation Challenges
How do regulators respond when AI becomes a weapon in criminal toolkits? Discussions over AI usage, access control, logging, and traceability gain urgency.
Rethinking Incident Response
Traditional IR approaches assume consistent behaviour and predictable traces. Now responders must be prepared for dysregulated, randomised attack logic that defies conventional pattern matching.
PromptLock does not yet appear to have infected targets in the wild. It remains, for now, a proof of concept. But that does not lessen its significance. Instead it amplifies the warning: the mechanisms and techniques exist. All that is needed is for threat actors to deploy them at scale.
In the UK and beyond, organisations must treat this moment as a turning point. The revolution in cyber threats is not merely AI-augmented; it is AI-powered.
CISOs and security teams must embrace smarter defences, update detection regimes, constrain internal AI agents, and stress test infrastructure against generative threat logic.
The future of ransomware may no longer carry the fingerprints of its creator. Instead, it may arrive as the output of an AI, tailored precisely to its environment and destined to remain one step ahead.
Corroborating reports across several trusted outlets reinforce the significance of the discovery and paint a consistent picture: PromptLock is a novel, embryonic threat, a notable departure from the static ransomware of the past.
2025-08-26
UK banks are balancing legacy technology, an evolving threat landscape and growing regulatory demands. The sector’s ability to modernise at pace will define not just its resilience but its credibility in the eyes of customers and regulators alike.
Image credit: Created for TheCIO.uk
The UK banking sector is under renewed pressure to modernise its cyber security. For years, banks have been seen as some of the most mature organisations in the way they handle cyber risk. Yet the reality is more complex. Legacy systems, fragmented digital estates, and an expanding attack surface have left cracks in the armour. Attackers have noticed.
This summer has seen an uptick in incidents and warnings directed at UK financial institutions. Ransomware groups are testing their luck with extortion campaigns. State-backed actors are probing critical systems, while fraudsters exploit the gaps between customer expectations and the ability of banks to keep their channels secure.
The core issue is that cyber security is no longer about perimeter defence or compliance checklists. It is about resilience. And that requires modernisation at scale.
Banks are uniquely exposed to legacy technology. Decades of mergers, acquisitions and rapid digital expansion have left many institutions with a patchwork of systems. Some of these platforms are still running on out-of-support operating systems or applications that were never designed to interact with modern architectures.
For IT leaders inside banks, this creates a paradox. These systems are too critical to simply replace, yet too outdated to properly secure. Modernisation programmes are underway in most institutions, but they take time, money and political capital. In the meantime, adversaries exploit known vulnerabilities in older systems, often finding the weakest link in a supply chain rather than breaching a fortified core.
The more time legacy systems remain operational, the greater the burden on cyber security teams to defend the indefensible.
Banking is one of the few sectors where customers still expect absolute reliability. A retail customer may tolerate glitches from a streaming service or an e-commerce platform, but if their bank suffers an outage or a breach, trust is shattered immediately.
This trust deficit makes banks prime targets. Attackers know that even minor service disruptions can generate panic, headlines and regulatory scrutiny. A phishing campaign against customers, a credential stuffing attack on mobile apps, or a ransomware hit on a payments processor all carry reputational risk far beyond the initial compromise.
As customers increasingly engage with banks through digital channels, the attack surface widens. Mobile apps, open banking APIs, cloud-based services and instant payments all bring innovation and convenience. They also bring complexity, dependencies and fresh vectors for exploitation.
The race to modernise is therefore not only about operational resilience, but about preserving customer confidence.
The Prudential Regulation Authority (PRA), the Financial Conduct Authority (FCA) and the Bank of England have all stepped up their expectations around operational resilience. UK regulators are clear: banks must be able to withstand and recover from disruptive cyber events.
The new rules on important business services and impact tolerances are shifting boardroom conversations. It is no longer enough to focus on recovery times. Institutions must map dependencies, test their assumptions and prove that critical services can continue even under sustained attack.
Meanwhile, the Digital Operational Resilience Act (DORA) in the European Union is raising the bar for international banks with cross-border operations. Even though DORA is EU legislation, its ripple effects are felt in London. Global institutions cannot afford to run resilience to different standards in different markets.
The regulatory message is consistent: cyber resilience is now a core component of financial stability. Boards are accountable, and excuses are no longer tolerated.
For banks, the financial impact of cyber incidents goes far beyond fines. The direct costs of responding to a breach include investigation, recovery, customer compensation and system rebuilds. Indirect costs include lost business, higher insurance premiums, increased borrowing costs and reputational harm.
History provides clear lessons. The 2018 TSB IT migration failure left millions of customers locked out of accounts, costing the bank hundreds of millions of pounds and damaging its reputation for years. While that incident was more about IT failure than a direct cyber-attack, it shows how technology weaknesses can quickly spiral into systemic issues.
Ransomware groups are also evolving. Rather than encrypting systems and hoping for a payout, many now focus on double or triple extortion, stealing sensitive data and threatening to release it unless payment is made. For a bank, the release of customer information is not just a data protection issue. It is a trust crisis that regulators, politicians and the public will not forgive easily.
While legacy systems are a major weakness, innovation brings its own risks. The rapid adoption of artificial intelligence, machine learning and automation within banking is reshaping operations. Fraud detection is faster, customer service is more efficient, and risk models are more dynamic. Yet AI also introduces opaque decision-making processes, data governance concerns and new avenues for adversarial manipulation.
Similarly, the push to cloud brings agility but also dependence on third-party providers. Banks are increasingly reliant on hyperscale cloud vendors to host critical services. While these providers invest heavily in security, the concentration risk is real. A disruption at a single provider could cascade through the sector. Regulators are acutely aware of this, which is why operational resilience is not just about the bank itself but its entire ecosystem.
Technology is only part of the equation. Human behaviour remains one of the most significant risks in banking cyber security. Phishing, business email compromise and social engineering are still responsible for a disproportionate number of breaches.
Banks have invested heavily in awareness campaigns and simulated phishing exercises, but fatigue is setting in. Employees are overwhelmed by security training, alerts and procedures. At the same time, the pressure to deliver digital transformation at speed can lead to shortcuts that weaken security.
CISOs and IT leaders in banking are therefore under pressure to balance strict security controls with business agility. Achieving this balance requires cultural change, not just technical fixes. Security must be embedded into decision-making at every level, from product design to customer service.
In UK banks, cyber security is now firmly a board-level issue. The days when it could be delegated to the IT department are over. Directors are personally accountable under regulatory frameworks, and they face questions from investors, customers and Parliament when things go wrong.
Board engagement is improving, but challenges remain. Many directors lack deep technical expertise, and translating cyber risk into financial and operational terms is still a work in progress. CISOs must become storytellers, articulating not just threats but the business case for investment.
This shift in governance is positive, but it adds pressure. Boards are less tolerant of uncertainty, and they expect clear answers. The problem is that cyber risk is inherently uncertain. The question is not whether banks will be attacked, but when and how effectively they can respond.
No bank can defend itself in isolation. The sector has long recognised the value of intelligence sharing, and initiatives such as the Financial Sector Cyber Collaboration Centre (FSCCC) and the Bank of England’s CBEST framework are now well established.
These initiatives are critical, but they require active participation. Smaller institutions sometimes lack the resources to fully engage, leaving them more exposed. At the same time, adversaries are increasingly collaborating across borders, trading tools and techniques on underground forums.
To keep pace, UK banks must deepen their collaboration not only with each other but also with telecoms providers, cloud vendors, government agencies and even competitors. Cyber defence is becoming an ecosystem challenge, not a solitary one.
Like every sector, banking faces a cyber skills shortage. Experienced security professionals are in high demand, and banks must compete with technology firms, consultancies and government agencies to attract talent.
The stakes are higher in financial services. The skills shortage cannot be solved with recruitment alone. Upskilling existing staff, automating routine tasks, and investing in security orchestration and AI-driven threat detection will all be essential.
If banks cannot close the skills gap, they risk overburdening their teams and missing emerging threats. The pressure to modernise is therefore also about modernising how the workforce is supported, trained and augmented.
The next decade will determine whether UK banks can stay ahead of their adversaries. Cyber threats are not static, and neither can defences be. Quantum computing, deepfake-enabled fraud, AI-driven malware and state-backed campaigns will all redefine the risk landscape.
For banks, the imperative is clear: modernise now or be left exposed. That means accelerating legacy replacement programmes, embedding security into digital transformation, strengthening governance and deepening collaboration across the sector.
The UK banking sector has long been a global leader. But leadership is not a static position. It must be earned repeatedly, especially in cyber security. The pressure to modernise is not just about compliance or resilience. It is about safeguarding the trust that underpins the entire financial system.
Cyber security in UK banks is no longer just a technical issue. It is a strategic priority that cuts across leadership, regulation, customer trust and operational resilience. The sector has some of the brightest minds, deepest pockets and strongest incentives to get it right. But that does not make it immune to failure.
The window for incremental change is closing. Attackers are innovating, regulators are tightening their grip, and customers are watching closely. The challenge for banks is to modernise before events force their hand. The cost of delay is measured not just in fines and losses, but in trust, reputation and the stability of the financial system itself.
2025-08-24
Schools are juggling ageing technology, squeezed budgets and thin teams while cyber threats rise. The standards are clearer, the stakes are higher, and the window for incremental change is closing.
Image credit: Created for TheCIO.uk
Scottish pupils have already settled back into classrooms, while many English schools will open their doors in the first week of September. The return marks more than the end of summer; it is also a reminder of how dependent modern education has become on digital systems that need to be both available and secure. As teachers prepare lesson plans and pupils adjust to new routines, school leaders face a growing pressure to ensure that the technology underpinning everyday learning is resilient, compliant and protected against increasingly sophisticated cyber threats.
Schools are carrying more digital risk than ever, often with fewer hands and older kit. Breaches in the private sector make the headlines, yet classrooms and trust offices are an attractive target for criminal groups that value the mix of sensitive information, operational pressure and limited capacity to respond.
Parents expect security to be a given. The sector is trying, and many teams do a solid job with what they have, but the gap between risk and readiness is getting wider. Standards and expectations are moving faster than budgets, skills and contract cycles. The Department for Education has set out a clearer floor for good practice that covers risk assessment, identity and access, multi factor authentication, patching, backups and incident planning, with roles and responsibility sharpened in the 2024 and March 2025 updates. The wording that leaders will be held to is set out in the current DfE cyber security standards and the official updates log.
The scale of the problem is not in doubt. The official Cyber Security Breaches Survey 2024, education annex shows most secondary schools identified a breach or attack in the last year, with higher education and further education reporting even higher levels. Phishing remains the main way in across education settings. Primary schools are more likely than secondaries to outsource cyber security to a provider. Structured risk activity and testing are less common in schools than in colleges or universities, which hints at a familiarity gap as much as a resource gap.
Walk the estate and the pattern repeats. A cupboard server that should have retired two summers ago. Laptops that will not take the latest operating system. A wireless network that is fine until the first mock exams. A ticket queue that never quite reaches zero because the same flaky devices keep coming back. A trust office that relies on one person who knows every quirk in the setup. Contracts that read well until the first hour of an incident when nobody is quite sure who calls whom. None of this is unusual. It is the daily reality for many schools and trusts.
Keeping up asks for time, attention and a constant focus on the basics. Ageing infrastructure pushes costs into firefighting and out of planned improvement.
Multi factor authentication is clearer in policy than it is on the ground. The standard is explicit that senior leaders and staff who handle confidential, financial or personal data must use multi factor authentication, and it encourages schools to extend that protection to all cloud services and to all staff where appropriate, as set out in the DfE standards. Training is too often a yearly tick in a learning portal rather than short, timely sessions that reflect how staff actually work. The same page points schools to free NCSC training for school staff and expects an annual cycle for users in scope.
Backups exist in most places, but restore tests are less certain. The guidance calls for an approach that reflects the three two one principle, for termly tests, and for evidence that can be shown to insurers. Members of the Risk Protection Arrangement should note the cyber conditions in the RPA membership rules.
Roles and responsibilities with service providers are another weak seam. Many schools buy support that includes security but do not write down who owns the first hour of a crisis or how changes to identity, firewalls and backups are controlled and recorded. The DfE advises schools to ask for Cyber Essentials or Cyber Essentials Plus from suppliers and to map contracts to the controls the school must meet in the supplier expectations section.
Every request for a firewall refresh, a device replacement round or an identity project competes with classroom and welfare priorities. That is the context for most decisions. Even when small pots of money or frameworks exist, the bidding and compliance work is hard to absorb for small teams. The standards help. They say that a cyber risk assessment should be completed each year and reviewed every term, that data backup should be planned and tested, and that multi factor authentication should be used by senior leaders and by anyone handling sensitive or financial information. Anchoring spend to the DfE standards moves the conversation from optional to expected.
Large trusts can justify a chief information officer or a dedicated security lead. Many schools rely on a small internal team and an external provider to cover identity, devices, connectivity and day to day support. Recruiting and retaining people with current skills is difficult because public sector pay rarely keeps pace with private offers. In small schools, the lone technician can be isolated and short of time to learn. The standards acknowledge that reality by naming a senior leadership digital lead as accountable and by telling schools to seek outside help where skills are not available in house, set out under roles and accountability.
Security now touches a wider set of skills than a decade ago. It is not enough to keep antivirus up to date and patch servers. Schools need to understand cloud identity, conditional access, logging and alerting, incident response, supplier risk and insurance conditions. The NCSC 10 Steps is a simple lens for conversations with governors and senior leaders and lines up well with the DfE standards.
Schools work within UK GDPR and the Data Protection Act, and they should align with the DfE standards. Colleges are required to hold Cyber Essentials under their funding agreement. Schools are not required to certify, but the department encourages it and advises schools to ask suppliers for certification, as recorded in the standards and the DfE updates log. These points are worth writing into procurement, contract renewals and any review of a managed service.
Regulation sets the floor. Reputation sets the ceiling. Parents will assume the school uses modern, safe technology and sound practice. When a breach becomes public, the technical fix is only one part of the work. Community trust is harder to rebuild. The case for early investment is not only technical. It is also about confidence, transparency and the ability to show that the basics are in place and tested. For broader context, see the GOV.UK data protection guidance for schools and the ICO overview of children and UK GDPR.
Begin with a written cyber risk assessment and set a rhythm of review each term. Keep it short, name the owners and focus on what will change before the next holiday. Make sure a senior leadership digital lead is accountable and that governors see the risk register and the business continuity plan. Turn on multi factor authentication for senior leaders and for anyone who handles confidential, financial or personal data, as framed in the DfE standards. Extend coverage to administrator accounts and set out the path to bring all staff into scope where appropriate. Where a person needs accessibility adjustments, write them down and keep a record of the reasoning.
Tidy identity and access. Use unique credentials for all staff and pupils. Set sensible lockout rules. Follow NCSC guidance on passwords. Remove standing administrator rights wherever you can and add simple checks with HR for joiners, movers and leavers so that accounts follow the person and do not drift.
Fix backups and prove that they work. Keep protected copies that reflect the three two one principle. Test a restore each term, record the evidence and store the plan somewhere that does not rely on the system you are trying to recover. If you are in the Risk Protection Arrangement, note that cover depends on these practices and on annual training for users in scope, as set out in the RPA membership rules.
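As one way of generating that evidence, a short script can restore a sample file to a scratch location and prove it matches the live original. The paths below are placeholders; this is a minimal sketch, not a substitute for testing whole-system recovery.

```powershell
# Minimal restore-test evidence: compare the hash of a restored file with
# the live original and append a dated line to an evidence log.
# All paths below are placeholders for illustration.
$original = 'D:\Shares\Office\PolicyRegister.xlsx'
$restored = 'E:\RestoreTest\PolicyRegister.xlsx'
$log      = 'E:\RestoreTest\restore-evidence.log'

$liveHash     = Get-FileHash -Path $original -Algorithm SHA256
$restoredHash = Get-FileHash -Path $restored -Algorithm SHA256

if ($liveHash.Hash -eq $restoredHash.Hash) {
    "PASS $(Get-Date -Format s) $restored matches original" | Add-Content -Path $log
} else {
    "FAIL $(Get-Date -Format s) $restored does not match original" | Add-Content -Path $log
    Write-Warning "Restore test failed for $restored"
}
```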
Secure the boundary you actually have. Check the firewall configuration. Protect available administrator interfaces with multi factor authentication. Make sure logs and alerts are enabled and someone will see them. If your broadband contract includes a managed firewall, sit down with the provider and map what they run to the wording in the DfE standard. Then write down who does what in an incident and share a one page flow that lists first actions, on call numbers and the information both sides will exchange in the first hour. Ask for proof of Cyber Essentials or Cyber Essentials Plus from your provider and keep it with the contract.
Move what you can to cloud services. The guidance is explicit that schools should use cloud solutions rather than local servers where possible, again set out in the DfE standards. If a system cannot move this year, record why and set a review date.
Finish the job on multi factor authentication. Bring all staff into scope. Choose methods that reduce the chance of tricking someone, especially for administrator accounts. Treat identity health as routine work.
Use the tools you already pay for. Many schools on Microsoft 365 or Google Workspace have baseline security features that are not yet switched on. Plan the rollout of endpoint protection, conditional access, email security, data loss prevention and identity risk signals. Tie every change back to the written risk assessment so the story is clear.
Improve monitoring and logging. Decide what you will collect, where you will keep it and who will look at it. Even simple steps such as forwarding audit logs for administrator actions and setting alerts for risky sign ins can cut the time it takes to see trouble. The DfE standard links to NCSC guidance on logging that can help define scope.
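For schools on Microsoft 365, one low-effort starting point is a weekly export of administrator actions from the unified audit log. This sketch assumes the ExchangeOnlineManagement module is installed, the signed-in account holds the right roles and audit logging is enabled for the tenant.

```powershell
# Illustrative weekly pull of Exchange admin actions from the unified
# audit log, exported for review or forwarding to a log store.
Connect-ExchangeOnline

Search-UnifiedAuditLog -StartDate (Get-Date).AddDays(-7) -EndDate (Get-Date) `
    -RecordType ExchangeAdmin -ResultSize 500 |
    Select-Object CreationDate, UserIds, Operations |
    Export-Csv -Path '.\admin-actions.csv' -NoTypeInformation
```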
Test the plan, not just the backups. Run two tabletop exercises a year. Choose one scenario where a staff account is taken over after a phishing lure and one where shared drives are encrypted. Time the first hour. Write down what slowed you down and fix it. The NCSC Exercise in a Box service offers a structured path if you need it.
Raise the floor on patching and device health. Shorten deadlines for critical updates. Automate operating system and browser updates wherever you can. Measure compliance every week and chase what falls behind. The education annex to the 2024 breaches survey shows that primaries in particular have room to improve on structured risk identification and testing.
Bake supplier checks into buying. Ask for Cyber Essentials or Cyber Essentials Plus during procurement. For higher risk systems, ask how the supplier will help you meet your duties under data protection law and under the DfE standards. Keep the evidence with the contract and review it at renewal, as advised in the DfE standards.
Join a peer community and share what works. If you are a single school IT lead, do not work alone. Use local networks and LGfL security resources to compare notes and borrow practical guidance.
Technology matters, but people and process keep a school resilient. Training works best when it is little and often rather than a single annual push. Use the NCSC modules, run short refreshers after real incidents and make time in briefings to swap lessons learned. Keep the incident plan to a few pages so it is usable when things are busy. Agree escalation paths with your provider and link the contract to the controls you are expected to meet. Pick a few trusted staff in different parts of the school and ask them to act as security champions. Give them a clear route to report concerns and share tips.
Governors, head teachers and business managers set the tone. The standards place accountability with a senior leadership digital lead and expect governors to ask questions, to include cyber in the risk register and to carry digital risks into the business continuity plan, as set out in the DfE standards. Colleges must hold Cyber Essentials. Schools should consider certification for themselves and ask for it from suppliers. Treat it as a milestone that forces attention on the basics rather than a badge for the website. The requirement for colleges is recorded in the DfE updates log.
The data shows a sector that sees frequent attacks and is still catching up on some fundamentals. The standards are clearer than before about what to do and who is responsible. Put the two together and the message is simple. Without sustained investment in technology, people and partnerships, schools will not keep pace with current threats. Digital resilience needs to move from an information technology task to a school wide priority.
What is your take? Where does your school or trust feel most exposed right now, and what would make the biggest difference this term?
Let us share the good, the bad and the messy middle. What has worked, what has not, and what you would change next time.
About the Author
Ben Meyer is a senior IT and cybersecurity leader with a decade of experience spanning scale‑ups, global enterprises and the public sector (contract). He specialises in modernising IT operations, implementing cloud platforms, and embedding security and compliance frameworks aligned with ISO 27001 and GDPR. Ben has partnered with startups, charities and high-growth organisations to deliver pragmatic, people‑centred technology strategies. He’s held leadership roles across IT support, infrastructure and cyber, and is known for building reliable systems with a clear focus on risk, documentation and operational maturity.
Disclaimer: This article is provided for general information only and does not constitute legal or technical advice. The author is not a legal professional. Organisations should seek independent legal counsel and technical advice before relying on, or acting upon, any of the points discussed.
2025-08-23
Microsoft will throttle outbound email sent from onmicrosoft.com addresses to 100 external recipients per tenant per day. The aim is to cut abuse and push every customer to send from a verified custom domain. Here is what changes, who is affected, and the practical steps to take now.
Image credit: Created for TheCIO.uk by ChatGPT
Microsoft will throttle outbound email that is sent from a tenant’s default onmicrosoft.com address. The cap is 100 external recipients per organisation in a rolling 24 hour window. Internal mail is not affected. When you hit the ceiling, senders see an NDR with 550 5.7.236. Microsoft’s Exchange Team says the change is designed to stop abuse of shared onmicrosoft domains and to nudge every customer to send from a vetted custom domain with proper authentication. A phased rollout starts 15 October 2025 for trial tenants and completes 1 June 2026 for the largest estates.
Source: Microsoft Exchange Team announcement, August 2025.
When you create a Microsoft 365 tenant, you receive a default email domain in the form tenantname.onmicrosoft.com. This is the MOERA address, short for Microsoft Online Email Routing Address. It helps you get up and running quickly, but it was never meant to be the long term sending identity for communication with customers, partners or the public.
Microsoft is now enforcing that intent. Messages sent to external recipients from a MOERA address will be throttled. The tenant wide cap is 100 external recipients per 24 hour rolling window. Distribution lists are expanded before the count is taken, so a single message to a large external list can consume the entire allowance. Internal mail is out of scope. Once throttled, senders receive non delivery reports with code 550 5.7.236. The Exchange Team sets out the changes, the reason, and the edge cases in its announcement.
The abuse pattern is simple. Spammers spin up fresh tenants and blast out spam from new onmicrosoft addresses before reputation systems have any signal. That drags down deliverability for everyone who shares the namespace. The throttle tackles this by limiting the blast radius and by pushing customers to use owned, authenticated domains.
The rollout is phased by Exchange seat count, beginning with trial tenants on 15 October 2025 and ending with the largest estates on 1 June 2026.
Microsoft says tenants will receive Message Center notices one month before their stage begins. Plan on the basis that you may not see or act on that reminder in time.
The Exchange Team is explicit. MOERA is fine for testing. It is the wrong choice for production email. Abuse from new tenants harms the shared reputation of onmicrosoft, so Microsoft is limiting the number of external recipients and advising every customer to move outbound email to a custom domain.
This sits alongside a wider tightening of outbound controls in Microsoft 365:
Tenant wide external recipient rate limit. In February 2025, Microsoft announced a new tenant wide cap on external recipients per day, separate from per mailbox limits. It is designed to frustrate abuse at scale and to stop bad actors spreading sends across many accounts. Microsoft’s post and independent analysis from Practical 365 explain the model and the impact.
Outlook high volume sender requirements. In April and May 2025, Microsoft set new requirements for domains that send more than 5,000 messages per day to Outlook.com addresses. SPF, DKIM and DMARC are mandatory, with non compliant traffic first routed to Junk then rejected with error 550 5.7.515. The Microsoft Defender for Office 365 blog has the canonical guidance.
The direction of travel is clear. Better authentication, better hygiene, and better accountability for anyone who sends email at scale. The MOERA throttle does not replace those controls. It complements them by closing off a shared identity that was never meant for production.
If you already send all external mail from a custom domain that you own and authenticate correctly, you will barely notice the MOERA throttle. If any workflow still sends from onmicrosoft.com, you will.
Beyond obvious cases where small firms and public bodies never moved beyond the default address, there are platform features and integration patterns that can fall back to MOERA when your default domain is still set to it. Microsoft calls out several such scenarios in its post, including Bookings notifications, SRS behaviour, hybrid routing and journaling.
These are the flows that will hit the wall first because they can lurk under the surface. A service owner may believe that everything uses the corporate domain, while a built in feature still relies on MOERA behind the scenes.
Set your default domain to your custom domain
If your tenant still uses the MOERA variant as the default, change it. Make your owned domain the default so the platform and its services pick it up by design. Microsoft documents how to select the domain used by Microsoft 365 product emails.
Move primary SMTP addresses to your custom domain
Users and shared mailboxes should send from your corporate domain. Changing the primary SMTP can affect the username used for sign-in in environments where the UPN equals the primary SMTP, so schedule, communicate and support the change. The Exchange Team flags this impact in the announcement.
Audit actual MOERA usage with Message Trace
Use Message Trace in the Exchange Admin Center to filter senders that match your MOERA wildcard. Pull a 90 day view, filter out internal recipients, then sort by sender and volume. This reveals the systems and patterns to fix before your stage begins. Microsoft gives this exact approach.
Reconfigure Microsoft 365 products to use your domain
Set Microsoft 365 products to send from your domain where supported. It removes reliance on generic product addresses and MOERA fallbacks and makes notifications look like they come from you.
Harden your domain and align identity
If you send at scale, Outlook’s requirements make SPF, DKIM and DMARC non negotiable. In truth, every sender benefits from correct alignment. It protects your brand and helps your email land where it belongs.
Plan the edge cases
Check Bookings configuration, SRS behaviour and hybrid routing. Verify that journaling is excluded and that postmaster and abuse addresses are set sensibly. The Exchange Team’s call outs are a practical checklist.
Start with Message Trace. Set the sender filter to *@*.onmicrosoft.com and pull a three month window so you catch weekly and monthly cycles. Export the results, filter to external recipients, and work the list by category: user and shared mailboxes, Microsoft 365 product notifications, and edge cases such as Bookings, SRS and hybrid routing.
Each category has a fix. Most are straightforward and low cost. The trick is to uncover them before the throttle lands.
Look at the MOERA throttle alongside the other 2025 changes.
The tenant wide external recipient rate limit restricts the total number of external recipients a tenant can reach in a day, regardless of how many accounts you spread the send across. It is designed to frustrate abuse and stop people treating Microsoft 365 as a bulk sending engine. The official announcement and community analysis are clear on intent and mechanics.
At the same time, Outlook high volume sender rules began to enforce basic authentication hygiene for bulk senders to Outlook.com. Fail SPF, DKIM and DMARC and your messages first go to Junk, then risk rejection as enforcement tightens. The bar is higher and the documentation is public.
The MOERA throttle is another piece of that puzzle. It is not a standalone fix. It is a nudge toward owned identity and modern authentication.
Shared domains suffer from the weakest participant. That is the root of the MOERA problem. If a hundred new tenants behave well and five abuse the namespace, the shared reputation for the onmicrosoft family suffers. Filters reflect that reality. A cap on the number of external recipients from MOERA addresses is a blunt but effective way to reduce the threat surface and to steer customers toward owning their identity.
There is a brand and trust element beyond pure deliverability. Email that arrives from a corporate domain that you control and authenticate is part of your public identity. In sectors like financial services, healthcare and central government, where citizens and customers are rightly cautious of anything that looks automated, a note from a product no reply address or from MOERA can undermine trust and increase the chance of being flagged as suspicious. The policy change will force a higher baseline and bring long neglected configuration work to the top of the pile.
For the public sector and for schools, the alignment with central guidance is natural. Own your domain. Authenticate it properly. Make systems speak with one voice. The throttle is likely to flush out configuration debt in education, in local authorities and across the third sector where day one settings were never revisited. The cost to fix is low compared with the cost of deliverability problems and the reputational damage of being flagged as spam.
Step one. Inventory
Run Message Trace and identify every flow that sends from a MOERA address to the outside world. Classify by owner and confirm volumes.
Step two. Fix the defaults
Make your custom domain the default. Create or verify the required DNS. Confirm SPF. Set up DKIM. Publish DMARC with a monitoring policy if you are not ready for a strict reject policy. Move primary SMTP addresses for users and shared mailboxes across to the corporate domain. Communicate the change and support teams that have saved credentials.
Step three. Reconfigure product notifications
Use the admin setting to make Microsoft 365 products send from your domain rather than from product brands. This cleans up the look and avoids MOERA fallbacks.
Step four. Tidy the edges
Check Bookings, SRS and hybrid scenarios. Confirm journaling behaviour. Fix anything that still uses MOERA for outbound.
Step five. Prove it with tests
Send to a diverse set of external recipients. Check headers to confirm the right domain is in use, that DKIM is signing with your domain and that DMARC alignment is correct.
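Header checks lend themselves to a small script as well. A minimal sketch, assuming you have saved one of your test messages as a raw .eml file from the receiving mailbox; it prints the Authentication-Results verdicts and flags anything still signed by a MOERA domain.

```python
# Sketch: confirm DKIM signing domain and DMARC result from a received test message.
import email
import re

with open("test_message.eml", "rb") as fh:
    msg = email.message_from_binary_file(fh)

for header in msg.get_all("Authentication-Results") or []:
    dkim = re.search(r"dkim=(\w+)", header)
    dmarc = re.search(r"dmarc=(\w+)", header)
    signer = re.search(r"header\.d=([\w.-]+)", header)
    print("dkim:", dkim.group(1) if dkim else "n/a",
          "| dmarc:", dmarc.group(1) if dmarc else "n/a",
          "| header.d:", signer.group(1) if signer else "n/a")
    if signer and signer.group(1).lower().endswith("onmicrosoft.com"):
        print("WARNING: this message is still signed with the MOERA domain")
```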
Step six. Set guard rails
If you operate a large tenant, consider transport rules that block or flag any attempt to send externally from a MOERA address. Add monitoring to catch regressions. Treat any new system that wants to send externally as a change that requires a domain and authentication review.
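To illustrate the monitoring half of that, a scheduled job can re-run the MOERA filter from the earlier audit against each day's trace export and raise an alarm on any hit. A minimal sketch, with the same illustrative column names; a non-zero exit code is the hook for your alerting.

```python
# Sketch: guard rail that fails loudly if a trace export contains any external
# send from a MOERA address. Column names are illustrative.
import csv
import sys

INTERNAL_DOMAINS = {"contoso.com", "contoso.onmicrosoft.com"}

offenders = set()
with open("daily_trace.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):
        sender = row.get("SenderAddress", "").lower()
        recipient = row.get("RecipientAddress", "").lower()
        if not sender or not recipient:
            continue
        if sender.endswith("onmicrosoft.com") and recipient.rsplit("@", 1)[-1] not in INTERNAL_DOMAINS:
            offenders.add(sender)

if offenders:
    print("MOERA regression detected:", ", ".join(sorted(offenders)))
    sys.exit(1)
print("Clean: no external MOERA sends in this export")
```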
If a team is using a user mailbox or a shared mailbox to run outreach at scale, they will already have seen trouble with per mailbox limits and with the tenant wide external cap. The MOERA throttle is another layer. The right answer is not to fight the platform. The right answer is to move bulk send to a dedicated provider that is designed for that purpose, operates within the law and is configured with your domain, your authentication and your consent model. Microsoft 365 is for business communication, not for bulk campaigns. The official guidance is to use Azure Communication Services Email if you must exceed Exchange Online limits.
There are still on premises applications that speak to the world through a relay and that were configured years ago to use a MOERA identity. The fix is the same. Change the sender to a custom domain and authenticate it. If you run hybrid, review the path the messages take and ensure the stamp on the outside is your domain with DKIM signing and DMARC alignment. If the system truly cannot be modernised, consider a relay service that supports your authentication model and is configured with your domain. Do not accept MOERA as an excuse. The throttle turns it from a poor choice into a hard limit.
Inbound mail is not affected. The cap applies only to external recipients on outbound mail from a MOERA sender. Journaling reports use the Microsoft Exchange Recipient address and are excluded. Hybrid out of office edge cases that involve mail.onmicrosoft.com are not throttled so long as MOERA is not used for the original send. If your environment uses federated domains for sign in, you will still need a non federated custom domain in the tenant to act as the default domain. The announcement covers all of these points.
The most useful outcome of this change may be the conversation it forces between IT, security and service owners. Email identity is an organisation wide asset. It deserves a clear policy and a change gate. If a team wants to send externally, they should do it under the corporate domain, with proper authentication and with accountable ownership. The MOERA throttle will flush out shadow IT email patterns because they will simply stop working at scale. Use that moment to consolidate control rather than to grant exceptions.
For boards and senior leaders, the question is straightforward. Do we control the identity that speaks for us? If the answer is not an immediate yes, the Microsoft changes are a timely prompt to fix it.
Microsoft’s decision to throttle outbound email from onmicrosoft.com is not a surprise. Shared domains are a magnet for abuse. The change is pragmatic and overdue. It will frustrate spammers, frustrate poor outreach practices and nudge every customer, large and small, toward owning and authenticating their own domain.
For UK organisations, the work to adapt should be measured in days and weeks, not in months. The steps are clear. Set your default domain. Move primary SMTPs. Repoint product notifications. Tighten authentication. Sweep for edge cases. Prove it with tests. Put guard rails in place so it stays fixed.
Do this now and you will not notice the throttle when your stage arrives. Leave it until the Message Center reminder and you will be fixing production problems under time pressure. The technology is straightforward. The leadership ask is even simpler. Make your organisation speak with its own voice, every time, to everyone.
What’s your take? Will this throttle quietly lift deliverability for good actors, or will it expose more configuration debt than teams expect?
Let’s share the good, the bad and the messy middle. What broke in testing, what was easy to fix, and what still needs better guidance from Microsoft.
Disclaimer: This article is provided for general information only and does not constitute legal or technical advice. The author is not a legal professional. Organisations should seek independent legal counsel and technical advice before relying on, or acting upon, any of the points discussed.
2025-08-22
The Bouygues Telecom breach affecting 6.4 million customers is only one of a series of incidents exposing the fragility of telecoms worldwide. From the UK to the US, from South Korea to Australia, attackers are exploiting the industry’s unique role as both infrastructure and data custodian.
Image credit: Created for TheCIO.uk by ChatGPT
When Bouygues Telecom confirmed on 4 August that hackers had accessed the data of more than 6.4 million customers, the disclosure landed as another chapter in what has become a troubling series of incidents across the global telecommunications sector. On the surface, the French operator provided reassurances: bank card numbers and passwords were untouched, the immediate intrusion was blocked, and national authorities had been informed. Yet the details that did emerge carried significant weight. Contact information, contractual data, civil status records and IBANs had been exposed.
This combination of sensitive but not always headline-grabbing information illustrates the changing nature of risk. The obvious impact may not come through empty bank accounts the following morning. Instead, it will be the gradual build-up of risk as criminal groups and fraudsters recycle, combine and weaponise personal data for targeted phishing, impersonation and more sophisticated forms of fraud. For a provider of Bouygues’ scale, which services nearly 27 million mobile customers, the breach is both a national issue and part of a global story about how vulnerable our communications infrastructure has become.
Telecommunications firms occupy a peculiar position in the cyber security landscape. They are both the providers of connectivity and the guardians of customer data on an extraordinary scale. Unlike financial services firms, which operate in a tightly regulated environment with constant scrutiny from central banks and regulators, telecoms companies have historically had more leeway. They are critical infrastructure, yet they do not always carry the same level of oversight as banks or national utilities.
That imbalance is increasingly being exploited. In Europe alone, Orange Belgium reported a breach in July that exposed the data of 850,000 customers, including SIM card details and PUK codes. Though passwords and financial information were unaffected, the stolen details are enough to enable SIM-swap fraud or social engineering attacks on unsuspecting individuals.
In the United Kingdom, Colt Technology Services was forced to take systems offline in August after attackers stole several hundred gigabytes of data by exploiting a vulnerability in SharePoint. The breach affected internal systems and led to the temporary suspension of customer-facing services. For a company serving multinational clients across data, voice and cloud, the disruption and reputational harm were immediate.
These incidents do not exist in isolation. They form part of a wider trend in which attackers have increasingly targeted telecom providers as repositories of both data and influence.
Half a world away, South Korea’s largest mobile operator, SK Telecom, has been forced into a period of unprecedented introspection. Earlier this year it admitted that attackers had compromised critical USIM authentication data, which underpins how phones connect securely to networks. Regulators fined the company and ordered sweeping reforms, including a multi-year investment programme to overhaul security.
The scale of the breach was staggering. More than 23 million subscriber records were implicated, involving unique identifiers such as IMSI and IMEI codes that are deeply embedded in how devices authenticate themselves. This was not just another case of exposed email addresses. It was a compromise that cuts to the technical fabric of the network itself.
In a different but related case, South Korean investigators revealed that high-profile celebrities and business leaders had been targeted through telecom website breaches, with attackers aiming to hijack access to bank and cryptocurrency accounts. The inclusion of public figures such as K-pop star Jungkook in the narrative underscores how breaches of telecom infrastructure reverberate far beyond corporate boardrooms.
In the United States, the picture is more complex and arguably more alarming. On one level, consumer data breaches continue to generate lawsuits and settlements. AT&T is still reeling from a 2024 breach that exposed information from more than 86 million customers. A proposed settlement of 177 million dollars has been floated, which could provide individual compensation of up to 7,500 dollars per person. This financial dimension is familiar territory for observers of American class action law.
But beneath the surface there is a more strategic threat. Intelligence reports and investigative journalism have linked state-sponsored groups, including a Chinese-affiliated cluster known as Salt Typhoon, to intrusions at several major US telecom firms. Unlike criminal ransomware groups seeking ransom payments, these operations have targeted metadata, surveillance systems and even call recordings of government officials. Such campaigns are not about quick profits. They are about intelligence, influence and in some cases preparing the ground for potential disruption in times of geopolitical tension.
The line between criminal cyber operations and state-linked espionage is becoming harder to draw. Where Bouygues Telecom and Orange Belgium may primarily be grappling with criminal data theft, their counterparts in the United States are facing sustained campaigns designed to undermine national security. Yet both phenomena emerge from the same underlying truth: telecoms firms are now in the crosshairs.
In August, TPG Telecom’s iiNet division disclosed that 280,000 customer accounts had been exposed after attackers used stolen employee credentials to access an internal system. The details included email addresses, phone numbers and, in some cases, modem setup passwords. As with the Bouygues incident, the company emphasised that financial records and identity documents were not part of the breach. Yet customers will remain at heightened risk of fraud attempts, while regulators will be asking whether authentication systems for employees are truly fit for purpose.
Australia has already endured a series of high-profile breaches across healthcare and retail sectors. The iiNet incident signals that telecoms are no less exposed, and that the broader Asia-Pacific region is facing the same intensifying wave of attacks that has swept across Europe and North America.
Part of the answer lies in the nature of the data itself. Even when financial details are excluded, telecoms firms hold information that can be leveraged for fraud and surveillance. Contact details, SIM data, call records and authentication identifiers are valuable in themselves and even more so when combined with data from other breaches.
Another factor is the role of telecoms as infrastructure. A breach at a single provider can have a cascading effect across multiple sectors, from emergency services to online banking. The 2023 attack on Kyivstar in Ukraine demonstrated the point with brutal clarity. Attributed to a Russian military hacking group, the attack disrupted not only mobile and internet services but also national air raid warning systems at the height of missile attacks. The financial and operational costs were estimated at 90 million dollars, but the strategic implications went far deeper.
Attackers understand that telecoms firms are not merely businesses. They are arteries through which national life flows. That makes them uniquely valuable and uniquely vulnerable.
The regulatory landscape is evolving, though often unevenly. In France, the national data regulator CNIL and the cyber security agency ANSSI are involved in overseeing Bouygues’ response. In South Korea, the regulator imposed fines and demanded structural reform at SK Telecom. In the United States, consumer lawsuits and settlements continue to shape the landscape, while intelligence agencies take a lead on the espionage dimension.
For UK firms such as Colt, the regulatory burden lies partly with the Information Commissioner’s Office, but also with national security bodies tasked with protecting critical infrastructure. Each jurisdiction has its own emphasis, yet the common theme is that regulators are under pressure to hold providers accountable and to prevent complacency.
One of the most striking lessons from recent incidents is how telecoms boards and executives are now forced to treat cyber security as a front-line issue rather than a back-office function. Customer trust, national security, regulatory fines and legal liabilities all converge on the same point. A data breach is no longer a technical mishap. It is a governance crisis.
Boards are also grappling with how to fund and prioritise cyber resilience in organisations that already operate with thin margins in competitive markets. Shareholders demand returns, customers demand lower prices, and regulators demand security. Balancing these demands requires leadership willing to make difficult trade-offs.
Although the breaches discussed involve telecom providers, the implications for the UK financial sector should not be underestimated. Banks and insurers rely on telecom networks for everything from two-factor authentication via SMS to secure voice communications. If customer data from a telecom breach is recycled into targeted phishing campaigns, financial firms are often the next victims.
There is also a dependency dimension. If a telecom operator suffers prolonged disruption, as Kyivstar did in Ukraine, financial transactions and trading platforms may be directly affected. The resilience of financial services cannot be separated from the resilience of the communications sector.
From France to South Korea, from the United States to Australia, the pattern is consistent. Telecoms firms are struggling with a surge of cyber incidents that vary in detail but converge in meaning. They reveal weaknesses in authentication, in patching, in monitoring, and sometimes in culture. They highlight the growing intersection of criminal profit-seeking and state-linked espionage.
The lesson for executives across all sectors is that no company can assume immunity. The details of what is stolen may differ, but the strategic impact is the same. Breaches erode trust, invite regulatory scrutiny, and create fertile ground for future attacks.
The Bouygues breach is not just a French problem. It is part of a mosaic that spans continents and industries. The attackers may vary in sophistication, and the data may differ in sensitivity, but the direction of travel is clear. Telecommunications firms are now a frontline target in the global cyber conflict.
For customers, the practical advice remains familiar: be alert to phishing, scrutinise messages that request financial or personal details, and recognise that even partial data leaks can have real consequences. For executives and policymakers, the message is sterner. Telecoms are critical infrastructure, and breaches in this sector carry risks that go well beyond the balance sheet.
The global picture is one of rising stakes, where every breach erodes not just the privacy of individuals but the resilience of national economies and public safety. Bouygues Telecom may be the latest name in the headlines, but it will not be the last. The true test is whether the sector can learn from these incidents quickly enough to prevent the next crisis.
What’s your take? Do you think telecoms are prepared to meet the challenge of rising cyber threats, or are we only at the beginning of a much larger crisis?
2025-08-20
The Workday data breach highlights the growing reliance on social engineering tactics, exposing vulnerabilities in enterprise CRM systems and sending ripples across industries including the UK financial sector.
Image credit: Created for TheCIO.uk by ChatGPT
On 18 August 2025 Workday disclosed a data breach following a social engineering attack that compromised a third party customer relationship management platform. The breach, part of a wider campaign targeting Salesforce CRM environments, saw threat actors access business contact information such as names, phone numbers and email addresses. Customer tenant data was not involved.
This incident joins a series of attacks that have ensnared some of the world’s most recognisable brands including Google, Adidas, Qantas, Dior, Chanel and Louis Vuitton. It exemplifies the growing menace of social engineering attacks on enterprise systems, particularly those relying on CRM tools. In this article I explore the unfolding narrative, the threat landscape, the response from security professionals and the ripple effects across corporate Britain, including the UK financial sector. The latter is not the main focus but a significant note of concern.
Workday, a Californian HR software giant with more than 19,300 employees, serving over 11,000 organisations including over 60 per cent of Fortune 500 firms, announced that the breach was detected on 6 August, nearly two weeks before its public disclosure.
In a blog post reported by BleepingComputer, Workday admitted that threat actors accessed “some information” from a third party CRM platform used in their systems. They emphasised there was no evidence customer tenant data or internal user files had been compromised.
The exposed data comprised primarily business contact information such as names, phone numbers and emails. While not highly sensitive, such information is valuable for phishing and social engineering campaigns.
Workday cautioned users against unsolicited communications. They clarified they would never request passwords or sensitive details via phone, and stressed that all official correspondence uses trusted support channels.
Experts link Workday’s breach to a broader wave of attacks targeting Salesforce based systems. Groups such as ShinyHunters, also known as UNC6040, are behind a campaign involving vishing and phishing tactics to drive victims into installing malicious OAuth connected applications in their Salesforce environments.
Attackers impersonate internal HR or IT staff via phone or text, tricking employees into approving these apps. Once installed, threat actors access records, extract data and may attempt extortion via a data leak site.
Google, for example, noted that the attack involved a fake version of Salesforce’s Data Loader app which prompted a user to grant access that allowed data exfiltration.
This method has proved highly effective and alarmingly simple, with a growing number of enterprises falling victim. Thomas Richards of Black Duck noted this trend is deeply concerning, especially when attackers resort to painstaking social engineering because conventional methods may be failing.
Workday responded by severing access to the compromised CRM platform, introducing enhanced security protocols and reinforcing internal employee defences.
Salesforce customers have been advised to audit connected apps, revoke unfamiliar permissions, implement stricter access controls and enforce multi factor authentication.
William Wright, CEO of Closed Door Security, urged organisations to train employees, limit privileges and apply MFA universally. Kevin Marriott at Immersive likewise warned that even minimal exposure such as names or email addresses can fuel sophisticated phishing campaigns.
This breach underscores a painful reality. The weakest link in many cyber defences lies not in hardware or software vulnerabilities but in human trust. Social engineering plays on our willingness to help and our assumptions about authority.
Enterprise security must adapt. Cyber teams must extend beyond technical controls to reinforce employee awareness, simulate phishing exercises and nurture a culture where refusal to comply with anomalous requests is accepted, not penalised.
Reliance on cloud based tools such as Salesforce makes the entire enterprise surface vulnerable. A single misstep like authorising a rogue OAuth app can permit attackers to harvest data across multiple customers without directly attacking core systems.
The UK financial sector boasts mature cyber defences and a keen regulatory focus. Yet this incident is a warning bell rather than an immediate crisis.
Many financial organisations rely on platforms such as Workday for HR functions, often integrated with CRM systems and third party tools. Should contact or staff details be exposed, adversaries could launch highly targeted phishing efforts. An email or text appearing to come from HR could lure an executive into compromising sensitive systems.
The regulatory landscape, including guidance from the Financial Conduct Authority and the Bank of England, demands robust governance over third party risk. This means assessing supply chain vulnerabilities and ensuring that external tools are subject to strict access controls and incident response plans.
Financial institutions in the UK should take this as a signal to revalidate policies around CRM integrations, vendor access and employee training. Zero trust network models, segmented privileges for auxiliary systems, regular penetration testing and enhanced incident detection protocols are all critical.
The implications for UK finance are notable but they are part of a larger context. This is a global phenomenon affecting every sector that uses cloud based environments and external platforms to manage data and employees.
Audit and harden systems. Conduct thorough reviews of OAuth connected applications, especially those tied to CRM systems. Remove unused apps and restrict the ability of employees to install them without authorisation.
Educate and simulate. Launch simulations of vishing and phishing attacks that emulate real world tactics, training employees to question unsolicited communications even if they appear trusted.
Enforce MFA and monitoring. Require multi factor authentication on all access points to critical systems, especially cloud platforms. Monitor logs for anomalous activity and unusual data exports; a monitoring sketch follows after these recommendations.
Strengthen third party oversight. Expand contracts with cloud vendors to include breach notification clauses, access reviews and shared responsibility for security audits.
Responsive governance. Create review boards involving security, HR, legal and executive teams tasked with rapid incident response protocols including public communications.
Scenario planning. Embed social engineering scenarios into risk assessments. What if insider impersonation leads to credential theft? How quickly can systems isolate and block such activity?
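On the monitoring point above, even a crude baseline check can surface the unusual export volumes that tend to follow a rogue OAuth grant. The sketch below is illustrative only; the log file, field names and threshold are assumptions to map onto whatever your CRM audit trail actually emits.

```python
# Illustrative sketch: flag accounts whose daily record-export volume jumps well
# above their own baseline. Field names are hypothetical.
import json
import statistics
from collections import defaultdict

per_user_daily = defaultdict(lambda: defaultdict(int))
with open("crm_audit_log.jsonl", encoding="utf-8") as fh:
    for line in fh:
        if line.strip():
            event = json.loads(line)
            per_user_daily[event["user"]][event["date"]] += event.get("records_exported", 0)

for user, days in per_user_daily.items():
    volumes = list(days.values())
    if len(volumes) < 7:
        continue  # not enough history for a meaningful baseline
    mean, stdev = statistics.mean(volumes), statistics.pstdev(volumes)
    for day, volume in days.items():
        if stdev and volume > mean + 3 * stdev:
            print(f"ALERT {user} exported {volume} records on {day} (baseline ~{mean:.0f})")
```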
The Workday breach revealed on 18 August 2025 is a significant event in the evolving landscape of enterprise cybersecurity. Conducted via social engineering of employees to compromise a CRM platform, it exposed business contact data and mirrors a wider assault against Salesforce systems worldwide.
The incident is a reminder that technology alone is not enough. In the age of sophisticated phishing and vishing, building human resilience is as important as firewalls and encryption. Organisations must combine strict technical defences with continuous employee training and a culture of scepticism.
For the UK financial sector the incident adds urgency. Verify that third party systems are secured, ensure staff remain vigilant, and confirm that incident response is rapid. Across all industries the lesson is universal. Threat actors exploit trust, and security must guard well beyond the perimeter.
The true battleground is within daily interactions. A simple call or message, if handled carelessly, can open the door to a major breach. A moment’s hesitation, however, may prevent it.
What’s your take? How should enterprises strengthen resilience against social engineering in a cloud dominated environment?
2025-08-18
QR codes are being weaponised in plain sight, and most people don’t even realise it. Here’s how attackers use them, why they work so well, and what we can do to defend against them.
Image credit: Created for TheCIO.uk
QR codes are everywhere. They’re in cafes, on desks, in meeting rooms and on posters at train stations. They speed up onboarding, bring up menus, and allow frictionless access to just about anything.
But they’re also being weaponised.
In the push toward mobile-first interaction, we’ve handed over a silent, scannable attack vector to cyber criminals, and most people don’t even realise they’re at risk.
In one of my cyber security awareness sessions, I left a flyer with a QR code on it lying in the publicly accessible reception and conference room we were using. No instructions, no description. Just a scannable square and a short headline:
Scan this to enter the draw for a prize.
Most people didn’t hesitate.
A few seconds later, they’d landed on a realistic, branded webpage at thecio.uk/dodgy-qr. It was harmless, a training tool, nothing more, but it proved the point. Almost everyone scanned the code without asking where it came from, where it pointed to, or who placed it there.
They did it because it looked official. Because it was printed on nice paper.
This is precisely the type of logic real attackers exploit.
QR phishing (or “quishing”) doesn’t require a hacked server or social engineering over email. It only needs one thing: your camera.
Here’s what makes it dangerous: a QR code gives no visible clue about its destination, it borrows legitimacy from wherever it happens to be printed, and it needs nothing more than the victim’s camera to work. With a little bit of polish, anyone can design a fake feedback form, Wi-Fi registration page, HR onboarding form or benefits login screen that looks plausible, especially when it loads instantly on your phone.
Here are three practical scenarios where QR-based phishing has shown up in the wild, and in simulations I’ve run directly:
An attacker places a QR code sticker over a legitimate one — outside an event, meeting room or building lobby. It leads to a login prompt resembling a corporate Microsoft 365 login. Users enter credentials to “check in”.
Except the credentials are now in someone else’s hands.
Disguised as a harmless employee engagement survey, this QR leads to a fake HR portal. Users are asked to enter their name and email to participate, then receive a prompt to verify their identity by logging in.
Behind the scenes, it’s a credential harvesting operation.
Sent via email or posted in a building, this QR code claims your mobile access certificate is about to expire. It links to a page that mimics a security team portal, asking users to re-enter MFA details or install a profile.
Suddenly, the attacker has control over push notifications or device-level settings.
You don’t need to ban QR codes altogether. But you do need to train people to treat them with caution, just like suspicious links in emails.
Here’s how to reduce the risk: teach people to preview where a code points before opening it, to question codes placed in public or shared spaces, and to report anything unexpected.
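That previewing habit can even be scripted for the posters and flyers that appear on your own premises. A minimal sketch using OpenCV's built-in QR detector; the trusted-domain allowlist is an assumption you would populate yourself.

```python
# Sketch: decode a photographed QR code and inspect its target before anyone taps it.
# Requires opencv-python (pip install opencv-python).
from urllib.parse import urlparse
import cv2

TRUSTED_DOMAINS = {"thecio.uk"}  # populate with domains your organisation uses

img = cv2.imread("poster_photo.jpg")
if img is None:
    raise SystemExit("Could not read poster_photo.jpg")

data, points, _ = cv2.QRCodeDetector().detectAndDecode(img)
if not data:
    print("No QR code found in the image")
else:
    host = urlparse(data).hostname or ""
    verdict = "trusted" if host in TRUSTED_DOMAINS else "UNVERIFIED, treat with caution"
    print(f"QR target: {data}")
    print(f"Domain: {host} ({verdict})")
```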
The goal isn’t to catch people out. It’s to build a moment of pause, a second thought, before that tap.
Phishing isn’t just in your inbox anymore. It’s on walls, mugs, desks and badges. It hides behind convenience and branding. And it only takes one careless scan to open the door.
As cyber professionals, we need to start treating the physical space, not just the digital one, as part of the attack surface. If something feels too seamless to be secure, it probably is.
Train your teams to look before they scan.
Ben Meyer is a senior IT and cybersecurity leader with a decade of experience spanning scale‑ups, global enterprises and the public sector (contract). He specialises in modernising IT operations, implementing cloud platforms, and embedding security and compliance frameworks aligned with ISO 27001 and GDPR. Ben has partnered with startups, charities and high-growth organisations to deliver pragmatic, people‑centred technology strategies. He’s held leadership roles across IT support, infrastructure and cyber, and is known for building reliable systems with a clear focus on risk, documentation and operational maturity.
2025-08-11
The Clorox breach in the US and the M&S cyber incident in the UK show how attackers can bypass sophisticated defences simply by calling the help desk. For UK IT leaders, the warning could not be clearer.
Image credit: Created for TheCIO.uk by ChatGPT
The breach every IT leader fears often looks the same in the imagination. A nation-state-grade exploit. A shadowy attacker inside the network for months, extracting terabytes of data. A ransomware detonation at 3am, encrypting everything from payroll to production.
The reality, as two recent incidents on opposite sides of the Atlantic prove, is far more prosaic. And for that reason, far more dangerous.
Sometimes the attacker does not need to break into your network at all. They simply pick up the phone, and someone lets them in.
That is the allegation in a lawsuit now making headlines in the United States. Clorox, one of the most recognisable consumer goods companies in the world, is suing its IT services provider, Cognizant, claiming that a help desk technician repeatedly reset passwords and bypassed multi-factor authentication for an attacker impersonating a Clorox employee. Those actions, Clorox says, opened the door to one of the most disruptive breaches in its history, halting production and distribution and costing an estimated $380 million, close to £300 million.
To the British IT leader, this might sound like a distant drama across the pond. But the implications are chillingly local. Because what happened in Atlanta could just as easily happen in Aberdeen, or Ashford, or Acton. UK enterprises are no less reliant on third-party IT providers. And in many cases, they are even more exposed due to resource constraints, fragmented oversight, and legacy thinking about accountability.
The method was devastatingly simple. No zero-day vulnerabilities. No malware with a Hollywood backstory. Just persistence, confidence, and a support process that trusted the caller.
According to court filings, the attacker, allegedly a member of the Scattered Spider hacking group, contacted the Cognizant-run help desk posing as a Clorox employee locked out of their account. Over a series of calls, the help desk granted their requests: passwords were reset, MFA challenges were removed or circumvented, and the attacker was issued fresh, valid credentials.
With those credentials, the attacker walked straight past the organisation’s perimeter defences. Within days, manufacturing systems stalled. Distribution lines were disrupted. Orders could not be fulfilled. The breach became a shareholder issue, a media story, and a costly operational crisis.
This was not an advanced technical compromise. It was a social engineering campaign, and a highly effective one. Which is why it should be keeping UK IT leaders awake at night. Because we have already seen the same playbook here.
On Saturday 19 April 2025, while much of the UK was preoccupied with the long Easter weekend, Marks & Spencer began to suffer a series of unexplained outages. In-store contactless payments failed. Click-and-collect orders could not be processed. Customers complained on social media, reporting abandoned baskets and frozen tills.
Three days later, M&S confirmed publicly that it was dealing with a major cyber incident. By Friday 25 April, the situation had escalated: the retailer suspended online ordering for its clothing and home ranges entirely.
Behind the scenes, investigators traced the breach back to a supplier. The attacker had not found an unpatched server or stolen a database backup. They had gained entry through a third-party help desk by convincing support staff that they were a legitimate M&S employee in need of a reset.
Tata Consultancy Services, which provides IT help desk services to M&S, was named in multiple press reports as the possible supplier in question, though M&S has never officially confirmed this. What is certain is that the breach was a case of social engineering, not a technical exploit.
The damage was sustained. Online orders in Great Britain only resumed, partially, on 10 June, nearly two months later. M&S has warned investors that the incident will reduce profits by up to £300 million. Analysts estimate the company’s market value dropped by over £700 million in the days following disclosure.
Nor was M&S the only target. The Co-op suffered disruptions to contactless payments and store operations. Harrods was also reported to have experienced issues linked to similar methods. The National Cyber Security Centre responded by issuing urgent guidance to retailers: review your help desk verification procedures immediately.
The Clorox and M&S breaches have a common DNA. Both began with a phone call to a help desk. Both succeeded because the agent trusted the caller’s identity. Both involved resetting credentials that became the keys to an operational meltdown.
In both cases, the breach did not hinge on the sophistication of the attacker’s technical tools. It depended entirely on the vulnerability of human process, a process that exists in almost every UK enterprise today.
And therein lies the problem. Most organisations have designed their service desks for efficiency and customer satisfaction. The performance metrics are clear: average handling time, first-call resolution, ticket closure rates. These KPIs incentivise agents to move quickly and keep the caller happy. None of them reward taking extra time to interrogate a request or escalate a reset for further verification.
For attackers, this is an open invitation.
In the US, Clorox’s case against Cognizant is shaping up to be a precedent-setter. Clorox alleges breach of contract, negligence, and mishandled incident response. Cognizant rejects the claims, maintaining that it provided only a limited service and was not responsible for Clorox’s wider security posture.
For UK IT leaders, this should trigger a review of every supplier agreement in your portfolio. The UK legal and regulatory environment offers no safe harbour for “the vendor did it” excuses.
The Information Commissioner’s Office has repeatedly stated that both data controllers and processors are responsible for implementing “appropriate technical and organisational measures”. This year, the ICO fined a software supplier directly for security failures that led to a breach, even though that supplier was operating under contract to another company.
For financial services and other regulated sectors, the PRA’s Supervisory Statement SS2/21 and the FCA’s operational resilience rules impose specific obligations on outsourcing and third-party risk management. These include contractual rights to audit suppliers, requirements to test controls, and clear exit strategies if a supplier cannot meet security expectations.
The NCSC’s post-Easter guidance to UK retailers could not have been clearer: if your help desk can reset credentials without rigorous verification, you are vulnerable. If that help desk belongs to a supplier, you are still responsible.
Help desk staff are not careless or unprofessional. The reality is that they operate in high-pressure environments with multiple, often conflicting demands: resolve the issue quickly, keep the caller satisfied, minimise ticket backlog. In outsourced arrangements, the person handling the reset may be thousands of miles away, several contractual layers removed from the company whose systems they are accessing. Their scripts may be outdated, their training generic, and their understanding of the client’s risk environment minimal.
Groups like Scattered Spider specialise in exploiting this gap. They study corporate structures, learn the terminology of internal projects, and mimic the tone of a stressed but important employee. They often have partial information from previous breaches, names, job titles, office locations, to make their impersonation convincing. Once on the call, they present a plausible story and a sense of urgency, and more often than not, the reset is granted.
For years, the industry has talked about “Zero Trust” as the solution to modern cyber threats. But these breaches expose its most glaring blind spot, the human interface.
If your help desk can reset a password or bypass MFA without watertight verification, your Zero Trust model is compromised before it has even begun to work. The sophistication of your endpoint detection or your cloud security controls is irrelevant if the front door is opened by someone trying to be helpful.
This is not a technology problem. It is a process problem, and by extension, a leadership problem.
The answer is not another layer of software or a shiny security dashboard. It is a cultural and procedural reset.
Identity verification must be treated as a security-critical control, not an administrative step. That means clear policies that no credential reset occurs without robust, independent verification, and that no urgent business request overrides that policy. It means empowering agents to say “no” when verification fails, and protecting them from performance penalties for doing so.
Boards need to treat help desk risk as a strategic issue. If a supplier’s help desk can grant access to your systems, then the legal, financial and reputational liability belongs to you. That requires regular audits of help desk processes, shadowing of live calls, and commissioning of unannounced social engineering tests. It also means engaging with suppliers to ensure they have the training, processes, and contractual obligations to resist manipulation.
The most unsettling aspect of the Clorox case is the likelihood that the technician involved believed they were doing the right thing. They were following the script. They were solving a problem for someone they thought was a colleague. The process said “yes”, so they said “yes”.
This is what makes the help desk such an effective attack surface. It is not malice. It is misaligned incentives. Procedure without context. And unless IT leaders address that, the breaches will continue.
If you lead technology in a UK organisation, the clock is ticking.
First, map the access that your help desks, internal and outsourced, actually have. Not what the contract says, but what the agents can do. Then, test the process yourself. Call the desk as “you” from an unrecognised number. See what happens.
Engage your suppliers. Demand to know their verification process in detail. Ask how they train staff on social engineering. Ask when they last failed a test, and what changed as a result. If the answers are vague or defensive, you have a problem.
Work with your board to make help desk compromise a recognised strategic risk. That means measurable oversight, not vague assurances. Insist that social engineering testing is part of your assurance programme. Review contracts and add language that gives you audit rights, testing rights, and the ability to demand remediation.
Finally, remember that this is as much about culture as controls. Build an environment where an agent feels rewarded for stopping a suspicious reset, even if it means telling a genuine senior executive to wait. Because the only thing more costly than slowing down an access request is speeding it up for an attacker.
Could someone call your help desk today, convincingly impersonate an employee, and obtain valid access to your systems?
If the answer is anything other than “impossible”, you are not ready.
The attackers have already shown us their playbook. They have shown it in Atlanta. They have shown it in London. And they will keep showing it, until we change the rules of the game.
What’s your take? Could your help desk stand up to a determined and convincing attacker armed with only a phone and a story?
Let’s share the good, the bad and the messy middle when it comes to securing the human layer of our cyber defences.
Disclaimer: This article is provided for general information only and does not constitute legal or technical advice. The author is not a legal professional. Organisations should seek independent legal counsel and technical advice before relying on, or acting upon, any of the points discussed.
2025-08-04
A deeper look into Muddled Libra’s modular team structure, AI-enabled deception, ransomware partnerships, and the defences organisations need now.
Image credit: Created for TheCIO.uk by ChatGPT
In July 2025, Unit 42 published its landmark assessment entitled Muddled Libra Threat Assessment: Further Reaching, Faster, More Impactful. It captured a dramatic evolution of the adversary formerly known to many as Scattered Spider or UNC3944. Organisations across government, retail, insurance and aviation have now been forced to confront a threat actor with unprecedented speed, agility and destructive potential. This article brings missing intelligence into the conversation by profiling the modular structure of the threat actor, their partnerships with ransomware-as-a-service providers, their advanced use of artificial intelligence and voice deepfakes, and the critical set of recommended defensive controls.
The aim here is to move beyond awareness to action. IT and business leaders must see Muddled Libra not as a distant menace but as a sophisticated adversary that threatens infrastructure, core operations and digital resilience.
Unit 42 describes Muddled Libra as operating in a decentralised, modular fashion. Rather than a monolithic gang, the adversary is made up of specialised sub‑teams that function like small enterprises. One cell may focus on reconnaissance and victim profiling, another on call‑centre based vishing, yet another on endpoint lateral movement or ransomware deployment. This modular structure creates resilience: if one part of the network is disrupted, others continue operations unabated. It also enables a playbook operating at scale. Arrests in the UK in mid‑2024 reduced capacity temporarily, but the structure rebounded swiftly under new leadership. The law enforcement wins served as deterrence and capacity degradation, not elimination.
Another critical accelerant in Muddled Libra’s evolution has been formal partnerships with a variety of RaaS providers. Unit 42 identifies DragonForce (also known as Slippery Scorpius) as a key partner since April 2025, but the group also contracts with ALPHV (Ambitious Scorpius), Qilin (Spikey Scorpius), Play (Fiddling Scorpius), Akira (Howling Scorpius), and RansomHub (Spoiled Scorpius). Through these alliances, Muddled Libra has shifted beyond purely encrypting data to executing destruction of virtual infrastructure through legitimate management tools. In one documented case, VMs were deleted at scale using ESXi tooling, rendering backups ineffective and demanding ransom for restoration of cloud assets.
This evolution transforms the nature of extortion. Victims can no longer rely solely on backup restoration when infrastructure has been directly obliterated. The threat now extends into critical SaaS operations and cloud‑native environments.
Perhaps the most unsettling development is Muddled Libra’s adoption of artificial intelligence and deepfake voice technology to manipulate helpdesk staff and victims in real time. Unit 42 confirms that the group now generates voice clones using mere seconds of publicly available audio, such as from media interviews or earnings calls, to engineer vishing calls that sound convincingly like executives or IT staff. This capability converts the human firewall into an unreliable defence. Even vigilant teams cannot reliably distinguish synthetic voices from authentic ones.
Moreover, Muddled Libra leverages AI‑driven tools to automate reconnaissance. Large language models produce impeccably written phishing lures tailored to individuals based on scraped public profiles. Algorithms assemble hierarchical maps of target organisations, uncovering help desk escalation paths and authentication fallback vectors. As one expert summarised, layering in AI can elevate the number of victims from hundreds to tens of thousands in a single campaign. Such automation makes each intrusion dramatically more scalable.
With this tech‑augmented operational model, traditional training and awareness are not enough. The defence must be technical, procedural and behavioural, matching attacker sophistication rather than relying on hope that staff will recognise deception.
In 2025, Unit 42 tracked intrusion operations in four main sectors between January and July: government, retail, insurance and aviation. The group executed sequential campaigns across UK and US retailers in spring, then pivoted to US insurance firms in June, and by mid‑July was striking aviation clients both in the United Kingdom and North America. This organisational flexibility underlines their ability to shift campaign focus quickly while maintaining a consistent tactic deck centred on help‑desk vishing and credential resets.
Retail giants such as Marks & Spencer and Harrods in the UK were confirmed victims in attacks that led to data theft and ransom demands. Meanwhile in the insurance space, breaches such as that at Aflac emphasise that financial services are now firmly within their crosshairs. Aviation organisations including WestJet and Hawaiian Airlines publicly reported disruptions linked to Scattered Spider associated activity.
Muddled Libra’s tradecraft is deliberately designed to execute quickly, often before detection and response teams can react. In incidents intercepted by Unit 42, the average time from initial access to containment was just one day, eight hours and forty-three minutes. In some cases, the adversary escalated privileges to domain administrator within forty minutes of first contact. These operations typically commenced with vishing of a help desk agent, password and MFA reset, installation of legitimate remote management tooling, credential harvesting, lateral movement and eventual extortion deployment.
Such speed leaves little margin for error on the defensive side. Without cloud‑native monitoring and rapid conditional access enforcement, malicious activity can succeed before it is even observed.
Despite their modern sheen, Muddled Libra relies heavily on living off the land. They prefer to use existing legitimate remote monitoring and management (RMM) tools in target environments. Recorded tactics include the manipulation of remote tools such as AnyDesk, RustDesk, ConnectWise, Tailscale, Pulseway and more. They also abuse hypervisors, cloud management platforms and even EDR and endpoint agents to embed persistence and escalate access. When credentials are compromised, they harvest password vault data via NTDS.dit or Mimikatz, then leverage Microsoft 365 and SharePoint for internal reconnaissance and data exfiltration.
This strategic avoidance of custom malware enhances stealth, reduces detection probability, and expedites exploitation of systems already trusted by enterprise security.
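One practical counter is to inventory the remote-access tooling actually running across your estate and challenge anything unsanctioned. A minimal sketch using psutil, with an indicative watchlist drawn from the tools named above; the same products ship under several binary names, so treat matches as prompts for investigation rather than verdicts.

```python
# Defensive sketch: list running processes that match a watchlist of RMM tools.
# Requires psutil (pip install psutil). Names are indicative, not exhaustive.
import psutil

RMM_WATCHLIST = {"anydesk", "rustdesk", "connectwise", "tailscale", "pulseway"}

hits = []
for proc in psutil.process_iter(["pid", "name"]):
    name = (proc.info["name"] or "").lower()
    if any(tool in name for tool in RMM_WATCHLIST):
        hits.append((proc.info["pid"], proc.info["name"]))

if hits:
    print("Remote-access tooling present; verify each instance is sanctioned:")
    for pid, name in hits:
        print(f"  pid {pid}: {name}")
else:
    print("No watchlisted RMM tools running")
```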
The Unit 42 report emphasises the need for a cohesive defensive strategy built around modern cloud identity, behavioural analytics, organisational readiness and process resilience.
Muddled Libra’s rise demonstrates that cybersecurity is no longer a technical domain alone. When organisations are hit with destructive ransomware operations that shortcut traditional recovery through infrastructure deletion, financial cost, litigation risk and trust damage multiply in severity. Public sector victims face service interruption; private sector leaders suffer stakeholder fallout. Cyber risk has therefore become a boardroom issue, not merely an IT one.
According to Unit 42, Muddled Libra will continue evolving along its current trajectory. Its modular structure means that even with arrests or takedown actions, new cells emerge quickly. The group’s cloud‑first mindset, coupled with RaaS partnerships, ensures it will refine its destructive capabilities over time. Organisations without visibility and control over cloud native infrastructure are vulnerable to escalated data theft, extortion and infrastructure denial.
At the same time, the automation enabled by AI means campaigns will become increasingly multi‑vector and global. Defenders should anticipate voice‑based social engineering across countries, languages and time zones. Standard awareness training will fail: adversaries already speak like your executives and know your org chart. Detection must move to machine speed.
Finally, information‑sharing efforts between public and private sectors remain vital.
For UK IT and business leaders, the imperative is clear. Now is the time to adopt proactive, coordinated strategies across identity, cloud access, detection capabilities and organisational readiness.
What’s your take? Are your helpdesk, access policies and exec team ready to counter real-time AI-driven voice phishing?
Let’s share the good, the bad and the messy middle of defending identity, trust and cloud-first infrastructure before the adversaries redefine our risk thresholds.
Disclaimer: This article is provided for general information only and does not constitute legal or technical advice. The author is not a legal professional. Organisations should seek independent legal counsel and technical advice before relying on, or acting upon, any of the points discussed.
2025-07-28
Phishing remains the number one threat vector for organisations. Here's why user training still matters and what to do the moment someone clicks a malicious link.
Image credit: Created for TheCIO.uk by ChatGPT
Phishing remains the most persistent and damaging cyber threat facing organisations across the UK. Whether it comes through an unexpected email, a spoofed login page or a WhatsApp message purporting to be from IT, phishing succeeds not because of technical brilliance but because of human fallibility.
This makes phishing unique. Unlike a zero-day exploit or brute-force ransomware tool, phishing relies almost entirely on a person making a split-second decision to click. That decision can happen after a long day, a moment of distraction or out of misplaced trust in the message’s source. For security leaders, it creates the ultimate challenge: no control over the attacker’s timing and no guarantee of user behaviour.
The vulnerability is not a software bug. It’s a moment of inattention. It’s the absence of doubt when doubt is most needed. And because users are human, that vulnerability cannot be patched in a traditional sense. The risk has to be managed in a very different way.
The risk is real and escalating. In July 2025, the University of Hull was hit by a targeted phishing campaign that compromised 196 accounts in a matter of hours. The attackers used those accounts to send further scam messages and demand money from recipients. The university’s response was fast, with accounts blocked and systems contained, but the damage to operational continuity and trust was significant. Email and Microsoft Teams access was suspended for affected users, impacting daily workflows and teaching schedules.
The Hull incident serves as a clear reminder: phishing is not just a risk to individual credentials, it’s a threat to business continuity. Once attackers are inside a network, even for a short period, they can exploit trust, move laterally, and create reputational fallout that persists long after access has been restored.
That’s where phishing training earns its place. When done well, it raises baseline awareness, increases the chances of suspicious links being flagged and reduces the time between compromise and detection. But let’s be clear: training is not a firewall. It doesn’t prevent incidents. It buys you time. And when every minute counts after a compromise, that time is everything.
The first and most important goal of phishing training is to build muscle memory. Repetition and variation are key. Users need to be exposed to different types of messages: fake invoices, fake HR updates, fake calendar invites. Each scenario builds recognition and instinct. Over time, patterns emerge and users begin to question the unexpected.
Good training is not just about information. It is about simulation. Clicking on a link in a test environment is not a failure, it’s a teaching moment. And the more realistic those moments are, the more confident users become in the real world.
Equally important is building a culture where users aren’t punished for reporting clicks. If someone realises they’ve clicked a bad link, the clock starts ticking. The longer they stay quiet out of fear, the more damage an attacker can do. The best security cultures reward reporting. They treat a reported click as a win, because the alternative is silence.
This cultural shift is subtle but powerful. It means framing security as a team effort rather than a gatekeeping exercise. It means encouraging questions, not just issuing mandates. And it means celebrating when a user catches a phish, even if they did so after initially falling for it.
So what should happen when someone clicks?
First, isolate the user’s device. If your Endpoint Detection and Response (EDR) tool hasn’t already flagged the event, the IT or security team should disconnect the machine from the network to prevent further command-and-control traffic.
Next, identify what was accessed. Was it just a link? Did it request credentials? Was malware downloaded? Pull browser logs, check DNS traffic and review any log-in attempts from new IP addresses or devices.
Reset credentials and invalidate active sessions. If the phishing attempt was credential-harvesting, assume the password is already in the wrong hands. For organisations using Single Sign-On (SSO), this step is critical. Change the password, kill all sessions and monitor for reauthentication.
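For Microsoft 365 estates, the session-kill step can be scripted against Microsoft Graph's `revokeSignInSessions` action. A minimal sketch, assuming you have already acquired a suitably permissioned access token (via MSAL, for instance); the user list is illustrative:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<token acquired via MSAL with a permission such as User.ReadWrite.All>"  # assumption

# Illustrative list of accounts flagged after the phishing click
COMPROMISED_USERS = ["first.last@example.ac.uk"]

def revoke_sessions(upn: str) -> bool:
    """Invalidate refresh tokens and session cookies for one user, forcing re-authentication."""
    resp = requests.post(
        f"{GRAPH}/users/{upn}/revokeSignInSessions",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    return resp.ok

for user in COMPROMISED_USERS:
    status = "sessions revoked" if revoke_sessions(user) else "revocation FAILED"
    print(f"{user}: {status}")
```

Revocation alone is not enough: pair it with a forced password reset, since an attacker holding harvested credentials can simply sign in again.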
Let staff know what happened, what to look out for and what changes — if any — will be rolled out in response. The worst thing you can do is stay silent. People need context to stay vigilant.
The University of Hull, to its credit, handled the communication aspect better than most. Support centres were set up for in-person assistance. Affected users were updated through alternate channels. IT teams responded quickly to restore services. But even with a fast response, the fact that nearly 200 accounts were compromised shows how quickly phishing attacks can escalate inside an organisation without widespread vigilance.
A phishing link doesn’t need to deliver ransomware to cause chaos. Disrupted access to systems, broken trust in communications and the potential for follow-up fraud all create cascading effects. The downtime is real. The reputational damage is real. And the opportunity cost, from lost teaching and delayed research to confused students, can linger for weeks.
Good phishing response is not about blame. It’s about speed, transparency and culture. When users are trained to spot red flags and know exactly what to do after clicking, the risk drops dramatically.
To build organisational resilience, leaders need to:

- run realistic, varied phishing simulations, not one-off annual training
- reward reporting, so a clicked link is raised in minutes rather than hidden for days
- rehearse the containment and credential-reset steps above until they are routine
- communicate openly after incidents, so every near miss strengthens the culture
There is no such thing as a click-proof organisation. But there is such a thing as a resilient one. And resilience starts with preparation.
Has your organisation ever had to deal with a phishing click in real time? What saved the day, or what fell short?
Let’s share the good, the bad and the messy middle. The more openly we talk about failures and recoveries, the stronger our collective defences become.
2025-07-27
BBC Panorama's "Fighting Cyber Criminals" delivers a sobering reminder that cybercrime is no longer hypothetical – it's operational, scalable and happening daily. The attacks are sharper, the damage harder to reverse, and the response often muddled.
Image credit: Created for TheCIO.uk by ChatGPT
BBC Panorama’s latest investigation doesn’t so much break news as expose what most IT leaders already know. The attacks are already happening. They don’t come with warnings, or countdown clocks. They begin with a link, a guessable password or a cloned login page. The programme, Fighting Cyber Criminals, aired this month and laid bare the scale of what’s unfolding behind the firewalls of councils, companies and public utilities across Britain.
The documentary takes viewers behind the curtain at the National Cyber Security Centre, Britain’s digital front line. Inside the NCSC’s threat response room, the backdrop is one of ceaseless vigilance. Analysts comb through data, link indicators of compromise, and chase malicious IP trails across continents. It’s a glimpse into the reality: ransomware is a 24-hour industry. The UK now sees at least one confirmed ransomware attack per day.
Those are just the ones we hear about.
Panorama focuses on a case that hardly made headlines – the quiet collapse of KNP Logistics. A 158-year-old transport firm in Northamptonshire, it was crippled by ransomware in late 2023. It started with a password. It ended with 700 jobs lost, a shuttered fleet and a company left with no operational control. The attackers didn’t need to break in. They walked through the front door.
The ransom? Between £3 million and £5 million, depending on who you ask. The company never recovered.
Panorama doesn’t sensationalise. It doesn’t need to. The real-world footage is powerful because it mirrors what so many CIOs see every day: users clicking phishing links, flat MFA coverage, ageing systems wrapped in modern branding, and a boardroom that still thinks security is IT’s problem.
The episode turns its lens to South Staffordshire Water, where attackers demanded a ransom under threat of tampering with supply infrastructure. The utility refused to pay. The incident prompted an overhaul of its cyber controls, but the story might have ended very differently.
What stood out most from the episode wasn’t the NCSC’s posture or the NCA’s readiness. It was the disconnect.
Despite these daily incidents, most of the UK’s public and private boards still don’t treat cybersecurity as a strategic priority. For many, it remains a compliance box, something that gets mentioned after the finance slides or buried in risk registers with generic language like "data breach" or "IT outage".
The numbers alone are frightening. The average ransom demand for a mid-sized UK organisation is now estimated at £4 million. And that’s before calculating downtime, data loss, remediation costs, reputational damage and legal exposure.
And yet – we still see councils running decade-old on-prem servers. We still see default admin accounts, expired SSL certificates, flat Active Directory forests, and backup systems that haven’t been tested in real-world failover mode since the day they were installed.
Let me be blunt. Too many executives are betting their business on hope. Hope that it won’t be them. Hope that insurance will cover it. Hope that someone in IT has already sorted it.
Hope is not a strategy. It never was.
As Head of Technical Operations and Cyber, I’ve had these conversations at every level. The CFO who asks whether we “really need MFA for everyone.” The project sponsor who “needs that exception just this once.” The line manager who thinks cyber awareness training is optional. The legacy supplier who tells us, flat out, that they don’t support secure API integration.
Every one of these moments is a crack in the wall. A way in.
Panorama reminds us that attackers don’t need to invent new exploits. They just need to find the people and processes that gave up defending the old ones.
And that’s the real story here. We’re not failing because the threat is evolving too quickly. We’re failing because we haven’t done the basics. And because we’re still treating cybersecurity as a cost centre instead of a resilience function.
The solution is painfully clear, but rarely easy to implement: enforce the fundamentals. Patch aggressively. Remove legacy systems. Insist on MFA, even when it’s inconvenient. Run red team exercises. Encrypt everything. Validate your backups. Drill your incident response like it’s a fire evacuation.
And most of all – educate your people.
The most powerful firewall in the world won’t stop someone from wiring £80,000 to a fraudster if they believe the CEO sent the email.
Boards need to get this. Not in theory. Not in bullet points. In blood, sweat and budget.
Panorama did an excellent job of showing what happens when that doesn’t occur. But the episode should be shown in every council, every NHS trust, every mid-sized manufacturer with an exposed RDP port and an old insurance policy.
The biggest risk to British organisations right now isn’t China, Russia or some faceless hacking syndicate. It’s the belief that we are too small to matter, or too old to be vulnerable.
You’re not.
They’re coming for everyone.
What’s your take? Have we normalised cyber incidents as the cost of doing digital? Or is there still time to change the culture before the next wave hits?
Let’s share the good, the bad and the messy middle. Who’s genuinely ready – and who’s still hoping it won’t be them?
About the Author
Ben Meyer is a senior IT and cybersecurity leader with a decade of experience spanning scale‑ups, global enterprises and the public sector (contract). He specialises in modernising IT operations, implementing cloud platforms, and embedding security and compliance frameworks aligned with ISO 27001 and GDPR. As the founder of Meyer IT Limited, Ben partners with startups, charities and high-growth organisations to deliver pragmatic, people‑centred technology leadership.
2025-07-26
Nearly 200 University of Hull accounts were blocked after a phishing campaign targeted students and staff with scam emails demanding money.
Image credit: Created for TheCIO.uk by ChatGPT
The University of Hull has confirmed that nearly 200 user accounts were compromised in a phishing email campaign earlier this week, prompting a swift internal response and temporary service disruption for staff and students.
The breach, which was first detected on Wednesday 23 July, saw attackers successfully exploit email accounts across the university’s internal systems. According to the university’s official statement, 196 users were affected by the scam, which involved malicious messages designed to appear as legitimate communications. Once the attackers had access, they used those accounts to send further fraudulent messages demanding money.
Hull’s IT security team worked with its third-party cybersecurity provider to contain the incident. Affected accounts were blocked quickly, cutting off the ability of the attackers to spread their phishing campaign any further. However, the swift action also meant that dozens of staff and students lost access to essential services such as Microsoft Teams and email while the accounts were being assessed and restored.
In a statement issued via the university’s website, officials reassured the wider campus community that the breach had been contained and that no widespread system failure had occurred. They emphasised that the university remained operational and that student and staff support teams were now working one-on-one to restore access and ensure that victims of the scam were supported. Those unable to log into their usual services were advised to present identification at the university’s IT help points in person.
The BBC reports that the attack appears to have been financially motivated, with scammers seeking direct payments through fake correspondence. University officials have not disclosed whether any money was actually transferred or whether police have become involved in the investigation. The attack is being treated as an isolated incident but sits within a broader context of growing cyberattacks targeting UK universities.
Institutions in the higher education sector continue to find themselves in the crosshairs of cybercriminals. Universities manage sprawling networks of user accounts, often with inconsistent security postures across departments. Students, in particular, can be susceptible to social engineering attacks due to their frequent transitions between systems and high levels of trust in institutional communications.
The incident at Hull follows a familiar pattern. Attackers typically send a small number of highly targeted emails that appear to come from university authorities, IT departments or financial offices. Once a single user clicks a link or replies to a message, the attacker gains a foothold inside the institution’s ecosystem. From there, access can be used to harvest data, move laterally through systems or send further phishing emails from within the network to boost credibility.
What differentiates the Hull case is the speed with which the university detected the breach and moved to isolate affected accounts. In contrast to several recent attacks across the UK higher education sector, the spread appears to have been curtailed before systemic harm could take place. Still, the fact that nearly 200 users were compromised before the breach was contained raises questions about how the initial emails bypassed existing security controls.
Universities have increasingly adopted multi-factor authentication, anti-phishing training and behaviour-based detection systems, but attackers have become more sophisticated in their tactics. In some cases, fake messages now include institution-specific language and signatures, making them harder to distinguish from legitimate communication.
A spokesperson for the university confirmed that wellbeing support was being offered to affected users. Students were directed to the Hubble help centre on campus, while staff were offered support through internal health and wellbeing resources. The university also provided a dedicated phone number for IT assistance and pledged to follow up directly with those whose access had been blocked.
This breach is unlikely to be the last of its kind. As universities expand their reliance on cloud-based services, third-party platforms and hybrid working environments, their attack surfaces will only grow. Cybersecurity experts continue to warn that without consistent investment in user education, threat intelligence sharing and incident response planning, the sector remains exposed.
For the University of Hull, the event serves as both a warning and a vindication. The warning lies in the sheer speed and reach of a targeted phishing campaign, able to penetrate nearly 200 accounts in one day. The vindication comes in the form of containment and response, which, according to available evidence, was fast enough to prevent broader damage.
No information has yet been released regarding the origin of the phishing campaign or whether law enforcement agencies have been asked to assist. The university said it would provide updates to staff and students directly via alternative channels while account access is gradually restored.
As of the time of writing, full service for the majority of users had yet to be reinstated. For those impacted, the disruption offers a stark reminder of how rapidly trust can be eroded when institutions become the targets of well-timed digital attacks.
What’s your take? Should UK universities be required to publish details of every phishing attempt that leads to account compromise?
Let’s share the good, the bad and the messy middle. Has your institution faced something similar? What worked, what failed, and what would you do differently next time?
Disclaimer: This article is provided for general information only and does not constitute legal advice. The author is not a legal professional. Organisations should seek independent legal counsel before relying on, or acting upon, any of the points discussed.
2025-07-26
The UK government will prohibit public sector organisations and critical infrastructure operators from paying ransomware demands. The policy aims to weaken the cybercriminal business model and improve national cyber resilience. But for it to work, reporting, funding and public sector readiness must evolve in parallel.
Image credit: Created for TheCIO.uk by ChatGPT
The UK Government has announced a major new measure to counter the growing ransomware threat: a ban on public sector bodies and critical infrastructure operators paying cyber ransoms. The aim is to disrupt the economic model behind these attacks and shift national cyber strategy from reactive recovery to active deterrence.
The announcement confirms that public sector organisations including NHS trusts, local authorities, schools, and operators of critical infrastructure will no longer be allowed to pay ransoms under any circumstances. For private organisations, the policy introduces a mandatory pre-notification requirement before any payment is made.
Security Minister Dan Jarvis described the change as part of a wider effort to “smash the cyber criminal business model”. The move is being widely interpreted as a turning point in UK cyber policy, and a challenge to organisational leaders to upgrade resilience.
The UK’s public services have suffered high-profile ransomware incidents over the past decade. The 2017 WannaCry attack severely disrupted NHS systems. More recently, ransomware-linked disruption has been reported in hospital pathology, library services, and across the private sector, including at major retailers such as M&S and Co-op.
Public support for tougher action has grown. A consultation held earlier this year found that nearly three-quarters of respondents supported banning public bodies from paying ransom demands. That public backing has given ministers cover for a strong stance.
The government’s messaging focuses on resilience, sovereignty and justice. But turning that ambition into operational reality will take more than legislation.
The proposals sit within the government’s broader cyber strategy. The Cyber Resilience Bill, expected later this year, will give enforcement agencies the power to fine organisations that fail to patch vulnerabilities or that neglect risk assessments.
Ransomware is not just a technical threat. It is an economic one. Cybercriminal groups often target public services precisely because they know that the stakes are high and that organisations are likely to pay to resume operations quickly.
The UK Government is trying to do what other governments, including the United States, have hesitated to do: remove the financial incentive. If attackers believe they are unlikely to get paid, they may move on to less impactful strategies.
But this only works if the system behind public services can withstand the impact of an attack. That means recovery, not ransom, must become the standard response.
If public bodies can no longer pay, there is no negotiation. That increases the risk for attackers and reduces their likelihood of success. Over time, the hope is that this discourages targeting of public systems altogether.
The policy mandates better backups, offline recovery systems, and tested incident plans. This could strengthen operational resilience in areas that have historically under-invested in cybersecurity.
Forcing private sector organisations to notify before making payments ensures that intelligence is captured, patterns are recognised and regulators can intervene where necessary — particularly where sanctioned actors may be involved.
While few countries have formal bans, many are now discouraging ransomware payments and increasing enforcement against criminal networks. The UK’s move positions it as a leader in this space.
If systems go down and lives are at risk, as can happen in healthcare or emergency services, leaders may feel forced to pay despite the law. That puts frontline staff in an impossible position.
Small councils, academies, and NHS trusts may lack the funding, skills or capacity to rebuild systems without external help. If funding and support do not accompany the ban, the risk of prolonged disruption rises.
If encryption-based attacks no longer work, attackers may shift to stealing data and threatening to publish it. This avoids the need for system disruption and still creates leverage, particularly in politically sensitive or high-trust environments.
If pre-payment reporting is too complex or legally risky, private firms may bypass it entirely or turn to unregulated intermediaries. Clear, fast, confidential routes are essential.
This is not just a policy issue. It is an operational one. Leaders in the public and regulated private sectors should now assume that:

- paying a ransom is off the table, whatever the operational pressure
- attackers know this, and will shift towards data theft and extortion
- recovery capability, not negotiation, is the only fallback

Steps to take now:

- test backups and offline recovery against a full-loss scenario
- rehearse incident plans with the people who would actually run them
- map critical services and their dependencies before an attack forces the issue
- agree reporting routes with regulators and insurers in advance
This policy will not eliminate ransomware. But it does provide a basis for a more mature response, one that refuses to treat criminal threats as a service disruption cost.
Ultimately, this is a bet. A bet that by removing the ransom option, the UK can both reduce attacks and push the public sector into a more resilient posture.
That bet will only pay off if organisations are supported. If contingency plans are tested. If sector-specific recovery frameworks exist. And if the burden of compliance is matched with practical help.
Otherwise, we risk a policy that is principled, but painful.
Banning ransomware payments is a bold move. It will frustrate attackers. It may frustrate some in the public sector too.
But it sets a direction: one where public data, public services, and public trust are not negotiable.
In the years ahead, we will look back at this moment as the point the UK said enough.
Let us make sure the system is ready to follow through.
2025-07-25
A new partnership between OpenAI and the UK Government marks a major moment in the role of AI in the public sector. But as the Memorandum of Understanding moves from statement to strategy, the focus must shift to capability, safeguards and long-term public value.
Image credit: Created for TheCIO.uk by ChatGPT
The announcement that the UK Government has signed a Memorandum of Understanding with OpenAI is more than just another story about artificial intelligence. It signals something bigger: a deliberate shift in how the state approaches AI adoption, infrastructure, and delivery at scale.
The Memorandum suggests collaboration across key areas including national infrastructure, service delivery, security research, and skills. It mentions the possibility of shared data environments. It commits to safeguards. It outlines an intention to invest in AI capabilities in the UK, including through the expansion of OpenAI’s presence.
This is a moment of strategic alignment between government and one of the world’s most influential AI companies.
But the benefits will only be realised if this agreement becomes a blueprint for capability and service transformation, not just a brand alliance or a procurement channel.
Though the MoU is not legally binding, it does set out a number of shared goals between the Department for Science, Innovation and Technology (DSIT) and OpenAI:

- expanding AI infrastructure in the UK, including OpenAI's presence here
- exploring AI deployment across public service delivery
- collaborating on security research and safeguards
- building AI skills and shared data environments
The document reflects a growing recognition that government cannot sit on the sidelines as AI evolves. But it also carries risks, especially where the public interest and private incentives diverge.
So far, the official messaging has focused on the promise: productivity, innovation, job creation and research acceleration.
That is all possible. But none of it is automatic.
The UK lacks dedicated public infrastructure for AI. Existing compute environments, training resources and secure sandboxes are limited. If this agreement accelerates investment in UK-based data centres, research partnerships and secure experimentation zones, it could move the UK from theory to practice much faster.
This would also reduce dependence on foreign compute assets, an important consideration for digital sovereignty and long-term resilience.
AI can help improve service delivery if deployed with care. For instance:

- triaging and summarising casework so staff reach decisions faster
- drafting first-pass responses to routine queries, with human review
- helping teams search policy and guidance in plain language
The agreement positions AI as a delivery asset, not just a policy topic, and that matters.
OpenAI’s expansion in London is welcome, but more important is what comes with it... data scientists, engineers, legal experts and infrastructure architects who can engage with government, academia and regulators.
There is potential here to seed a new generation of public AI talent, particularly if secondments, shared projects or co-designed tools are on the table.
The phrase “information sharing” in the MoU is doing a lot of work. It could mean aggregated, non-sensitive insights. It could also mean direct access to some of the most sensitive public datasets in the country.
That includes health records, benefits data, education results and criminal justice documentation. These datasets are powerful, and valuable.
If shared without clear legal and ethical guardrails, they risk being used to train commercial models without public consent or accountability.
Transparency is not a nice-to-have. It must be foundational. That includes data protection assessments, external review and a right for the public to understand and challenge use.
OpenAI is not a public utility. It is a commercial actor, with private investors, global priorities and a competitive roadmap.
This agreement must not become a de facto procurement pipeline. It should be a mechanism for joint work on standards, tooling and experimentation, not a commitment to embed a single vendor across the state.
Public sector technology should be plural, open and accountable. Any deployment of OpenAI models must be justified against those values, not simply assumed based on the MoU.
If departments use AI to bolt automation onto outdated workflows, the result will be more confusion, not less. Faster decisions, but not necessarily fairer ones. Personalised content that reinforces structural inequalities.
The real opportunity lies in rethinking services around AI, not using it to paper over structural cracks.
This is not a passive moment. Leaders across digital, data, operations and policy have a short window to ensure this agreement delivers value, and avoids becoming a missed opportunity.
Set clear expectations for AI in your service area. What are the outcomes? What should not be automated? What role does human judgment play? Get ahead of vendor pitches with your own public value tests.
This deal should not lead to external dependency. Build in-house teams who can evaluate models, test prompts, design safeguards and write clear service documentation.
If AI is embedded into a service, the user must know. There must be clear ways to opt out, challenge decisions, and speak to a person. Explainability is not theoretical, it must be operational.
Make this public. Pilot carefully. Publish results. Share learnings across departments. A single team cannot deliver safe, inclusive AI alone. It has to be a community of best practice.
This agreement could be a turning point. It could show how the UK can build services that are faster, fairer and more personal. It could place the UK at the forefront of safe, democratic AI development.
But only if we treat this not as an endpoint, but a starting point. Not as a transaction, but a long-term process. Not as a shortcut, but a structured test of capability.
This is not a partnership between equals. It is a partnership between public interest and private capability. To keep that balance right, the public sector must lead with confidence, clarity and care.
We now have the signal. The delivery comes next.
Disclaimer: This article is provided for general information only and does not constitute legal advice. The author is not a legal professional. Organisations should seek independent legal counsel before relying on, or acting upon, any of the points discussed.
2025-07-24
Why cybercriminals target charities, and how small organisations can reduce risk without breaking the bank.
Image credit: Created for TheCIO.uk by ChatGPT
In the cybercrime ecosystem, attackers don’t just chase value, they chase vulnerability.
Banks and fintechs are fortified, monitored, and resilient. Charities? Often not. And that makes them attractive for a different reason: they’re seen as easy wins.
Charities are small, underfunded, and reliant on trust. They work with sensitive data but lack technical defences. Many operate with thin IT support and ageing infrastructure. In the eyes of cybercriminals, that’s the perfect recipe.
Most charities don’t have:

- a dedicated security team, or often any in-house IT at all
- enterprise-grade monitoring or incident response
- budget set aside for patching, audits or testing
People open emails from charities. They click links. They want to help. That trust makes phishing and impersonation attacks far more effective.
Charities collect:

- donor names, contact details and payment information
- financial and gift records
- sensitive information about the vulnerable people they support
This is the kind of data attackers can sell or exploit.
Volunteers often use personal devices. Cyber hygiene varies. There's rarely formal onboarding, MFA enforcement, or remote device management.
The cost of a breach can go far beyond the financial:

- lost donor trust and cancelled giving
- regulatory scrutiny from the ICO and the Charity Commission
- disrupted services for the people who rely on you
In 2023, 24% of UK charities reported a cyber breach or attack. The larger the charity, the more likely it was hit.
Most attacks aren’t advanced, they succeed because the basics are missing. Here’s how charities can become much harder targets, using free or low-cost measures.
Cost: Free
Cybersecurity is everyone’s responsibility.
Start with one clear message per month. Keep it practical and human.
Cost: Free
MFA is one of the most effective defences available.
Enable it on:

- email accounts in Microsoft 365 or Google Workspace
- admin and finance accounts first
- any cloud service that holds donor or beneficiary data
How-to links:
👉 Enable MFA in Microsoft 365
👉 Enable MFA in Google Workspace
Cost: Free
Unpatched software is a top attack vector.
Learn more:
👉 Mitigating Malware – NCSC
Cost: Low
Guide:
👉 NCSC backup checklist
Cost: Free
Too much access = too much risk.
Cost: Free–Low
Recommended:
Cost: Free
You don’t need reams of documentation. Focus on:

- acceptable use of email, devices and data
- password and MFA expectations
- how to report an incident, and to whom
Templates available via NCSC:
👉 Policy templates for charities
Cost: ~£300+
Cyber Essentials is a UK government-backed scheme that helps small orgs:

- put five core technical controls in place
- protect against the most common attacks
- demonstrate good practice to funders, partners and insurers
Learn more:
👉 Cyber Essentials
Some regions offer funding support—check with your local authority or grant body.
Cybercriminals aren’t just targeting banks, they’re looking for soft spots. And right now, too many charities fit that profile.
But cybersecurity doesn’t need to be expensive or complex. With free resources and a bit of focus, you can dramatically reduce your risk, and protect the data, donors, and communities that rely on you.
You don’t need to be perfect. Just harder to breach than yesterday.
2025-07-21
"KNP Logistics, one of the UK’s oldest haulage firms, collapsed after hackers exploited a single weak password and missing MFA. The incident is a stark reminder for IT leaders and business owners: basic cyber hygiene is still the frontline defence."
Image credit: Created for TheCIO.uk by ChatGPT
Sometimes cybersecurity failures aren’t about cutting-edge malware or zero-day exploits. They’re the result of old-school mistakes, like a single weak password, with catastrophic consequences. That’s exactly what happened to KNP Logistics, a UK haulage firm founded in 1865.
Last year, the Akira ransomware gang, believed to operate from Russia, broke into KNP by brute-forcing a guessable password. With multi-factor authentication disabled, they walked in. Once inside, they:

- encrypted core systems and data across the business
- locked staff out of everything needed to run the fleet
- demanded a ransom the firm could not hope to pay
Within weeks, the firm entered administration. The result? 730 people lost jobs, a fleet of 350 trucks was grounded, and 158 years of business history vanished.
Here’s the brutal truth: ransomware gangs target companies like yours. Not because you’re rich, but because your defences are porous. And often, that porosity comes from the simplest vulnerabilities:

- weak, reused or guessable passwords
- MFA that exists but was never enforced
- remote access left exposed to the internet
Even if your business day-to-day runs smoothly, events like this rarely come out of nowhere. They're the result of layered missteps, ignored basics that become fatal when stitched together.
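The arithmetic behind that single weak password is worth seeing. A back-of-the-envelope sketch; the guess rate is an assumption for illustration, and dictionary attacks make guessable passwords fall far faster than this worst case:

```python
GUESSES_PER_SECOND = 1e10  # assumed offline cracking rate on commodity GPU rigs

def worst_case_days(charset_size: int, length: int) -> float:
    """Days to exhaust every combination of a truly random password."""
    return charset_size ** length / GUESSES_PER_SECOND / 86_400

print(f"8 lowercase letters:    {worst_case_days(26, 8):.4f} days")    # gone in seconds
print(f"12 mixed-case + digits: {worst_case_days(62, 12):,.0f} days")  # millennia
# Real attackers try wordlists first, so a *guessable* password falls
# almost instantly regardless of length. MFA breaks the chain either way.
```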
If you’re a business leader or senior IT decision maker, here’s your moment. Put these on the table with your IT and security teams:

- Is MFA enforced for every account, with no exceptions?
- How would we know if a password were being brute-forced right now?
- When did we last restore from backup, for real?
If you don’t have firm answers, it’s time to act.
All the tech in the world can’t fix human error. In KNP’s case, one reused password unravelled everything. Security culture isn’t about fear, it’s about habits and accountability:

- no shared or reused passwords, anywhere
- security prompts treated as part of the job, not an obstacle
- suspicious activity reported without fear of blame
These are small asks compared to losing millions—or your whole business.
Not all security improvements require big budgets. KNP could have been saved by enforcing existing tools: passwords and MFA. That’s discipline, not £s.
But it’s worth it. Because a few seconds of inconvenience is tiny compared to losing centuries of trust, staff livelihoods, and company valuation.
As Paul Abbott, a director at KNP, put it:
“What brought us down wasn’t a sophisticated hack, it was a simple human failing.”
If your next chat with IT buzzes with talk of “basic security stuff,” don’t tune it out. That’s not check-box noise. That’s your front door. Make sure it’s locked.
2025-07-20
Attackers are combining Microsoft Teams calls with Quick Assist to deploy malware and ransomware inside two hours. Here’s what every IT leader needs to know, and act on.
Image credit: Created for TheCIO.uk by ChatGPT
Attackers are calling staff directly via Microsoft Teams, posing as internal IT support. Once the conversation starts, they guide the target to open Quick Assist, Microsoft’s built-in remote support tool.
It sounds routine, a helping hand during a tricky moment. But in reality, it’s the start of a full compromise. Within the same session, attackers are launching PowerShell, dropping malware like Matanbuchus 3.0, and triggering Cobalt Strike or ransomware like Black Basta.
This isn’t theory. Microsoft, Morphisec and others have seen this playbook evolve rapidly, and copycats are on the rise.
The tactic isn’t new, but it’s been upgraded. Criminal groups now use subscription-based malware loaders, sell access on demand, and rehearse their delivery to slip past endpoint tools.
Quick Assist is signed by Microsoft, which often leads to misplaced trust. The app is genuine, but once an attacker convinces someone to read out a session code, it becomes a tunnel into the estate. Everything from keyboard access to command execution flows through it.
Microsoft Teams plays a key role. Many organisations leave federation open for ease of collaboration. Attackers exploit this by creating tenants named “IT-Support” or similar, then start calls that look and sound plausible, especially when paired with email noise, ticket references or even voice clones.
Morphisec timed one full compromise, from the initial Teams call to ransomware, at one hour and fifty-one minutes.
Targeting
Public profiles, leaked data and supplier lists offer everything needed to craft a convincing call.
Initial contact
The user gets a Teams message or voice call from “IT Support”, usually amid email noise or fake tickets.
Quick Assist session
A six-digit code is exchanged and access is granted. At this point, the attacker has full control.
Payload delivery
PowerShell pulls down a loader like Matanbuchus, which quietly prepares the next stage.
Privilege escalation
Tools like Cobalt Strike disable logs, extract credentials and spread internally.
Ransomware deployment
A ransomware package encrypts systems and exfiltrates data, all before security teams detect a breach.
Correlate Quick Assist and Teams activity
Look for Quick Assist Event ID 41002 within minutes of an external Teams call. This pairing should always raise a flag.
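In a SIEM, that pairing is a straightforward time-window join. A minimal sketch, assuming the Teams call records and Quick Assist session events have already been exported; the record format here is hypothetical:

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)  # flag Quick Assist starting soon after an external call

# Hypothetical pre-exported events: (timestamp, username)
external_teams_calls = [(datetime(2025, 7, 20, 10, 2), "j.smith")]
quick_assist_starts = [(datetime(2025, 7, 20, 10, 9), "j.smith")]  # e.g. Event ID 41002

def correlate(calls, sessions, window=WINDOW):
    """Yield users whose Quick Assist session began within `window`
    of receiving an external Teams call."""
    for call_ts, call_user in calls:
        for sess_ts, sess_user in sessions:
            if sess_user == call_user and timedelta(0) <= sess_ts - call_ts <= window:
                yield call_user, call_ts, sess_ts

for user, call_ts, sess_ts in correlate(external_teams_calls, quick_assist_starts):
    print(f"ALERT: {user} opened Quick Assist {sess_ts - call_ts} after an external call")
```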
Block outbound scripts during remote sessions
Any PowerShell execution to pastebins or URL shorteners during Quick Assist should be blocked or alerted.
Log all remote control sessions
Whether through video or keystroke capture, this gives vital context and deters insider risk.
Label external users in Teams
Highlighting external contacts disrupts social engineering and gives staff a prompt to pause.
Phase out Quick Assist
Move to Intune Remote Help, which includes RBAC, policy enforcement and session auditing. Microsoft itself now advises this.
Tighten federation controls
Limit Teams federation to a known allow-list. Disable anonymous joiners where possible.
Require call-back verification
No privilege reset or remote session should proceed without confirmation via a trusted number or device.
Run vishing simulations
Include Quick Assist prompts in phishing and vishing drills. Celebrate the people who say “no” and report it.
Invest in recovery, not just defence
Maintain clean, offline backups and rehearse business decision-making. A well-tested recovery limits the damage — and the ransom.
Quick Assist is a useful tool, but in the wrong hands, it becomes the attacker’s way in. The fix doesn’t start with new tech. It starts with policy, clarity and culture. Let’s give people the confidence to pause, verify and push back when something doesn’t feel right.
That’s how we stay ahead of the next “friendly” call.
How are your teams responding to suspicious calls today?
Talking points for senior management

- Do we restrict Teams federation, and would staff recognise an external “IT Support” caller?
- Have we moved from Quick Assist to a managed tool such as Intune Remote Help?
- Is call-back verification mandatory before any remote session or credential reset?
- When did we last run a vishing drill, and what did we learn from it?
2025-07-19
Large language models can invent facts... a risk that carries legal, compliance and reputational costs. Here’s how leaders can contain the damage.
Image credit: Created for TheCIO.uk by ChatGPT
Large language models (LLMs) now draft emails, write code and summarise contracts in seconds, yet they sometimes invent facts. These errors, known as hallucinations, are already landing in courtrooms and compliance reports. Understanding the stakes is now as important for non‑technical directors as it is for CIOs.
LLMs predict the next word in a sentence, not the truth. That means they can generate:

- citations to cases, papers or regulations that do not exist
- confident but wrong figures, dates and names
- plausible summaries of documents they have never seen
Research from Stanford’s Institute for Human‑Centered Artificial Intelligence (HAI) found legal‑specialist models hallucinate in roughly one answer in six. The team likens the issue to a sat‑nav that occasionally drops you in the wrong city – still useful, but you must check the road signs.
| How it bites | Real‑world cost | Quick defence |
| --- | --- | --- |
| Staff rely on bogus case law | Tribunal payout and staff distrust | Lawyer review before filing |
| Consultant memo cites fake regulation | Negligence claim and fee write‑off | Draft–approve workflow with SME check |
| Chatbot gives bad mortgage advice | FCA redress and fine | Guardrails and audit logs |
| Vendor API injects wrong data | SLA breach and reputational hit | Indemnity clause plus monitoring |
Insurance may soften the blow, but underwriters now ask for evidence of AI oversight before paying.
UK law firm TLT LLP warns that companies still owe a duty of care when customers rely on AI‑generated content, stressing that inaccurate outputs can breach FCA rules or contract warranties around “reasonable skill and care”. In professional services, misstatements can trigger negligence claims even when an AI drafted the error. High‑profile cases such as Mata v Avianca – where lawyers were sanctioned for filing citations invented by ChatGPT – illustrate the point.
Regulators are clear: businesses cannot hide behind a black box when mistakes harm consumers.
Hallucinations will not disappear soon – the creativity that makes LLMs powerful also makes them prone to fiction. Until verifiable AI arrives, businesses must invest in oversight – or budget for the consequences.
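What might the “draft–approve workflow” and “guardrails and audit logs” defences from the table look like in code? A minimal sketch; `generate_draft` stands in for whatever model your provider exposes, and the log format is illustrative:

```python
import json
from datetime import datetime, timezone

def generate_draft(prompt: str) -> str:
    """Stand-in for the LLM call; swap in your provider's client here."""
    return f"[model draft for: {prompt}]"

def reviewed_output(prompt: str, reviewer: str) -> str:
    """No AI text is released without a named human sign-off and an audit entry."""
    draft = generate_draft(prompt)
    print(draft)
    approved = input(f"{reviewer}, approve this draft? [y/N] ").strip().lower() == "y"
    # Append-only audit trail: who saw what, when, and what they decided
    with open("ai_audit.log", "a") as log:
        log.write(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "prompt": prompt,
            "reviewer": reviewer,
            "approved": approved,
        }) + "\n")
    if not approved:
        raise ValueError("Draft rejected; nothing released")
    return draft
```

The point is not the ten lines of Python; it is that every output has a named reviewer and a record that underwriters and regulators can inspect.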
Disclaimer: This article is provided for general information only and does not constitute legal advice. The author is not a legal professional. Organisations should seek independent legal counsel before relying on, or acting upon, any of the points discussed.
2025-07-18
Every new tool sparks fear of job losses, but the reality is always more nuanced. AI won’t replace people; it will reshape how we work. Here’s what leaders need to know.
When the first steam engines rattled to life in the 18th century, the world braced for the end of human labour. The same fear resurfaced when computers entered office blocks in the 1970s and when the internet began stitching the world together a generation later. Each time, prophets of doom declared that machines would put people out of work for good. Each time, they were wrong.
Now, we find ourselves here again, but this time the machines can write emails, draft code and even produce passable poetry. Artificial intelligence has captured boardroom agendas, media headlines and our collective imagination. Once again, the question resurfaces: will AI replace us?
The truth is both simpler and more complex. AI is just another tool. It is remarkable, yes, but it is still a tool. And like every tool before it, it won’t erase our jobs outright. Instead, it will transform them.
For leaders in technology and beyond, understanding this distinction is crucial. Because what matters most now is not whether AI will take our jobs, but how we adapt our roles to make the most of it.
There is a long tradition of mistrusting new tools. The Luddites famously smashed textile machinery because they saw it as a threat to their livelihoods. In the end, mechanisation didn’t kill textile work. It reshaped it, unlocking new industries, markets and skills that no one could have imagined from the clattering looms of Yorkshire.
AI is the loom of our time. It automates tasks we once thought required uniquely human traits: judgement, creativity, intuition. But if you look closely, what AI does is closer to prediction than true understanding. A large language model can draft an article (though I’d wager this one will still read better than ChatGPT’s version). A generative AI tool can spin up marketing copy or summarise meeting notes. These are useful outputs, but they still need context, oversight and, above all, a human to steer the ship.
In that sense, AI is not so different from the spreadsheet or the search engine. We once needed clerks to add up columns of numbers by hand. Spreadsheets didn’t eliminate finance jobs; they made them more strategic. Search engines didn’t get rid of librarians; they gave knowledge workers instant access to information that once took days to uncover.
The key shift is this: AI is best thought of not as a replacement for human workers, but as an augmentation. It gives us leverage. It frees us from repetitive drudgery so we can focus on higher-value tasks.
Take software development. Generative AI can write boilerplate code, suggest bug fixes and even generate test cases. But no CTO worth their salt will fire the entire dev team and hand the keys to a chatbot. Instead, good leaders will ask: what happens when my engineers spend less time debugging and more time designing better products? What new services can we build when mundane tasks take minutes instead of hours?
The same applies to marketing, HR, customer service and countless other functions. AI can draft job descriptions, write first-pass emails and handle routine queries. But people are still needed to define strategy, build relationships and make sense of the results.
It’s true that some roles will disappear. History shows us that when technology removes repetitive tasks, the jobs tied solely to those tasks fade away. Switchboard operators, typists, factory line workers; these roles have dwindled or vanished altogether.
Yet work itself did not shrink. Instead, it shifted to places where human skills (empathy, judgement, creativity) are indispensable. In the process, entirely new jobs emerged. Nobody was hiring social media managers or cloud architects thirty years ago. Entire industries, such as digital advertising, app development and e-commerce, were built on the backs of technologies that were once viewed as job killers.
AI will be no different. It will render some tasks obsolete. But it will also create demand for new skills: prompt engineers, AI ethicists, data trainers. We will need more people who can bridge technical and human worlds: people who understand how to ask the right questions, interpret the results and guide AI in ways that align with real business goals.
For business leaders, the biggest danger is not AI itself, but failing to adapt. The organisations that will fall behind are those that treat AI as a gimmick or, worse, an excuse to cut costs without rethinking how work should evolve.
Imagine a customer service centre that uses AI to automate routine queries. Great. But if leadership simply banks the savings and lets the human agents go, they miss the bigger prize. What if those agents could now focus on complex cases that build deeper customer loyalty? What if they could train the AI to handle ever more nuanced scenarios? What if they became customer experience designers rather than call handlers?
The same principle applies at the board level. AI can help draft reports, flag trends in data and surface insights leaders might otherwise miss. But decision-making still needs human context. An AI might tell you sales dropped 12% last quarter. Only you can ask the right follow-up questions: was it seasonal? A supply chain hiccup? A competitor’s new product launch? Tools can present facts, but meaning comes from people.
So, if AI won’t take our jobs but will transform them, what should we focus on?
First, cultivate curiosity. The best people I know are not the ones with the deepest technical knowledge, but the ones who keep asking questions. What can this tool do? What can’t it do? How could it help us work better?
Second, invest in adaptability. The pace of AI development means that what looks state-of-the-art today will feel quaint in five years. Teams that cling to old ways of working will struggle. Teams that embrace experimentation will thrive.
Third, double down on distinctly human strengths. Emotional intelligence, critical thinking, ethical reasoning: these are not easily codified into algorithms. They are also the traits that make organisations resilient in the face of constant change.
Finally, build cross-functional fluency. The most successful AI projects are rarely the sole domain of IT. They succeed when business leaders, technologists and end users collaborate to solve real problems, not just deploy shiny tools.
If you are an IT leader, or any leader for that matter, your job is not to have all the answers. Your job is to ask better questions, set the right guardrails and ensure your people feel empowered to use AI wisely.
Too many organisations rush headlong into AI adoption without clear principles. This creates risk, from biased algorithms to wasted spend on tools nobody uses. Good leadership means putting ethical frameworks in place, asking who benefits and who might be harmed, and being clear about where human oversight sits.
Equally, resist the temptation to hoard control at the top. The best AI use cases often come from the front lines, the sales rep who figures out how to use an AI assistant to cut admin time in half, or the operations manager who spots inefficiencies that a predictive model could help solve. Create space for experimentation. Celebrate small wins. Learn from failures.
One of the myths about AI is that its impact is inevitable, as if algorithms simply wash over us like a tide we can’t control. In reality, how AI changes work depends on the choices we make now.
Governments have a role to play, too. Regulation must keep pace with innovation. Education systems need to help people gain the digital literacy and critical thinking skills that AI-enhanced workplaces demand. But business has a responsibility as well. It is not enough to say, “We will reskill our people” while quietly hoping they’ll manage on their own. Investment in training, clear communication and honest dialogue are essential.
For all the anxiety AI stirs up, it also holds enormous promise. If we get this right, AI can help tackle complex problems faster, from improving patient outcomes in healthcare to driving sustainability in supply chains. It can give small businesses capabilities once reserved for big players. It can level the playing field, free up our time and make work more meaningful.
But it won’t do any of this on its own. It will do it through people, people who know how to use it wisely, ask better questions and put it to work in ways that reflect our values.
So the next time someone says AI will take your job, remind them of this: it is not the tool that shapes the future of work. It is how we choose to use it.
And if history is any guide, we humans have always been very good at turning new tools into new possibilities.
2025-07-16
AI tools are entering businesses faster than most teams can track, often through everyday platforms or individual experimentation. That’s exposing organisations to silent risks: leaked data, hallucinated outputs, and unaudited decisions. Without clear policy or oversight, what starts as convenience can quickly become a governance headache.
It’s everywhere. From automated assistants and smart analytics to synthetic voice, code and content, artificial intelligence is reshaping the way businesses operate. Or at least, it promises to.
But beneath the rush to adopt new tools lies a growing tension. Leaders are asking how to embrace AI’s potential without exposing their organisations to unexpected risks. That tension has moved from the IT team to the boardroom.
So is AI ready for business? And more importantly, is your business ready for AI?
Used well, AI can save time, improve decision-making and reduce operational friction.
Early adopters are seeing value in areas such as customer service (via intelligent chatbots), threat detection (through pattern-recognition models), and internal productivity (with large language models summarising reports or drafting content).
Some organisations are already integrating AI into more strategic domains, including financial forecasting, supply chain optimisation and legal document review.
AI is no longer a lab experiment or tech pilot. It’s showing up in Microsoft 365, Salesforce, HR platforms and customer-facing products.
With any new technology, benefits arrive faster than safeguards. The biggest concern? Visibility. Many companies are unsure how many AI tools are being used across their teams, and by whom.
Security researchers have highlighted examples where employees have pasted sensitive data into free-to-use tools like ChatGPT, with no clear policy on data handling or retention. In some cases, proprietary code or client documents were processed by public models without oversight.
And then there’s the quality problem. Generative AI systems can produce convincing but incorrect content, sometimes called “hallucinations”. If employees rely on that output without human checks, the consequences could range from embarrassing to legally risky.
Data leakage
Who controls what’s shared with AI tools? Are prompts stored? Can outputs be retrieved?
Compliance ambiguity
If an AI system makes a decision about a loan, a CV or a medical case, who’s accountable?
Shadow adoption
Staff may use AI tools without approval, bypassing procurement, infosec and legal review.
Third-party risks
AI features are now embedded in software from vendors who may not fully explain how models are trained or secured.
Workforce impact
While automation can free up time, it can also introduce anxiety, over-reliance or confusion about roles.
The point isn’t to scare teams off AI. It’s to put the right checks around it, and ask better questions before diving in.
What data is this AI trained on?
Can I audit its decisions?
What happens to the information I give it?
Could I explain this process to a regulator?
When AI is deployed with structure, it can amplify the best of your business. But without that structure, it can create blind spots that are hard to see and harder to fix.
Most organisations don’t need to roll out a full AI governance framework overnight. But they do need to know where AI is already in use, where it could add value, and where it might cause problems if left unmanaged. That means focusing on three areas: visibility, policy, and people.
AI adoption rarely starts with a strategy. It often starts with curiosity.
A marketing executive asks ChatGPT to draft a campaign. A developer uses GitHub Copilot to write boilerplate code. A finance analyst tries an AI plugin to summarise invoices.
These aren’t fringe examples. They’re happening across sectors, often with no formal sign-off.
Start with a simple discovery exercise:

- ask each team which AI tools they already use, and for what
- review SaaS sign-ups, browser extensions and expense data for known AI services
- note what data each tool can see, and where it goes
This doesn’t need to be a surveillance exercise. It’s about understanding exposure so that you can design controls that support good behaviour — not block productivity.
A five-page acceptable use policy hidden in a shared folder won’t cut it.
Instead, offer clear, accessible guidance that answers everyday questions:

- Which tools are approved, and for which tasks?
- What data must never be pasted into them?
- Who do I ask when I’m not sure?
Good policies don’t just list rules, they reduce uncertainty. Include examples, highlight grey areas, and make it clear where accountability sits.
It’s also important to coordinate with legal, data protection, and procurement teams. Make sure contract reviews cover AI features, vendor claims, model updates and data retention.
Once you know where AI is used, introduce basic safeguards:

- approved tools accessed through company accounts, not personal ones
- prompts screened for sensitive data before they leave the organisation (see the sketch below)
- human review of outputs before they reach customers or decisions
For high-risk use cases, such as tools that screen CVs, score loan applicants or summarise legal documents, establish a review process and document the checks.
AI risk is rarely about malicious intent. It’s more often about unintended consequences. Controls should make it easier to do the right thing.
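As one concrete safeguard, prompts can be screened before they leave the organisation. A minimal sketch, assuming AI traffic already flows through an internal gateway; the patterns are illustrative, and a real deployment would lean on a proper DLP engine:

```python
import re

# Illustrative patterns only; not exhaustive
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "UK NI number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.I),
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the kinds of sensitive data found in an outbound prompt."""
    return [label for label, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

hits = screen_prompt("Summarise this: contact jane@example.org, NI AB123456C")
if hits:
    print("Blocked: prompt contains", ", ".join(hits))
```

A control like this catches the honest mistake, which is the common case; it is not a substitute for policy or training.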
Finally, AI governance isn’t just about tech or compliance. It’s about trust.
Staff need to feel confident they can ask questions, raise concerns and explore new tools safely. That means:

- a named owner for AI questions, visible to everyone
- no blame for honest mistakes that are reported early
- sharing good examples as readily as warnings
The goal is not to shut down AI. It’s to help your people use it wisely, and to know where the boundaries lie.
Yes, there’s hype. But there’s also a genuine opportunity for well-governed, carefully scoped innovation.
AI isn’t just another tool. It’s a change in how decisions are made and knowledge is created.
The organisations that will benefit most aren’t the ones who adopt it first, they’re the ones who ask the right questions before they do.
2025-07-15
A 7.3 Tbps DDoS attack is a reminder that the basics of security are still our biggest blind spots. Here’s what IT leaders and non-technical teams need to learn from the world’s biggest DDoS attack.
In the age of zero trust, AI-driven threat detection and cyber insurance, it’s easy to think the era of crude, brute-force attacks is behind us. But last month’s record-breaking distributed denial-of-service (DDoS) attack is a sharp reminder that some of the oldest threats in our playbook are still among the most potent.
According to Cyber Security News, in May 2025, Cloudflare successfully mitigated an unprecedented DDoS attack peaking at 7.3 terabits per second (Tbps). To put that number in context: that’s more than five times the scale of the infamous 2018 GitHub attack, which held the record at 1.35 Tbps at the time.
These numbers are staggering, but they’re not the most important part of the story. The real lesson for CIOs, CISOs and business leaders alike is that basic infrastructure vulnerabilities, complacency and underinvestment in fundamental resilience still pose some of our biggest risks.
This is a wake-up call, not just for the people who wear a security badge, but for every executive who signs off budgets and roadmaps for how digital services are delivered.
Let’s break this down. Distributed denial-of-service attacks aren’t new. The concept is brutally simple: flood a target’s servers with so much traffic that they become overwhelmed and legitimate users can’t get through. It’s the digital equivalent of tens of thousands of people queuing outside your shop, blocking the doors for genuine customers.
What’s changed isn’t the tactic itself, but the scale and sophistication. Botnets today are built from armies of compromised IoT devices, misconfigured servers and unsecured endpoints around the world. Each individual device might have a trivial amount of bandwidth. But when thousands, or millions, of them are marshalled together, the result is a tidal wave that can knock over the world’s biggest brands.
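The arithmetic is mundane, which is exactly the point. A back-of-the-envelope sketch (device counts and per-device uplink are assumptions for illustration):

```python
def aggregate_tbps(devices: int, mbps_each: float) -> float:
    """Aggregate botnet bandwidth: devices x per-device uplink, in Tbps."""
    return devices * mbps_each / 1_000_000  # Mbps -> Tbps

print(f"{aggregate_tbps(100_000, 10):.1f} Tbps")  # 100k IoT devices at 10 Mbps: 1.0 Tbps
print(f"{aggregate_tbps(750_000, 10):.1f} Tbps")  # 750k such devices reach the 7.3 Tbps scale
```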
And this hasn’t been the only large-scale campaign in recent years. Microsoft’s own article, Unwrapping the 2023 holiday season: A deep dive into Azure’s DDoS attack landscape, noted a rise in activity, with its security infrastructure automatically mitigating a peak of 3,500 attacks per day.
The tools to launch this kind of chaos aren’t locked away on the dark web anymore. Many are off-the-shelf scripts, available to anyone with a browser, a crypto wallet and a grudge.
This is not a one-off. In its Digital Defense Report 2024, Microsoft said that in the first half of 2024 it mitigated 1.25 million DDoS attacks, a fourfold increase on the previous year.
What’s more concerning is the continuing trend towards larger, shorter, more targeted bursts. Attackers know that short, massive spikes are harder to trace and easier to launch from disposable infrastructure. The record-breaking 7.3 Tbps blast lasted just minutes, but that’s enough to take down services that aren’t properly defended.
For businesses, the consequences can be severe: downtime, lost revenue, damaged customer trust and, in some regulated sectors, significant penalties.
Too many leaders still treat DDoS as an IT-only concern. It’s not. The ripple effect of even a short outage can hit supply chains, customer service, brand reputation and share prices. When GitHub was hit in 2018, it survived because it had invested heavily in upstream mitigation and a robust incident response plan. Not every organisation is so prepared.
Ask yourself: if your main web portal, customer login or payments gateway went down for an hour on Black Friday, what would the cost be? And would that revenue ever come back? Most boards have rough figures for the cost of a data breach or ransomware demand. Very few track the true business cost of unplanned downtime in the middle of their busiest season.
If we know the threat so well, why does it keep working? The answers aren’t complicated, they’re painfully familiar.
1. Weak Basic Hygiene
Far too many businesses still run poorly configured servers that can be used as open relays for reflection attacks. IoT devices ship with default passwords that are never changed. Public-facing APIs expose unnecessary endpoints. The basics matter, and they’re too often overlooked.
2. No Layered Defence
Some organisations still believe a single vendor or firewall will save them. Real resilience comes from layers: upstream DDoS scrubbing, geo-fencing, intelligent traffic shaping and the ability to spin up extra capacity in the heat of an attack.
3. Complacency About Scale
Many organisations test for “typical” spikes, the kind that come during a big product launch or seasonal sale. But they rarely test what happens if they get hit with an attack an order of magnitude bigger than their largest peak. That’s exactly what Microsoft’s data shows: attackers are scaling up faster than defenders plan for.
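To make one of those layers concrete, here is a minimal sketch of traffic shaping at the application edge: a per-client token bucket in Python. It is deliberately simplified, and a true volumetric flood is absorbed upstream by scrubbing providers rather than by application code, but it shows the principle of letting legitimate request rates through while shedding abusive ones.

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: each client's bucket refills at a steady
    rate, and requests arriving faster than the refill rate are rejected."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec          # tokens added per second
        self.capacity = burst             # maximum burst size
        self.tokens = float(burst)
        self.last_refill = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill based on elapsed time, capped at bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per client IP: 5 requests per second with a burst of 10.
buckets: dict[str, TokenBucket] = {}

def handle_request(client_ip: str) -> int:
    bucket = buckets.setdefault(client_ip, TokenBucket(rate_per_sec=5, burst=10))
    return 200 if bucket.allow() else 429  # 429 Too Many Requests
```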
So, what should an IT leader, or any business leader, take away from this? Let’s look at what the most resilient organisations have in common.
1. They Know Their Attack Surface
They keep an up-to-date map of every public-facing asset: websites, APIs, partner integrations, third-party services. They understand where they’re exposed and where there are weak spots.
2. They Run Live Drills
It’s one thing to have a DDoS mitigation contract. It’s another to know how it works under stress. The best teams run war games: they simulate massive floods of traffic and practise switching over to backup servers or alternative routing in real time.
3. They Budget for Resilience
Too many businesses treat DDoS protection as a ‘nice to have’. The smart ones know it’s cheaper than recovering from hours of downtime. They budget for upstream mitigation through providers like Cloudflare, Akamai or Microsoft’s own Azure DDoS Protection, and they test it regularly.
4. They Talk to the Business
This is key. Security is not an IT silo. The best IT and security leaders I know talk in terms the board understands: risk to revenue, customer trust, compliance and reputation. When security is a business conversation, it gets funded properly.
There’s another layer here that many ignore: supply chain risk. The biggest DDoS botnets don’t grow in isolation. They thrive because countless companies leave digital doors wide open.
A misconfigured server in one small business can become part of the botnet that brings down your global website tomorrow. And you might not even know it’s your supplier until it’s too late.
This is why supply chain security is becoming a board-level issue. Regulators are paying attention, too. In the EU, the NIS2 Directive expands obligations for supply chain security and incident reporting. Similar moves are afoot in the UK and US.
The conversation about DDoS shouldn’t stop at mitigation. The strongest organisations look at how quickly they can recover. That means designing for redundancy, distributing workloads across multiple providers and building graceful degradation — so critical services keep running even if parts of the system go dark.
Think of the difference between a single web server running your main customer portal versus a global content delivery network (CDN) with built-in failover. When GitHub survived its record 2018 attack, it did so because it used Akamai’s Prolexic service, a vast distributed scrubbing network that absorbed malicious traffic upstream before it hit GitHub’s servers.
That model still works. In fact, it’s more relevant than ever as DDoS tactics evolve.
If you’re reading this and you’re not the person configuring firewalls day to day, you still have a crucial role to play. Good security starts with good questions.
Ask your IT and security teams: Do we have an up-to-date map of every public-facing asset? When did we last test our DDoS mitigation under realistic load? How quickly can we invoke upstream scrubbing, and who makes the failover call?
You don’t need to know how to write the code. You do need to know whether the basics are in place.
According to recent research, the average cost of downtime has climbed to around $9,000 per minute for large organisations. For higher-risk sectors such as finance and healthcare, downtime can exceed $5 million an hour in certain scenarios, and that’s before any fines or penalties.
Ultimately, DDoS attacks are not about stealing data, they’re about trust. If customers can’t access your service, they don’t care whether it was a hostile state actor, a bored teenager or a professional extortion racket. They care that you weren’t ready.
And they may not come back.
The 7.3 Tbps attack won’t be the last record breaker. If anything, it’s a milestone we’ll look back on as just the start of a new arms race in volumetric attacks. As bandwidth grows, so does the scale of potential disruption.
But that doesn’t mean we’re powerless. The fundamentals remain the same: know your environment, plan for the worst, test regularly and embed resilience as a business priority, not an afterthought.
Security stories can feel overwhelming. But remember: it’s rarely the shiny new threat that gets us, it’s our neglect of the basics.
A record-breaking DDoS attack might grab the headlines. But the real question is whether it changes our habits. For leaders, now is the moment to make sure that when the next wave hits, and it will, you’re ready, resilient and able to keep the lights on when your customers need you most.
2025-07-14
54% of employees admit to reusing work passwords, exposing organisations to preventable credential attacks. Here’s what IT and business leaders should be doing instead.
Despite years of cyber awareness campaigns, new data from Bitwarden’s World Password Day Survey 2025 shows that 54% of employees still reuse passwords across multiple work systems.
It’s a number that should prompt pause, especially at a time when credential-based attacks remain one of the most common breach vectors across cloud, SaaS and hybrid infrastructure.
The logic behind reuse is often innocent: convenience, habit, or a lack of clear guidance. But to an attacker, it’s an open invitation.
Stolen passwords from third-party breaches are readily available online, and cybercriminals use automated tools to plug them into email platforms, VPNs, collaboration tools and admin consoles. It’s called credential stuffing, and it doesn’t require any hacking skill at all.
“Reusing a password is like re-using the same key for every lock and having that key be something that you give out to everyone you meet.”
Joe Siegrist, CEO of LastPass (Inc. Magazine)
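For defenders, the signature of credential stuffing is distinctive: many different usernames failing from the same source in a short window, the opposite of one user mistyping their own password. A minimal detection heuristic, sketched in Python with illustrative thresholds:

```python
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 300        # five-minute sliding window (illustrative)
DISTINCT_USER_LIMIT = 20    # illustrative threshold

# Per-IP history of (timestamp, username) pairs for failed logins.
failed_logins: dict[str, deque] = defaultdict(deque)

def looks_like_stuffing(source_ip: str, username: str) -> bool:
    """Record a failed login and return True if this source IP is now
    failing against an unusually wide spread of accounts."""
    now = time.time()
    history = failed_logins[source_ip]
    history.append((now, username))
    # Drop events that have fallen out of the window.
    while history and now - history[0][0] > WINDOW_SECONDS:
        history.popleft()
    distinct_users = {user for _, user in history}
    return len(distinct_users) > DISTINCT_USER_LIMIT
```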
Even in large, well-resourced organisations, password reuse persists. In many firms, employees are still forced through quarterly password resets, with no tooling to detect reuse across systems.
The result? Shortcuts.
Good password hygiene is a shared responsibility, and it begins with smart defaults, not strict rules.
Here are four moves that every CIO, CTO or COO can prioritise:
1. Deploy a Password Manager
Make a secure password manager available to everyone. Modern enterprise tools provide vaults, autofill, alerts and admin oversight, making unique credentials easier to manage, not harder.
2. Make Strong MFA the Default
Multi-factor authentication remains one of the strongest defences against stolen credentials. Use app-based or hardware methods by default, and phase out SMS or email-based MFA where possible (see the sketch after this list).
3. Kill Legacy Authentication
Disable POP3, IMAP and basic authentication. Move to federated login or single sign-on where possible, and ensure OAuth is the default for new SaaS tools.
4. Reframe the Awareness Message
It’s not about entropy scores or symbol counts. Focus messaging on impact: what can happen when one password unlocks too much. Link stories to real breaches, phishing campaigns and what they cost the business.
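As an illustration of move two, app-based MFA usually means time-based one-time passwords (TOTP). A minimal sketch using the open-source pyotp library; the account name and issuer are placeholders.

```python
# pip install pyotp
import pyotp

# Enrolment: generate a per-user secret and a provisioning URI that
# authenticator apps can import via a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
uri = totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp")
print("Scan into your authenticator app:", uri)

# Verification at login: check the submitted six-digit code against the
# current time step, allowing one step either side for clock drift.
submitted_code = input("Enter the code from your app: ")
if totp.verify(submitted_code, valid_window=1):
    print("MFA check passed")
else:
    print("MFA check failed")
```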
Organisations are starting to see results from shifting their posture away from password punishment.
“We moved from 90-day resets and complexity rules to vaults, MFA, and supportive guidance,” said one FTSE250 cyber lead.
“Helpdesk resets dropped. Credential stuffing alerts went down. Most importantly, our staff stopped gaming the system.”
| Metric | Why it matters |
| --- | --- |
| Vault adoption rate | Are employees actually using the password manager you provide? |
| Reuse alerts | Does your vault or IDP detect password overlap across services? |
| MFA coverage | What percentage of user accounts — especially admins — are protected by strong MFA? |
| Credential-stuffing attempts | Monitor what your IDP, firewall or SSO tool is blocking daily. |
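If your identity provider can export account data, the MFA coverage figure takes minutes to compute. A sketch assuming a hypothetical CSV export with username, is_admin and mfa_method columns; adjust the field names to whatever your IDP actually produces.

```python
import csv

def mfa_coverage(path: str) -> None:
    """Compute strong-MFA coverage overall and for admin accounts,
    counting SMS (and blank) methods as not protected."""
    total = admins = total_strong = admin_strong = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            strong = row["mfa_method"].lower() not in ("", "none", "sms")
            total += 1
            total_strong += strong
            if row["is_admin"].lower() == "true":
                admins += 1
                admin_strong += strong
    print(f"Strong MFA coverage: {total_strong / total:.0%} of {total} accounts")
    if admins:
        print(f"Admin coverage: {admin_strong / admins:.0%} of {admins} admin accounts")

mfa_coverage("idp_export.csv")  # hypothetical export file
```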
Passwords may not be the most exciting item on a CIO or COO’s to-do list, but they remain a high-value target for attackers because they’re easy to exploit and often poorly managed.
While no single tool will eliminate credential-based risk, a shift to vault + MFA + clarity can transform your security posture in just a few months.
In short? One reused password shouldn’t bring down an entire enterprise.
📊 Source: Bitwarden World Password Day Survey 2025 (May 2025)
🗝️ Quotation: Joe Siegrist, CEO of LastPass via Inc. Magazine
📝 Written for thecio.uk – July 2025
2025-07-13
Researchers showed it took 30 minutes to pivot from a guessed login to applicant names, email addresses and full chatbot transcripts. The episode exposes how a single forgotten test account can turn into a data-protection calamity, and why default passwords have no place in modern systems.
Image credit: Created for TheCIO.uk by ChatGPT
In one of the more frustrating examples of preventable exposure, McDonald’s AI recruitment platform, McHire, was found to be exposing millions of job application records through a test admin account using the password 123456.
Researchers Ian Carroll and Sam Curry spotted the flaw at the end of June while looking into McHire’s backend. The system, developed and run by Paradox.ai, had a publicly accessible login panel for franchise HR users. The test credentials (username: `123456`, password: `123456`) opened the door.
Inside, they found an admin interface linked to a long-defunct test "restaurant" environment. From there, a basic API call using incrementing `lead_id` values allowed them to pull the personal data and full application transcripts of other users.
The total scope? Over 64 million job applications, covering years of applicant conversations with McHire’s chatbot, “Olivia”.
Paradox has since confirmed the exposed data included applicant names, email addresses and full chatbot transcripts.
No CVs or national insurance numbers were leaked, but that doesn’t diminish the risk. As Carroll put it: “This data is more than enough to socially engineer job seekers or run targeted scams that look completely legitimate.”
Paradox disabled the test account the same day they were notified (30 June), and the IDOR flaw was patched immediately. No malicious access is currently suspected beyond the researchers’ activity.
But the root issue, a default credential left active in a production-connected environment, is far more telling.
It’s easy to scoff at a password like `123456`, but according to the NCSC it’s still one of the top ten most common in real-world breach datasets. And while most organisations wouldn’t dream of using it for core systems, test environments and sandbox tenants often slip through the net.
In this case, the environment was created in 2019 and seemingly forgotten. But its credentials were still valid, had admin-level privileges, and had direct API access to real-world user data.
The flaw wasn’t just the weak password; it was the absence of basic hygiene. The account was never decommissioned, its admin-level privileges were never reviewed, and the API performed no per-record authorisation checks.
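That missing per-record check is the classic insecure direct object reference (IDOR) pattern. A minimal sketch of the fix, assuming a Flask-style API; the data store, header-based caller identification and field names are all hypothetical stand-ins for real session middleware.

```python
# pip install flask
from flask import Flask, abort, g, request

app = Flask(__name__)

# Hypothetical store: in the flawed design, any authenticated session could
# fetch any lead_id. The fix is an explicit ownership check on every record.
LEADS = {
    101: {"owner_org": "franchise-42", "applicant": "J. Smith", "transcript": "..."},
    102: {"owner_org": "franchise-07", "applicant": "A. Jones", "transcript": "..."},
}

@app.before_request
def identify_caller():
    # Stand-in for real token/session authentication middleware.
    g.current_org = request.headers.get("X-Org-Id")

@app.get("/api/leads/<int:lead_id>")
def get_lead(lead_id: int):
    lead = LEADS.get(lead_id)
    if lead is None:
        abort(404)
    # Authorisation, not just authentication: the record must belong to the
    # caller's organisation, otherwise incrementing lead_id walks the database.
    if lead["owner_org"] != g.current_org:
        abort(403)
    return lead
```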
It’s not just the volume of data that’s worrying, it’s the context.
Applicants trusted they were speaking to a bot inside a controlled process. That means transcripts contain sensitive disclosures: availability, previous roles, even vulnerabilities such as health conditions or relocation challenges.
An attacker wouldn’t need to scrape all 64 million. A few hundred high-fidelity records would be enough to build convincing phishing kits, employment scams, or identity-theft campaigns targeting those actively seeking work.
The average jobseeker is more likely to respond to an email that seems to come from McDonald’s recruitment. This breach gave attackers everything they’d need to impersonate that channel convincingly.
This isn’t a story about McDonald’s being a soft target. It’s a story about the risks that linger in the corners of any scaled digital estate, especially in supplier-hosted platforms.
Make test accounts time-limited and auto-expiring. Tag them in your IAM platform and treat them as high risk until removed.
Enforce deny lists and block credential patterns known from breach corpuses; a sketch of one such check follows these lessons. If your password policy allows `123456`, the policy is broken.
Just because it's SaaS doesn’t mean it's safe. If your brand is on the front end, you own the risk, and the reputational blowback.
Test environments shouldn’t mean test-grade security. Same authentication standards, same visibility, same response playbooks.
The best outcome here was that ethical researchers found it first. Your org should know exactly how to respond, investigate and remediate, fast.
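On the deny-list point above, one widely used approach is the Have I Been Pwned range API, which lets you test a candidate password against real-world breach corpuses without the password itself ever leaving your network. A minimal sketch in Python:

```python
# pip install requests
import hashlib
import requests

def is_breached_password(password: str) -> bool:
    """k-anonymity check against Have I Been Pwned: only the first five
    characters of the SHA-1 hash are sent; matching is done locally."""
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=5)
    resp.raise_for_status()
    # Each response line is "SUFFIX:COUNT"; any match means the password
    # appears in breach data and should be rejected when it is set.
    return any(line.split(":")[0] == suffix for line in resp.text.splitlines())

print(is_breached_password("123456"))  # True: the McHire password fails instantly
```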
Paradox.ai has now launched a public bug bounty programme. McDonald’s says it's reviewing controls and supplier access. No regulators have announced formal investigations (yet), but in privacy terms this is a breach in all but name, and would almost certainly be reportable under UK GDPR or California’s CCPA if replicated in those markets.
If there’s one positive here, it’s visibility. Few incidents spell out the consequences of default passwords and abandoned access quite so clearly.
As Carroll summed it up:
“It was a literal 30-minute journey from the world’s most obvious login to 64 million records. No tricks. Just a forgotten door, left open.”
2025-07-11
True cyber resilience goes beyond technical controls or annual awareness campaigns. It’s about building a culture where everyone feels a personal stake in security. Here’s why ownership matters, and how IT leaders can help every team member shift from “they” to “we”.
If you read my previous piece, Cyber Starts with Culture: Why Technical Controls Aren’t Enough, you’ll know I believe technology alone can’t solve cyber risk. Controls matter, but it’s people and their behaviours that make the biggest difference.
Cyber incidents rarely come from sophisticated nation-state attacks. More often, they start with everyday things: a click on a dodgy link, a process shortcut, or too much trust given to a supplier. When you look closely, the real weakness isn’t technology—it’s people believing cyber is someone else’s problem.
In many organisations, cyber security is still seen as the IT department’s job. You’ll often hear, “They’ll deal with it,” or “That’s not my area.” But the reality is, this thinking leaves gaps everywhere—gaps that attackers are only too happy to exploit.
The best organisations break out of this mindset. They encourage every employee, from apprentice to board member, to see security as something they own. The cultural shift from “they” to “we” is a subtle one, but it’s at the heart of genuine resilience. It’s not just about protecting the company; it’s about protecting colleagues, clients, and your own reputation.
In organisations where a cyber-first culture is thriving, you notice it straight away: people report suspicious emails without being chased, near-misses are discussed openly, and security comes up in everyday conversations, not just audits.
It’s not about being perfect. It’s about being open, honest, and willing to improve together.
Changing culture isn’t easy. Most people want to do the right thing, but a few classic obstacles get in the way: fear of blame, unclear ownership, and training that feels like a tick-box exercise.
Recognising these issues is half the battle. Overcoming them is about making ownership easy, safe, and rewarding.
Here’s what I’ve seen work in real organisations:
At one organisation I worked with, security was seen as someone else’s job until a close call with an email scam. Instead of locking everything down and blaming the user, the company used the incident as a case study in a town hall session. Staff who reported the scam were praised, lessons were shared openly, and the leadership team took questions directly. The result? A noticeable jump in both incident reporting and collaboration between teams—and a sense that everyone had a role to play.
Ownership only works if leaders are ready to share it. If the board treats cyber as a tick-box or a budget line, the rest of the organisation will do the same. But when leaders regularly ask about risk, join simulations, and praise those who speak up, ownership starts to feel normal.
The NCSC and FCA both make it clear: cyber resilience isn’t just a technical matter; it’s a leadership responsibility. It has to run right through the organisation, from top to bottom.
You can’t manage what you can’t measure. Look at engagement in training sessions, the number and quality of reported near-misses, and the openness of conversations around risk in team meetings. Use staff feedback to spot blind spots and improve your approach.
Regular pulse surveys, open forums, and post-incident reviews are all great ways to keep your finger on the pulse—and to show staff that their input genuinely shapes future decisions.
When you get culture right, cyber stops being just a risk—it becomes a business enabler. It can help win client trust, support digital transformation, and demonstrate to regulators and partners that you take your responsibilities seriously.
A culture of ownership also unlocks faster, more flexible ways of working. Teams who feel trusted and involved are more likely to speak up, collaborate, and embrace new tech securely.
Moving from awareness to ownership isn’t about rolling out another tool or policy. It’s about creating an environment where everyone feels trusted, responsible, and safe to speak up.
If you want genuine cyber resilience, invest in your culture. Make ownership everyone’s business, and you’ll find your strongest defence is your own team.
For more on this theme, see: Cyber Starts with Culture: Why Technical Controls Aren’t Enough.
2025-07-07
Ingram Micro, the world’s largest IT distributor, suffered a major ransomware attack in July 2025, forcing global platform outages and revealing systemic supply chain vulnerabilities. The SafePay group has claimed responsibility for the incident, which has sent shockwaves through the IT channel and prompted urgent reviews of supplier resilience across the sector.
Image credit: Created for TheCIO.uk by ChatGPT
On 6 July, Ingram Micro publicly confirmed a ransomware attack had compromised parts of its internal systems. The company responded by isolating affected environments and engaging external cybersecurity experts to assist with the investigation. Law enforcement was also brought in as Ingram Micro began notifying its extensive global partner network.
The SafePay ransomware group quickly claimed responsibility for the attack. Industry sources indicate that the group exploited a vulnerability in Ingram Micro’s GlobalProtect VPN infrastructure, using compromised credentials to gain access. The method fits a growing pattern of attackers targeting remote access platforms, particularly where security controls such as multi-factor authentication are not uniformly enforced or where critical patches are outstanding.
As a result of the attack, Ingram Micro was forced to take offline several key platforms, including its Xvantage AI-powered distribution portal and the Impulse licence provisioning system. These outages immediately affected IT resellers, managed service providers, and enterprise customers who depend on Ingram Micro for just-in-time delivery and centralised procurement.
Customers reported significant disruption, including difficulties placing and tracking orders, and many expressed frustration at the lack of initial communication from the company. The timing of the attack, coinciding with the end of the financial quarter, amplified concerns over delayed shipments, billing backlogs, and the knock-on effects on client projects.
Financial analysts estimate Ingram Micro could lose up to $136 million in daily revenue while core systems remain unavailable. The disruption also prompted some enterprise clients to explore alternative suppliers, concerned about the risk of future single points of failure.
The impact of the ransomware attack quickly rippled through the IT supply chain. Ingram Micro is not just a single supplier; for many in the technology sector, it represents the backbone of procurement and distribution. When an organisation of this scale is compromised, the aftershocks extend far beyond its own customer base, affecting thousands of businesses globally.
Project deadlines, service level agreements, and even regulatory compliance were suddenly under threat as customers struggled to access products and services. The event has reignited debate about the risks of supplier concentration, with many organisations now revisiting their procurement strategies and continuity plans. Questions around business continuity, contract language, and supplier transparency have moved to the top of the boardroom agenda.
In the wake of the incident, it is clear that effective supply chain security now requires an understanding of not only one’s own cyber posture, but also that of critical partners. Business leaders are considering whether their existing contracts provide sufficient safeguards around incident notification, resilience testing, and exit routes should a major supplier face operational paralysis.
The attack on Ingram Micro is the latest in a series of high-profile ransomware incidents targeting supply chain lynchpins. It serves as a reminder that even global leaders in IT distribution can be caught out by sophisticated adversaries leveraging increasingly advanced techniques. The event has sparked renewed scrutiny of remote access infrastructure, with security teams across the sector reviewing the use of VPNs, patch management policies, and authentication methods.
At the same time, the response to the incident has underscored the need for clear, timely communication with customers and partners during a crisis. The early hours of uncertainty only heightened anxiety among clients, reinforcing the importance of transparency in maintaining trust.
For IT leaders and aspiring CIOs, the Ingram Micro case is a sobering illustration of modern cyber risk. It highlights the interconnectedness of today’s digital supply chains and the need for operational resilience—not just within one’s own walls, but throughout the partner ecosystem.
From a technical expert’s perspective, the Ingram Micro attack is a textbook example of how quickly a security lapse can spiral into large-scale disruption. The breach, reportedly exploiting a remote access vulnerability, is a reminder that even mature enterprises remain vulnerable to overlooked gaps and evolving threats.
This incident shows that patch management and robust authentication protocols are not simply regulatory boxes to be ticked, but fundamental defences that must be woven into daily operational practice. The sophistication of modern ransomware groups also means IT teams need to adopt an “assume breach” mindset—actively hunting for threats, not just passively defending the perimeter.
Supply chain risk is now a board-level conversation, and technical leaders have a seat at the table. This means building relationships with key suppliers, setting clear expectations for transparency and incident reporting, and ensuring resilience is a shared objective. Regular supplier audits, simulation exercises, and clear escalation paths are no longer “nice to have” but essential business practices.
Finally, this episode is a lesson in communication. The speed and clarity with which an organisation responds—both internally and with customers—can make a material difference to how the crisis is perceived and managed. For IT leaders, developing both technical and communication skills is vital as the boundaries between IT and business resilience continue to blur.
#CyberSecurity #Ransomware #SupplyChain #ITOperations #IncidentResponse
2025-07-07
Apprenticeships offer a powerful, underused route into ICT and cyber roles by focusing on real-world capability over credentials. Ben Meyer argues that tech leaders must invest in potential to build diverse, resilient teams equipped for the challenges ahead.
The pace of innovation in tech is relentless. Cloud infrastructure, cyber threats, AI and digital platforms are all evolving in real time. To keep up, we often look to emerging tools, frameworks and providers.
But what if the most important innovation opportunity isn’t a piece of software, it’s how we find and develop the people behind it?
Our industry has long operated on a default setting: academic qualifications plus experience equals capability. But that logic is flawed. Talent doesn’t follow a formula and some of the most capable technologists I’ve worked with got their start through an apprenticeship, a career change or a non-traditional route.
We’ve created a tech hiring culture that’s simultaneously competitive and constrained. We demand 3–5 years of experience for “entry-level” jobs. We filter CVs based on keywords and degree classifications. And then we’re surprised when we struggle to fill roles or build diverse teams.
Apprenticeships challenge this model. They allow people to develop real-world skills while earning a wage, gaining experience and building confidence. But more importantly, they represent a broader philosophy: that potential matters as much as polish.
In my work as a BCS assessor, I've met candidates from all walks of life: ex-retail staff, school leavers, parents returning to work, career switchers. Many arrive with imposter syndrome, unsure if they “deserve” a place in tech. Yet time and time again, they prove they do. Not because of where they’ve been, but because of where they’re going.
One candidate I assessed had worked in logistics before joining a digital support apprenticeship programme. No degree, no prior experience in IT. But they came prepared, having documented their projects and learned to script solutions for onboarding new staff.
Another candidate, who had previously worked in hospitality, demonstrated clear cybersecurity thinking; not because they’d studied it at university, but because they’d self-taught, practised risk modelling and brought their understanding of people and process into their final assessment.
These are not exceptions. They are proof that capability is everywhere, and that traditional hiring filters are often too blunt to spot it.
Let’s be pragmatic for a moment. Beyond the moral and social case, there is a clear business case for apprenticeships.
For organisations dealing with persistent cyber threats, complex infrastructure demands, and the pressure to modernise legacy systems, investing in hands-on ICT and cyber talent is not just beneficial, it’s essential.
As senior tech leaders, we’re in a unique position to open doors, or close them. The hiring policies we support, the progression paths we build, and the narratives we tell about success all shape our culture.
Here’s what I believe we should be doing: sponsoring apprenticeship places, rewriting “entry-level” job specs that demand years of experience, and building progression paths that reward capability over credentials.
We can’t say we value innovation if we only hire people with the same background and experience as ourselves.
The future of tech should reflect the full diversity of our society; not just in ethnicity, gender or background, but in thought, experience and perspective.
If we want to solve complex problems, we need problem-solvers who see the world differently. Apprenticeships are one of the best ways to achieve that, and the impact extends far beyond the workplace.
They create career mobility. They increase confidence. They provide a sense of purpose and belonging. And they show that your worth in this industry is defined not by where you started, but by how far you’re willing to go.
The next brilliant engineer, security lead or systems architect might be out there today working in a call centre, waiting tables, or managing stockrooms. With the right support, they could be leading technical innovation tomorrow.
Let’s stop gatekeeping talent. Let’s invest in potential, and build a better future for tech.
2025-07-06
The new GOV.UK app brings public services together in a single, user-friendly platform. With strong cyber security, accessibility features, and real efficiency gains, it sets a new benchmark for digital government. Notably, it’s among the first UK public sector apps to integrate AI-powered support—demonstrating that artificial intelligence is more than just the latest buzzword.
Image credit: Department for Science, Innovation and Technology
Cyber security is central to the GOV.UK app’s design. The One Login system provides robust authentication, including facial recognition and biometrics, instead of traditional passwords. All data is encrypted in transit and at rest, and the app undergoes regular security testing with support from the National Cyber Security Centre. A clear incident response plan is in place, with prompt user notifications if issues arise. The planned digital wallet feature will be subject to even stricter reviews.
Accessibility is a core principle, not an afterthought. The app is fully compatible with screen readers, features high-contrast themes, and lets users adjust font sizes for readability. Clear, jargon-free language ensures everyone can understand and use the app. Keyboard navigation is built in, and support for Welsh and other languages is on the way. User feedback is encouraged and will drive ongoing improvements.
The GOV.UK app serves as a one-stop shop for everything from tax and benefits to local council services. It reduces the need to navigate multiple sites or complete paper forms. Personalised notifications keep users informed of key deadlines like MOT or passport renewal, and the upcoming digital wallet will reduce paperwork even further. All this streamlines government processes and is expected to bring substantial savings.
Artificial intelligence is everywhere—in IT, in non-IT offices, and now in public services. The GOV.UK app is embracing AI in a practical way, beyond the hype. A generative AI chatbot, arriving later in 2025, will help guide users through complex tasks, answer frequently asked questions, and reduce the burden on support centres. Unlike earlier chatbots, this version aims to be genuinely helpful and conversational.
Behind the scenes, integration of AI and IT is significant. Bringing together systems from central and local government, supporting secure logins, managing notifications, and enabling features like the digital wallet all require strong IT architecture. The app uses scalable cloud infrastructure and is subject to ongoing audits for resilience and compliance.
While digital is the way forward, it’s not for everyone. The government is maintaining traditional contact channels and supporting digital skills initiatives. Privacy remains a top concern, with full compliance with UK GDPR and the Data Protection Act, plus clear user controls over personal information. The app is currently in public beta, with real user feedback shaping its evolution.
The GOV.UK app is a significant step forward for digital public services in the UK. By combining robust security, accessibility, efficiency, and AI integration, it sets a new standard—showing that digital government can be both innovative and inclusive.
#DigitalTransformation #CyberSecurity #GOVUK #PublicSector #Accessibility #AI
2025-07-01
Technical controls are essential, but culture is what actually makes them effective. Drawing on NCSC guidance and real-world experience, here’s why cyber resilience starts with people and attitude, not just process or technology.
Image credit: Created for TheCIO.uk by ChatGPT
Technical controls are essential, but culture is what actually makes them effective. You can invest in all the firewalls, monitoring tools and policies you like—if your people aren’t on board, you’re still vulnerable.
If you ask any security leader for their biggest risk, most will quietly admit: it’s not the latest exploit, it’s everyday behaviours and attitudes. One careless click can undo years of investment.
I’ve seen it myself—organisations with every bit of security kit money can buy, but still one well-intentioned member of staff clicking a dodgy link brings everything undone. The truth is: people are at the heart of every breach, every response, and every successful recovery.
Culture isn’t an add-on to your controls. It’s what gives them value in the first place.
The National Cyber Security Centre (NCSC) is blunt about this. Their guidance on the human factor says most successful attacks are down to ordinary people making ordinary mistakes, not some “Hollywood” hack.
The NCSC’s frameworks—like Cyber Essentials—are as much about bringing people with you as they are about ticking technical boxes. Leadership visibility, openness, and a willingness to learn are non-negotiable. Their message is universal: build a culture where people feel able to challenge, question, and admit mistakes without fear.
Let’s be honest: policy is easy, behaviour is hard. We’ve all worked somewhere with a ten-page password policy everyone finds ways around. You don’t win hearts and minds with laminated posters or e-learning modules done with the sound off.
Real change starts when people want to do the right thing—not just because they’re told to, but because they understand the why. When colleagues know they won’t get their head bitten off for reporting a slip-up, and sharing a near-miss actually leads to positive change, you’re making progress.
It doesn’t matter how many times you say “cyber is everyone’s job”—if leaders treat it as a tick-box or an afterthought, staff will do the same. Leaders have to show up.
Make cyber risk a standard agenda item, not just for IT, but for the whole organisation. Celebrate when someone reports a suspicious email or spots a permissions issue before it becomes a problem.
The NCSC is clear: leaders must be visible, approachable, and genuinely engaged in the details—not just the headlines.
Here’s what I’ve seen work—and what I try to do myself:
Make training relevant and regular
Not the same tired PowerPoint every year. Use real stories, examples, and open Q&A.
Reward the right behaviours
Celebrate “good catches”. Positive reinforcement always beats shaming mistakes.
Normalise talking about risk
It’s not negative to ask, “What’s the worst that could happen?”—it’s good risk management.
Involve every department
It’s not just IT’s problem. Every team has their own risks and perspectives.
Share near-misses and lessons learned
Encourage people to talk about what almost went wrong, so everyone can learn.
Review incentives and targets
Are you rewarding speed at the expense of safety? Be honest about what you’re actually encouraging.
Measure culture, not just controls
Look at engagement in training, near-miss reports, and honest feedback. If you aren’t measuring it, you aren’t managing it.
A while ago, I worked with an organisation that rolled out new security tools every year. But it wasn’t until they introduced “story sessions”—safe spaces where anyone could share a near-miss or lesson learned without fear of blame—that things genuinely changed. Incidents dropped, engagement shot up. It was the culture of openness, not technology, that made the difference.
If you do just one thing after reading this, make it a conversation: ask your team where they feel unsure or unsupported around security. You’ll learn more in ten minutes than from any audit.
Culture isn’t a project with an end date—it’s something you have to live and lead, every day. You can spend millions on technology, but your strongest defence is always a team that cares and feels empowered to do the right thing.
The NCSC get it. It’s time we all did.
For more on this theme, see: From Awareness to Ownership: Building a Cyber-First Culture.
2025-06-30
"Exploring the unique cybersecurity challenges facing financial firms, and why the sector remains a prime target for cybercriminals."
Image credit: Freepik
Cybersecurity is rarely out of the headlines these days. For financial companies, however, it’s not just a trending topic – it’s an ever-present concern that keeps leaders awake at night.
Financial institutions sit at the intersection of money, data, and trust. They hold vast reserves of sensitive information – customer details, transaction data, and payment records. Cybercriminals know this, which is why banks, investment firms, and insurers are under constant attack.
It’s not just about money. A successful attack can also damage a company’s reputation, shake customer confidence, and in some cases, threaten the stability of the entire financial system.
Attackers are relentless, constantly evolving their tactics. Today’s threats include ransomware, phishing, credential stuffing, DDoS attacks and supply chain compromise.
Unlike other industries, financial services have a duty to maintain public trust at all costs. Any sign of weakness is quickly seized upon by competitors, the media, and customers alike. The sheer volume of transactions, the complexity of legacy systems, and the pace of regulatory change make the job even harder.
While the threat landscape is daunting, there are reasons for optimism: regulators are raising the bar, information sharing across the industry is improving, and cyber is finally a board-level conversation.
Financial firms lose sleep over cyber attacks because the stakes are uniquely high – both for their own business and for the stability of the wider economy. By building a culture of resilience, embracing new technologies, and working together, the industry can stay one step ahead of those who seek to do harm.
2025-05-29
Adidas has confirmed a cyber attack resulting in the theft of customer contact information, specifically targeting individuals who had contacted its help desk. While payment details and passwords were not compromised, emails, phone numbers, and other contact details have potentially been exposed. This is the latest in a run of high-profile retail breaches.
Image credit: Created for TheCIO.uk by ChatGPT
Adidas’ disclosure comes only weeks after similar incidents at Marks & Spencer and Co-op. The M&S cyber attack alone is expected to cost around £300m—about a third of the company’s annual profit [Financial Times]. Retailers are facing a wave of attacks from sophisticated, well-organised threat actors.
UK police are investigating the Scattered Spider group for some of these attacks, though there is no evidence linking them to Adidas [BBC News]. Adidas has also faced breaches in other markets this year, underscoring the scale of the challenge.
It’s a mistake to assume only the loss of payment data matters. The exposure of contact details—email addresses, phone numbers, and more—creates real and ongoing risks: targeted phishing, smishing, and impersonation scams that look entirely legitimate.
This breach was enabled by an attack on a third-party customer service provider—a common and often underestimated threat. The UK National Cyber Security Centre consistently highlights the importance of supplier risk management, with many recent breaches beginning at partners or vendors.
UK GDPR requires organisations to notify regulators and those affected if there’s a risk to their rights or freedoms. Adidas is communicating with authorities and customers, but as consumer group Which? points out, post-breach support and guidance are just as crucial as technical fixes.
Retail’s digital expansion and dependence on third parties ensure it will remain a prime target for attackers. Cyber security must be embedded in organisational culture and treated as a board-level concern.
#CyberSecurity #Retail #Adidas #InfoSec #GDPR #DataBreach #RiskManagement
2025-05-03
"The recent M&S cyber incident is a stark reminder that no business is immune—and every organisation should review its security posture."
Image credit: Dorset Live
News broke today that Marks & Spencer has been hit by a significant cyber attack, sending ripples through the UK retail sector and beyond. While details are still emerging, early reports suggest that customer data and core business systems may have been compromised, with M&S racing to contain the fallout and reassure its millions of customers.
M&S isn’t just any retailer; it’s a British institution with a reputation built on trust and reliability. The scale of this incident, and the immediate disruption to services, is a stark reminder that even household names are not immune to the ever-evolving threats facing every organisation today.
While the investigation is ongoing, initial information points to a sophisticated cyber attack targeting both customer-facing and internal systems. This kind of breach highlights just how interconnected and complex modern IT estates have become, and why a “set and forget” approach to cyber security no longer works.
Cyber attacks can happen to anyone.
Size, reputation or investment in technology are not guarantees of safety.
Customer trust is fragile.
A single incident can undo years of careful brand building and erode customer confidence overnight.
Preparation is everything.
Robust incident response plans, tested backups and regular employee training are now non-negotiable.
M&S is working closely with law enforcement and cyber experts to investigate the breach and shore up defences. The wider message to UK businesses is clear: now is the time to double-check your own cyber resilience. Don’t wait for a crisis to put your plans to the test.
We’ll keep you updated as more details become available. In the meantime, is your organisation prepared for a similar incident?
2025-01-15
"For small and medium-sized enterprises, the right MSP can transform IT from a headache into a strategic advantage."
For small and medium-sized enterprises (SMEs), IT can sometimes feel like a constant uphill battle. There’s never quite enough time, resources are tight, and keeping pace with new technology trends can feel impossible. That’s where Managed Service Providers (MSPs) really come into their own.
An MSP is essentially an external partner who takes responsibility for some or all aspects of your IT estate—everything from daily support and monitoring to cybersecurity, backup, and strategic advice. For SMEs, this isn’t just about outsourcing technical problems; it’s about unlocking real business value.
Cost Efficiency:
Most SMEs can’t justify a full, in-house IT team. MSPs give you access to a broad range of skills and experience, but only when you need them. This flexible approach helps you avoid unnecessary overheads.
Proactive Support and Security:
Instead of just reacting to problems, good MSPs spot issues before they escalate. That means better uptime, faster response times, and a reduced risk of cyber threats.
Focus on Core Business:
Let’s face it, most SMEs aren’t in business to manage servers or patch laptops. Handing over IT operations allows your team to concentrate on growth, innovation, and customer experience.
Access to Latest Technology:
MSPs keep up with trends so you don’t have to. Whether it’s adopting cloud services, rolling out remote working solutions, or enhancing security, you get the benefit of new tech without the learning curve.
Strategic Guidance:
The best MSPs don’t just keep the lights on—they become trusted advisors. They’ll help you plan for the future, scale up (or down) as your needs change, and ensure IT underpins your long-term business goals.
Cybersecurity remains one of the biggest risks facing SMEs, yet many lack the expertise or resources to tackle it properly. MSPs bring a wealth of experience here, implementing best practices, monitoring for threats, and ensuring you meet compliance requirements. It’s peace of mind you simply can’t put a price on.
Not all MSPs are created equal. It pays to do your homework—look for partners with a proven track record in your sector, strong customer references, and a commitment to understanding your business. Communication is key: a good MSP should be an extension of your team, not just another vendor.
For SMEs, the right MSP can turn IT from a headache into a genuine strategic advantage. By tapping into external expertise, you’re free to focus on what you do best—knowing your IT is in safe hands. In today’s fast-moving, security-conscious world, that’s not just a nice-to-have. It’s essential.
2024-12-10
"How edge computing is changing the face of IT infrastructure, and why its benefits are too significant for businesses to ignore."
In the ever-evolving landscape of information technology, the concept of edge computing has emerged as a game-changer, revolutionising how data is processed, stored, and analysed. As businesses strive for faster response times, improved reliability, and enhanced performance, the shift to edge computing represents a paradigm shift in IT infrastructure.
Traditionally, computing tasks have been performed in centralised data centres, where large amounts of data are processed and stored. While this model has served its purpose well, it is not without its limitations, particularly in an era marked by the proliferation of Internet of Things (IoT) devices, autonomous systems, and real-time applications.
Enter edge computing – a decentralised approach that brings computation and data storage closer to the source of data generation, whether it be a factory floor, a retail store, or a smart city environment. By leveraging edge computing, businesses can reduce latency, alleviate bandwidth constraints, and improve overall system performance, thereby enabling new possibilities for innovation and efficiency.
One of the key drivers behind the adoption of edge computing is the explosive growth of IoT devices. With billions of connected devices expected to come online in the coming years, traditional cloud-based architectures may struggle to keep pace with the sheer volume of data generated at the edge. Edge computing offers a solution by processing data locally, near the point of origin, before transmitting only relevant information to the cloud for further analysis and storage.
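To make the pattern concrete, here is a minimal sketch of edge-side filtering in Python: the device keeps a rolling local baseline and forwards only anomalous readings, rather than streaming everything to the cloud. The sensor, window size and threshold are illustrative assumptions.

```python
import random
import statistics

WINDOW = 60        # readings retained for the local baseline (illustrative)
MIN_SAMPLES = 10   # wait for a minimal baseline before judging anomalies

def read_sensor() -> float:
    """Stand-in for a real sensor read (e.g. temperature)."""
    return 20.0 + random.gauss(0, 0.5)

readings: list[float] = []

def process_reading(value: float) -> None:
    """Keep the raw stream local; forward only readings that deviate
    meaningfully from the recent baseline."""
    readings.append(value)
    del readings[:-WINDOW]               # retain only the most recent window
    if len(readings) < MIN_SAMPLES:
        return
    mean = statistics.fmean(readings)
    stdev = statistics.stdev(readings)
    if stdev and abs(value - mean) > 3 * stdev:
        # In a real deployment this would POST to the cloud ingest endpoint.
        print(f"Anomaly: {value:.2f} (baseline {mean:.2f})")

for _ in range(600):                      # simulate ten minutes of samples
    process_reading(read_sensor())
```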
Moreover, edge computing holds immense potential for industries where real-time decision-making is critical, such as manufacturing, healthcare, transportation, and finance. By processing data at the edge, organisations can minimise latency and respond to events in near real-time, leading to improved operational efficiency, enhanced safety, and better customer experiences.
However, the transition to edge computing is not without its challenges. Managing distributed infrastructure, ensuring data security and privacy, and maintaining interoperability with existing systems are just a few of the hurdles that businesses must overcome. Moreover, edge computing requires a rethinking of traditional IT architectures and investment in specialised hardware and software solutions.
Despite these challenges, the benefits of edge computing are too significant to ignore. As businesses continue to embrace digital transformation and strive for competitive advantage, the shift to edge computing represents a logical evolution of IT infrastructure. By harnessing the power of edge computing, organisations can unlock new opportunities for innovation, agility, and growth in an increasingly interconnected world.
2024-06-10
"The recent London hospitals incident shows that the true impact of cyber attacks goes far beyond the IT department—and it’s time every organisation paid attention."
Cyber attacks are not just an “IT problem”—they can have serious ramifications for the entire organisation, whatever the sector. The recent attacks on London hospitals, widely covered in the media, are a stark reminder that operational disruption, patient care, and even public trust can be put at risk by a single successful breach.
Read the BBC article for more on the ground impacts.
A single vulnerability—whether it’s a human error or a system flaw—is all it takes for cyber criminals to gain entry. The group behind the recent attack has previously targeted automotive firms, Australian courts, and charities like the Big Issue, proving this isn’t just a healthcare problem. It’s an everyone problem.
To prepare for and help prevent cyber attacks, here are some key strategies:
User Training and Awareness
People remain the most unpredictable element in any security plan. No matter how strong your technical defences, all it takes is one person clicking a bad link or visiting a dodgy site to open the door. Ongoing training and awareness programmes are essential.
System Security Fundamentals
Patch promptly, enforce multi-factor authentication, apply least privilege, segment networks. And the list goes on.
Disaster Recovery
If a breach does happen, a robust disaster recovery plan and up-to-date backups are absolutely critical. All too often, disaster recovery is tomorrow’s task until it’s too late. Make sure plans are current, tested, and that everyone knows what to do if the worst happens.
What do you think we should be prioritising? Is your organisation prepared for the next cyber attack?
2023-11-02
"IT professionals are essential for SME growth, security, and digital transformation—but do smaller businesses really recognise their value?"
Do SMEs know how IT can benefit them?
In an era driven by digital transformation, the role of Information Technology (IT) professionals has become paramount for businesses of all sizes. However, the question remains: do small and medium-sized enterprises (SMEs) and startups truly grasp the significance of IT professionals in their operations?
In the fast-paced world of entrepreneurship, SMEs and startups often find themselves juggling multiple tasks with limited resources. In such an environment, the value of IT professionals might not always be immediately apparent. Yet, overlooking the importance of IT expertise can have profound implications for the success and sustainability of these businesses.
First and foremost, IT professionals bring specialised knowledge and skills that are essential for leveraging technology to streamline processes, enhance productivity, and drive innovation. From setting up and maintaining network infrastructure to developing custom software solutions, IT professionals play a pivotal role in optimising business operations.
Moreover, in today's digital landscape, cybersecurity threats loom large, posing significant risks to businesses of all sizes. SMEs and startups are not exempt from these threats; in fact, they may be even more vulnerable due to limited cybersecurity measures. IT professionals possess the expertise to implement robust security protocols, safeguarding sensitive data and protecting against cyber attacks.
IT professionals contribute to strategic decision-making by providing insights into emerging technologies and trends that can give businesses a competitive edge. Whether it's adopting cloud computing solutions, harnessing the power of big data analytics, or implementing Internet of Things (IoT) devices, IT professionals help SMEs and startups stay ahead of the curve.
Despite the undeniable benefits that IT professionals bring to the table, there are challenges that SMEs and startups may face in fully recognising their importance. One such challenge is the perception of IT as a cost centre rather than an investment. However, viewing IT expenditures through the lens of long-term value creation can shift this mindset, highlighting the role of IT professionals as enablers of growth and efficiency.
While outsourcing IT services to Managed Service Providers (MSPs) can provide access to specialised expertise, partnering with an MSP offers unique advantages for SMEs and startups. MSPs not only bring technical know-how but also provide proactive monitoring, maintenance, and support services, ensuring continuous uptime and reliability. By entrusting their IT needs to an MSP, businesses can benefit from cost-effective solutions, scalable services, and peace of mind, allowing them to focus on their core operations and strategic objectives. This collaborative approach fosters a symbiotic relationship where SMEs and startups can leverage the expertise and resources of MSPs to navigate the complexities of the digital landscape effectively.
The bottom line is, SMEs and startups must recognise the indispensable role of IT professionals in driving their success and competitiveness. By embracing IT expertise as a strategic asset rather than a mere operational necessity, businesses can unlock a world of opportunities for growth, innovation, and resilience in an increasingly digital world. Investing in IT professionals is not just about staying technologically relevant; it's about future-proofing the business and laying the foundation for sustained success.