77% of Security Leaders Would Sack Phishing Victims
New research from Arctic Wolf shows that most security leaders say they would sack staff who fall for phishing scams, even as incidents rise and leaders themselves admit to clicking malicious links.
Hardening of Attitudes
Arctic Wolf’s 2025 Human Risk Behaviour Snapshot reveals that 77 per cent of IT and security leaders say they have (or would) sack an employee for falling for a phishing or social engineering scam, up from 66 per cent in 2024. The report attributes this sharp rise to a significant hardening of attitudes among security professionals, despite continuing increases in attack volume and breach rates.
The Scale
The study, which surveyed more than 1,700 IT leaders and end users globally, found that 68 per cent of organisations suffered at least one breach in the past year. The UK and Ireland, for example, recorded some of the steepest rises, partly due to high-profile incidents in the retail sector. Arctic Wolf notes that many firms are still failing to implement basic measures, with only 54 per cent enforcing multi-factor authentication (MFA) for all users.
Sacking Doesn’t Solve The Problem
The same report also found that organisations taking an education-first approach rather than firing staff saw an 88 per cent reduction in long-term human risk. According to Arctic Wolf’s Chief Information Security Officer, Adam Marrè, “Terminating employees for falling victim to a phishing attack may feel like a quick fix, but it doesn’t solve the underlying problem.”
A Gap Between Confidence And Capability
The findings of the report appear to highlight a growing gap between confidence and capability. For example, three-quarters of leaders said they believed their organisation would not fall for a phishing attack, yet almost two-thirds admitted they have clicked a phishing link themselves, and one in five said they failed to report it.
Corrective Action Instead of Dismissal
It should be noted that, in the same survey, more than six in ten leaders said they had taken corrective action against employees who fell for phishing scams by restricting or changing access privileges, which Arctic Wolf suggests is a more constructive approach than dismissal.
Executives Are Valuable Targets For Cybercriminals
In fact, the company’s own data also shows that 39 per cent of senior leadership teams were targeted by phishing and 35 per cent experienced malware infections, highlighting how executives themselves are often the most valuable targets for attackers.
“When leaders are overconfident in their defences while overlooking how employees actually use technology, it creates the perfect conditions for mistakes to become breaches,” Marrè said. He added that the most secure organisations “pair strong policies and safeguards with a culture that empowers employees to speak up, learn from errors, and continuously improve.”
Confidence Vs Behaviour
The Arctic Wolf report appears to highlight a clear contradiction. While most security leaders view phishing as a frontline employee issue, they are statistically among the most likely to make the same mistakes. Many also admit to disabling or bypassing security systems. For example, 51 per cent said they had done so in the past year, often claiming that certain measures “slowed them down” or made their work harder.
This gap between stated policy and personal practice is what Marrè describes as “a major blind spot and degree of hubris among some security leaders.” The report concludes that leadership culture sets the tone for the rest of the organisation, and that inconsistency at the top erodes credibility and weakens defences.
Who Is Really Falling For Phishing In 2025?
The question of who gets caught out most is not as simple as it might appear. For example, Arctic Wolf’s data indicates that senior staff, not junior employees, are often prime targets because of their privileged access and decision-making authority. The company found that nearly four in ten executive teams experienced phishing attempts, compared with lower rates among general staff.
Other research appears to support this pattern. For example, Verizon’s 2025 Data Breach Investigations Report confirms that social engineering remains one of the top causes of data breaches, accounting for more than two-thirds of all initial intrusion methods. Its analysis identifies finance, healthcare, education, and retail as the most heavily targeted sectors. Attackers exploit trust, urgency, and routine workflows to trick users into sharing credentials or downloading malware.
New Hires More Likely To Click
Also, a mid-2025 study by Keepnet, reported by Help Net Security, found that 71 per cent of new hires clicked on phishing emails during their first 90 days, making them 44 per cent more likely to fall victim than longer-serving staff. The main reasons were unfamiliar internal systems, a desire to respond quickly to apparent authority figures, and inconsistent onboarding security training. The same research found that structured, role-specific training reduced click rates by around 30 per cent within three months.
Retail Legacy Systems An Issue
Retail has also seen a marked increase in phishing incidents across the UK and Ireland. Arctic Wolf attributes this to the industry’s reliance on legacy systems, seasonal sales spikes, and the complexity of managing large volumes of customer data. The company says these factors have made retail “a prime target” for opportunistic and scalable attacks.
Can Employers Really Sack Staff For Clicking A Phishing Email?
In the UK, simply sacking an employee for falling for a phishing email is legally possible but rarely straightforward. For example, under the Advisory, Conciliation and Arbitration Service (Acas) Code of Practice, an employer can only dismiss fairly if they have a valid reason, such as misconduct or capability, and follow a fair and reasonable procedure.
For a dismissal to be lawful, the employer must investigate properly, give the employee a chance to respond, and ensure the sanction is proportionate. Even where a phishing incident causes financial loss or reputational damage, the question is whether the individual acted negligently or was misled despite reasonable training and policies. In most cases, a first-time mistake caused by deception would not actually meet the threshold for gross misconduct.
Unfair Dismissal?
It’s worth noting here that employees with two years’ service can bring a claim for unfair dismissal if they believe the reason or process was unreasonable. Employment tribunals are required to take the Acas Code into account, and may increase or reduce compensation by up to 25 per cent if either side fails to follow it. This means employers that act punitively without clear evidence or consistent practice could face costly legal challenges.
Most employment lawyers, therefore, recommend a corrective rather than disciplinary response, especially where the organisation’s training or technical safeguards may have been insufficient. Arctic Wolf’s data reflects this tendency, with many leaders actually opting to limit access rights rather than dismiss staff outright after a phishing incident.
Ethics And Culture
Beyond legality, there is an ethical debate to consider here, one that focuses on culture and transparency. For example, the UK’s National Cyber Security Centre (NCSC) advises that creating a “no-blame reporting culture” is one of the most effective ways to reduce security risk. Its guidance stresses that employees should feel safe to report suspicious emails or mistakes immediately, without fear of reprisal.
In fact, it is well known that when punishment is the first response, employees often stay silent. Arctic Wolf’s own findings appear to bear this out, i.e., one in five security leaders who clicked a phishing link failed to report it. That silence can allow breaches to escalate before they are detected.
Human Error Inevitable
Security experts argue that treating human error as inevitable, and training people to respond effectively, is far more effective than zero-tolerance policies. Marrè says that “progress comes when leaders accept that human risk is not just a frontline issue but a shared accountability across the organisation.” He advocates regular, engaging training that reflects real threats, backed by leadership example and open communication.
The Double Standard In Practice
The data from this and other reports appears to paint a clear picture of contradiction at the top. For example, many of the same leaders who advocate sacking staff for phishing errors have clicked links themselves or disabled controls that protect the wider organisation. Arctic Wolf’s report describes this as “a culture of ‘do as I say, not as I do’,” warning that it undermines credibility and increases exposure to social engineering attacks.
Phishing Now More Sophisticated
Another important factor to take into account here is that phishing techniques have grown more sophisticated. For example, attackers now use AI-generated emails, cloned websites, and real-time chat-based scams to trick users into sharing credentials. Even experienced professionals can, therefore, struggle to spot these messages, particularly when they appear to come from known suppliers or senior colleagues.
AI Supercharges Phishing Success
Microsoft’s 2025 Digital Defence Report shows that AI-generated phishing emails are 4.5 times more likely to fool recipients, achieving a 54 per cent click-through rate compared with 12 per cent for traditional scams. The company says this surge in realism and scale has made phishing “the most significant change in cybercrime over the last year”.
Microsoft also estimates that AI can make phishing campaigns up to 50 times more profitable, as attackers use automation to craft messages in local languages, tailor lures, and launch mass campaigns with minimal effort. Beyond email, AI is now being used to scan for vulnerabilities, clone voices, and create deepfakes, transforming phishing into one of the fastest-growing and most lucrative attack methods worldwide.
Initial Compromise Comes From Phishing
Industry-wide data continues to show that phishing is the most common initial attack vector in business email compromise, ransomware, and credential theft cases. Verizon’s latest data shows phishing accounts for roughly 73 per cent of initial compromise methods, followed by previously stolen credentials. These statistics underline how difficult it is to eliminate human error entirely, even in well-trained environments.
Arctic Wolf argues that genuine progress actually requires leading by example rather than blaming employees. In its report, the company’s closing recommendations include continuous education, practical simulations, and building a culture that rewards honesty over silence. Its research concludes that organisations where employees feel confident to report mistakes are significantly less likely to experience repeat incidents, and far more likely to detect breaches early.
What Does This Mean For Your Business?
The findings appear to highlight a cultural challenge within cyber security. Punishing individuals for mistakes that even experienced leaders admit to making risks undermining the very trust and openness that strong defences depend on. The evidence shows that while technical safeguards such as MFA and endpoint protection are essential, they are not enough on their own. What really differentiates resilient organisations is how they handle human error, whether they choose to learn from it or treat it as grounds for dismissal.
For UK businesses, the implications are significant. A strict zero-tolerance policy towards phishing may appear decisive, but it can also damage morale, suppress reporting, and expose employers to potential legal and reputational risks. Dismissing staff without due process could also lead to unfair dismissal claims, while a culture of fear can discourage the transparency needed to contain attacks quickly. By contrast, firms that take a measured, education-focused approach tend to see fewer repeat incidents, faster recovery times, and stronger employee engagement in security.
The message from Arctic Wolf’s data is that leadership example matters most. When senior executives model good cyber hygiene, acknowledge their own vulnerabilities, and support open communication, staff are far more likely to follow suit. Creating an environment where everyone feels responsible for reporting threats, and confident they will be supported for doing so, delivers a far greater return than any punitive measure.
For regulators, investors, training providers and others, the findings reinforce the importance of human-centred strategies that combine accountability with education. As phishing continues to evolve in sophistication, organisations across all sectors must balance clear policy enforcement with a recognition that even the best-informed professionals can make mistakes. The organisations that respond to that reality with fairness, transparency, and leadership integrity will be the ones best equipped to withstand the next wave of attacks.
Microsoft Warns: Shadow AI Rampant in UK Offices
Most UK employees are now using unapproved AI tools at work every week, according to new Microsoft research, raising fresh questions about security, privacy, and corporate control over artificial intelligence.
What Microsoft Found
Microsoft’s latest UK study reports that 71 per cent of employees have used unapproved consumer AI tools at work, and 51 per cent continue to do so weekly. The research, conducted by Censuswide in October 2025, highlights a growing trend known as “Shadow AI”, i.e., the use of artificial intelligence tools not sanctioned by employers. The survey gathered the views of 2,003 UK employees aged 18 and over. The sample included workers from financial services, retail, education, healthcare, and other sectors, with at least 500 respondents each from large businesses and public sector organisations.
Typical Uses of Shadow AI
According to Microsoft’s study, typical uses of Shadow AI include drafting or replying to workplace communications (49 per cent), preparing reports and presentations (40 per cent), and even carrying out finance-related tasks (22 per cent). Many employees say they turn to these tools because they are familiar or easy to access, with 41 per cent admitting they use the same tools they rely on in their personal lives. Another 28 per cent said their employer simply doesn’t provide an approved alternative.
Limited Awareness of the Risks
According to the study, awareness of the risks remains limited, which is a key part of the problem. For example, only 32 per cent of respondents said they were concerned about the privacy of customer or company data they enter into AI tools, while 29 per cent expressed concern about the potential impact on their organisation’s IT security.
As Darren Hardman, CEO of Microsoft UK & Ireland, says: “UK workers are embracing AI like never before, unlocking new levels of productivity and creativity. But enthusiasm alone isn’t enough.” He adds: “Businesses must ensure the AI tools in use are built for the workplace, not just the living room.”
Why It Matters So Much Now
The research reflects a wider cultural change in how employees are using artificial intelligence (AI) to handle everyday tasks. For example, Microsoft estimates that generative AI tools and assistants are now saving workers an average of 7.75 hours per week. Extrapolated across the UK economy, that equates to around 12.1 billion hours a year, or approximately £208 billion worth of time saved (according to analysis by Dr Chris Brauer of Goldsmiths, University of London).
That potential productivity boost most likely explains much of the enthusiasm around generative AI. However, it also highlights why workers are bypassing official channels. For example, when the tools provided by employers feel restrictive, employees often reach for whatever gets the job done fastest, even if that means using consumer platforms that fall outside company governance and data protection frameworks.
What Is ‘Shadow AI’?
The term “Shadow AI” is borrowed from “shadow IT”, which is a long-standing issue where employees use unapproved hardware or software without authorisation. In this case, it refers to staff using consumer AI tools such as public chatbots or online assistants to support work tasks. One potential problem with this is that these platforms often store or learn from user input, which may include company or customer data, creating potential security and compliance problems.
Organisations that allow this kind of behaviour to go unchecked, therefore, risk breaching UK data protection laws, regulatory obligations, or intellectual property rights (not to mention giving away company secrets). The British Computer Society (BCS) and other professional bodies have previously warned that shadow AI could expose firms to data leaks, non-compliance, and reputational harm if sensitive material is entered into consumer models.
The Real Risks for Businesses
The main security concern is data leakage, i.e., where employees enter sensitive company information into AI tools that may store or process data outside of approved systems. This could include confidential documents, client details, or financial data. Once that information leaves the organisation’s control, it may be impossible to delete or track, potentially breaching data protection law or confidentiality agreements.
Another issue that’s often overlooked by businesses is attack surface expansion. For example, the more third-party AI tools are used, the greater the number of external systems handling company information. This increases the likelihood of phishing, prompt injection attacks, and other forms of misuse. Also, there is the problem of auditability. When AI tools operate outside an organisation’s infrastructure, they leave no record of what data was used or how it was processed, making compliance monitoring almost impossible.
Earlier this year, a report by Ivanti found that nearly half of office workers were using AI tools that were not provided by their employer, and almost one-third admitted keeping it secret. Some employees even said they used unapproved AI to gain an “edge” at work, while others feared their company might ban it altogether. The study echoed Microsoft’s findings that even sensitive data, such as customer financial information, is being fed into public models.
Why Employees Still Do It
Despite the risks, many employees say they rely on consumer AI because it helps them manage workloads and meet rising productivity expectations. Microsoft’s study also found that attitudes towards AI have become far more positive over the course of 2025. For example, 57 per cent of employees now describe themselves as optimistic, excited or confident about AI (up from 34 per cent in January). Also of note, it seems the proportion of workers saying they “don’t know where to start with AI” has dropped from 44 per cent to 36 per cent, while more employees say they understand how their company uses the technology.
For many, the motivation is actually practical rather than rebellious. For example, AI chatbots help draft content, summarise notes, create reports and presentations, or even analyse spreadsheets. When deadlines are tight and workloads are high, these capabilities can make a tangible difference, especially if the employer’s own tools are limited or slow to adopt new technology.
A Balanced View
While much of the discussion has focused on the dangers of shadow AI, some experts suggest it can also be a useful indicator of where innovation is happening inside a business. For example, at the Gartner Security and Risk Management Summit in London, analysts Christine Lee and Leigh McMullen argued that rather than trying to eliminate shadow AI entirely, companies could benefit by identifying which tools employees are already finding valuable. With the right governance and security controls, those tools could be formally adopted or integrated into approved workflows.
In this sense, shadow AI can act as an early warning system for unmet needs. If, for example, marketing teams are using public generative AI tools to create campaign content, that may reveal a gap in internal creative resources or digital support. Security teams could then review those external tools, assess the risks, and replace them with enterprise-grade equivalents that meet the same needs safely.
Gartner’s approach reflects a growing recognition that employees are often ahead of policy when it comes to technology adoption. Turning shadow AI into an opportunity for collaboration, rather than conflict, could help businesses strike a balance between innovation and security.
What Organisations Can Do Next
Analysts and security experts are urging employers to start by improving visibility. That means identifying which AI tools are already being used across the organisation, and for what purposes. With this in mind, many companies are now running staff surveys or using software discovery tools to build a clearer picture of how generative AI is being adopted.
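By way of illustration, a first pass at that visibility step can be as simple as counting requests to well-known consumer AI services in existing proxy or gateway logs. The sketch below assumes a CSV log with a “host” column and a hand-picked domain list; both are illustrative assumptions rather than a definitive inventory of shadow AI services.

```python
# Hypothetical sketch: flag visits to well-known consumer AI domains in a
# proxy log. The domain list and log format are illustrative assumptions,
# not a definitive inventory of "shadow AI" services.
import csv
from collections import Counter

CONSUMER_AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "gemini.google.com",
    "claude.ai", "perplexity.ai", "poe.com",
}

def summarise_shadow_ai(log_path: str) -> Counter:
    """Count requests per consumer AI domain in a CSV proxy log.

    Assumes one row per request with a 'host' column.
    """
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            if host in CONSUMER_AI_DOMAINS:
                hits[host] += 1
    return hits

if __name__ == "__main__":
    for domain, count in summarise_shadow_ai("proxy_log.csv").most_common():
        print(f"{domain}: {count} requests")
```

In practice, dedicated SaaS-discovery or CASB tooling would replace a script like this, but even a crude count can reveal which teams are reaching for which tools, and therefore where approved alternatives are most urgently needed.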
Once the extent of use is known, companies can then focus on education. Clear, accessible policies are essential, i.e., explaining in plain English what kinds of data can be entered into AI tools, what cannot, and why. Training should emphasise the risks of using consumer AI platforms, particularly when handling client, financial, or personal information.
Enterprise-Grade Is Safer
The final step is to offer secure alternatives. Enterprise-grade AI assistants, such as those integrated into Microsoft 365 or other workplace systems, are designed to protect sensitive data and maintain compliance. These tools include encryption, access controls, audit logs, and data-loss prevention measures that consumer apps typically lack. As Microsoft’s Darren Hardman put it: “Only enterprise-grade AI delivers the functionality employees want, wrapped in the privacy and security every organisation demands.”
Where Shadow AI Is Most Common
Microsoft’s data shows that shadow AI use is most prevalent among employees in IT and telecoms, sales, media and marketing, architecture and engineering, and finance and insurance. This is likely to be because these are industries where high workloads, creative output, or data handling make AI assistants especially appealing. As confidence grows and tools become more sophisticated, use across sectors is expected to increase further.
Shaping Culture
The Microsoft research suggests this trend is already reshaping workplace culture. For example, more employees now see AI as an essential part of their organisation’s success strategy, a figure that has more than doubled from 18 per cent in January to 39 per cent in October. Globally, Microsoft’s Work Trend Index reports that 82 per cent of business leaders view 2025 as a turning point for AI strategy, with nearly half already using AI agents to automate workflows.
What Does This Mean For Your Business?
The rise of shadow AI appears to present UK businesses with a clear crossroads between risk and reward. Employees are demonstrating that AI can deliver genuine productivity gains, but their widespread use of unapproved tools exposes gaps in governance and digital readiness. For many organisations, this is not simply a security issue but a sign that workplace innovation is moving faster than policy.
In practical terms, the Microsoft findings suggest that companies which fail to provide secure, accessible AI tools will continue to see staff seek out consumer alternatives. That makes the issue as much about culture and leadership as it is about technology. Building trust through transparency, and ensuring employees understand how and why AI is being managed, will be critical to balancing productivity with protection.
For IT leaders, the challenge now lies in developing frameworks that enable safe experimentation without undermining compliance. That means investing in enterprise-grade AI infrastructure, tightening oversight of data use, and introducing training that connects security policy with real-world tasks. Businesses that achieve this balance will be able to harness AI’s benefits while maintaining control over how it is deployed.
The implications extend beyond individual firms. For example, regulators, industry bodies, and even customers have a stake in how securely AI is used in the workplace. As more sensitive data flows through AI systems, the pressure will grow for clear accountability and transparent governance. The Microsoft findings make it clear that AI adoption in the UK is no longer confined to innovation teams or pilot projects; it is now embedded in everyday work. How organisations respond will determine whether this new era of AI-driven productivity strengthens trust and competitiveness, or exposes deeper vulnerabilities in the digital workplace.
Government to CEOs: “Print Backups Of Cyber Plans”
The UK government has written to chief executives across the country urging them to keep physical, offline copies of their cyber contingency and business continuity plans, as the number of severe cyber attacks continues to rise.
Why The Government Is Acting Now
The move follows a sharp increase in what officials call “nationally significant” cyber incidents. In its latest annual review, the National Cyber Security Centre (NCSC) reported handling 429 cyber incidents over the past year, of which 204 were classed as nationally significant, more than double the previous year’s total of 89. Eighteen of those were categorised as “highly significant”, marking a 50 per cent rise.
These figures highlight a growing problem for UK organisations. Attacks on major companies have recently disrupted production lines, logistics operations, and supply chains. The government says this shows how cyber threats now pose not only a security risk but also a direct threat to jobs and the wider economy.
Cyber Resilience Should Be A Board Level Priority
Technology Secretary Liz Kendall, Chancellor Rachel Reeves, Business Secretary Peter Kyle, Security Minister Dan Jarvis, and the heads of both the NCSC and the National Crime Agency have jointly signed letters to business leaders, including all FTSE 350 companies. The message is that cyber resilience must become a board-level priority, and organisations must be ready to operate without IT systems for extended periods if necessary.
What The Letter Tells CEOs To Do
The letter from the government essentially makes three key recommendations to company leaders:
1. It says they should treat cyber resilience as a governance issue and align with the government’s new Cyber Governance Code of Practice.
2. It recommends that all organisations sign up to the NCSC’s Early Warning service, which alerts firms to potential vulnerabilities or active threats.
3. It advises implementing the Cyber Essentials scheme, both within their own operations and throughout their supply chains.
Crucially, the letter also stresses the importance of keeping copies of critical plans “accessible offline or in hard copy”, including details of how to communicate and coordinate during an IT failure. This is part of a wider government effort to embed what the NCSC calls “resilience engineering”, an approach that focuses on anticipating, absorbing, recovering from, and adapting to cyber attacks.
The Logic Behind Paper Copies
Although it may sound strange in what is increasingly a digital world, the advice to hold printed plans is intended to be a practical response to one of the key realities of modern cyber incidents. For example, when ransomware or destructive malware locks or wipes digital systems, even backups stored in the cloud can become inaccessible. In those situations, an organisation needs something it can rely on immediately, i.e., contact lists, instructions, and decision trees that are available without power, network access, or authentication.
The NCSC’s annual review explains that organisations should have “plans for how they would continue to operate without their IT, and rebuild that IT at pace, were an attack to get through.” Storing that information offline ensures that teams can still coordinate a response even if email, messaging, or identity systems have been taken down.
From Prevention To Resilience
The government’s letter reflects a wider change in strategy from simply preventing attacks to building the ability to withstand them. For example, the NCSC now encourages what it calls resilience engineering, i.e., designing systems and processes that can recover quickly after disruption.
That includes maintaining immutable backups that cannot be encrypted or tampered with, segmenting networks to prevent attacks spreading, testing recovery procedures, and running scenario exercises that simulate complete loss of IT. This approach assumes that no organisation can be completely immune to attack, so readiness and rapid recovery become essential.
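To make the “immutable backups” idea concrete, the sketch below shows one common way of achieving it with an S3-compatible object store, using a compliance-mode object lock so that a backup cannot be altered or deleted until its retention date passes. The bucket name, retention window, and use of boto3 are assumptions for illustration; this is not drawn from the NCSC guidance itself.

```python
# Illustrative sketch only: writing a backup object that cannot be deleted
# or overwritten until a retention date passes, using S3 Object Lock in
# compliance mode. Bucket name and retention window are assumptions; the
# bucket must have been created with Object Lock enabled.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")

def store_immutable_backup(bucket: str, key: str, data: bytes, days: int = 30):
    """Upload a backup with a compliance-mode lock lasting `days` days."""
    retain_until = datetime.now(timezone.utc) + timedelta(days=days)
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=data,
        ObjectLockMode="COMPLIANCE",          # cannot be shortened or removed
        ObjectLockRetainUntilDate=retain_until,
    )

# Example (hypothetical names):
# store_immutable_backup("resilience-backups", "plans/2025-11.tar.gz", blob)
```

Compliance mode is the stricter of the two S3 lock modes, i.e., even the bucket owner cannot remove the lock early, which is what makes such backups resilient to ransomware that has acquired administrator credentials.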
Warnings From The NCSC
In its latest report, the NCSC said cyber security had become “a matter of business survival and national resilience.” The agency noted that half the incidents it managed in the past year met the top three severity categories, which cover impacts to government, essential services, or large sections of the public and economy.
The NCSC is urging organisations to make themselves as hard a target as possible, warning that hesitation in improving resilience leaves them exposed. It is also promoting its Cyber Action Toolkit for smaller firms, which provides simple step-by-step measures to improve security and response capabilities.
Support From The Security Industry
Cybersecurity professionals appear to have broadly supported the government’s message, saying it reflects lessons learned from recent incidents where businesses lost access to key systems for weeks. Industry experts have described the advice as practical rather than symbolic, noting that while printed plans may seem old-fashioned, they can be vital when digital tools fail.
The concept of treating cyber security like health and safety, something every employee understands as part of everyday working life, has gained traction in recent years. The government’s call reinforces this by urging boards to build resilience into core operations rather than treating it as an optional add-on.
Preparation
For larger companies, the message essentially means that cyber risk must now be reported and discussed at board level, with directors accountable for ensuring readiness. That includes confirming who would take charge in an emergency, how to communicate without email, and where physical copies of key documents are stored.
For smaller firms, the focus is more on preparation. For example, the NCSC’s free services, including the Early Warning system and Cyber Essentials certification, are designed to reduce the burden of building basic protection. Having physical backup plans does not replace digital defences, but it ensures that even in the worst-case scenario, there is a clear process for keeping the business running.
The government also highlights the benefits of requiring suppliers to meet similar standards, as supply chain weaknesses can often be exploited by attackers. Making resilience part of procurement policies helps reduce the risk of disruption spreading between organisations.
The Advantage of Offline Contingency Plans
A key advantage of offline contingency plans is that they allow teams to act immediately when systems go down. For example, staff can access emergency contacts, escalate issues, and follow recovery steps without waiting for IT access to return. In critical industries, such as healthcare, manufacturing, and logistics, those minutes or hours can make the difference between a temporary disruption and a complete operational shutdown.
Organisations that follow the NCSC’s guidance can also expect tangible benefits. The agency notes that companies meeting Cyber Essentials standards are significantly less likely to make cyber insurance claims. Better planning also tends to reduce recovery times and financial losses.
Challenges And Concerns
Although there is broad support for the government’s recommendations, there are (inevitably) some practical and logistical challenges. For example, paper copies need to be updated regularly to reflect new systems and staff changes, and they must be stored securely to prevent sensitive information from being accessed or lost. Some companies have also expressed concern about the administrative burden of maintaining both digital and physical documentation.
Others question whether a focus on manual fallbacks could distract from investment in prevention. However, security experts argue that resilience and defence are complementary, i.e., both are necessary, and neither alone is sufficient.
For small and medium-sized enterprises, limited resources remain a concern. Even with free government tools, implementing and maintaining robust resilience measures can take time and expertise. Nonetheless, the government’s stance is that preparedness is no longer optional, given the rising frequency and severity of attacks.
The Bigger Picture
Ministers have said that further steps will follow, including continued promotion of the Cyber Governance Code of Practice and potential new requirements under the forthcoming Cyber Security and Resilience Bill.
The letters sent this month highlight a clear change in tone, to one where cyber resilience is no longer being treated as an IT issue, but as a matter of national and economic security. For UK businesses, the message is simply that if the screens go dark, the organisation should still be able to function, and that begins with having the right plans on paper.
What Does This Mean For Your Business?
The government’s intervention could be said to mark a notable moment in how cyber risk is now being framed, i.e., as a question of continuity and national resilience rather than purely technical defence. The decision to write directly to company chiefs shows the extent to which cyber attacks have moved from the IT department to the boardroom, becoming an operational, financial, and reputational issue that demands visible leadership. The emphasis on hardcopy plans might appear unusual in a digital economy, yet it underlines an uncomfortable truth, which is that digital systems are not invincible and that planning for their failure is now a core part of responsible management.
For UK businesses, this change could prove both challenging and beneficial. For example, it requires time, training, and discipline to maintain offline contingency plans and rehearse manual processes, but it also forces a clearer understanding of dependencies and critical operations. Those already investing in resilience may find themselves better protected from both financial losses and prolonged service disruption. Smaller firms, meanwhile, stand to gain from the free support and practical guidance now being promoted by the NCSC, which aims to bring consistent standards across the economy.
The wider implications reach beyond business. For government and regulators, the campaign is part of a long-term effort to build systemic strength in the face of increasingly complex attacks. For insurers and investors, it offers a signal that resilience planning is becoming a measurable component of good governance. For the public, it reinforces the expectation that essential services, from food distribution to healthcare, should be able to keep operating even when technology fails.
The government’s advice accepts that no cyber defence is perfect, but that preparedness can dramatically limit the impact. By putting resilience on paper as well as on screen, the UK’s leadership is attempting to bridge the gap between digital ambition and practical survivability. If businesses take that message seriously, the result may be a more stable and dependable digital economy, and one that can withstand not just the next attack, but the inevitable disruptions still to come.
Waymo’s Driverless Rides Coming Soon To London
Waymo has confirmed plans to bring its fully autonomous, driverless ride-hailing service to London in 2026, beginning supervised testing on public roads in the coming weeks.
Waymo, And What It Has Announced
Waymo, Alphabet’s (Google’s) autonomous driving company that began as Google’s self-driving car project in 2009, has announced its first major European expansion, with the goal of offering rides in London with no human driver next year. The service will start with Jaguar I-PACE electric vehicles fitted with the company’s “Waymo Driver” system, initially running with safety drivers as part of supervised trials before progressing to fully driverless testing. Once approved, passengers will be able to hail a Waymo ride using the company’s mobile app.
Working With Moove
The company said in its announcement that it is working closely with its London fleet operations partner, Moove, to handle vehicle readiness, charging, and cleaning, while Waymo says it will monitor the autonomous driving systems and provide roadside and rider support. Moove already manages Waymo’s fleets in the United States, where the company operates in Phoenix, San Francisco, Los Angeles, and Austin.
Commercial Launch Once Safety Benchmarks Are Met
Waymo says its driverless technology has already logged more than 100 million fully autonomous miles on public roads and completed more than 10 million paid rides. It will now begin a similar staged rollout in the UK, starting with data collection on London streets within weeks. The government’s fast-tracked pilot framework for self-driving taxis, due to begin in spring 2026, will allow Waymo to move towards a commercial launch once safety benchmarks are met.
Transport Secretary Heidi Alexander has publicly welcomed the news, describing it as “cutting-edge investment that will help us deliver our mission to be world leaders in new technology and spearhead national renewal that delivers real change in our communities.”
Why It Matters For London And The UK
Waymo’s arrival has essentially been hailed as a potential boost to innovation, jobs, and transport accessibility. For example, the UK government estimates that autonomous vehicle technology could create up to 38,000 skilled jobs and contribute billions to the economy over the next decade.
In transport terms, Waymo’s entry could add a new layer of mobility alongside London’s public transport network. The company has positioned its service as complementary rather than competitive, offering on-demand journeys for people who cannot easily use buses or trains, including those with visual impairments. The Royal National Institute of Blind People (RNIB) has called it “the potential dawn of a new era in independent mobility options for blind and partially sighted people.”
Waymo also argues that its technology could help make London’s roads safer. For example, the firm claims its vehicles are involved in five times fewer injury-causing collisions and twelve times fewer pedestrian injury crashes than human drivers. In the United States, Waymo’s internal safety data shows a 57 per cent reduction in police-reported crashes compared with human benchmarks.
Also, for businesses, the arrival of a dependable, 24/7 autonomous service could make cross-city travel faster and more predictable, helping business users and clients move between meetings or sites without relying on public transport schedules or limited late-night options.
How Safe Is It?
As could be expected, Waymo’s leadership insists that its technology is safe, and also that it already exceeds human performance under comparable conditions. The company is keen to highlight safety features, such as the system’s ability to continuously analyse surroundings using a combination of lidar, radar, and cameras to detect and respond to hazards faster than human reaction times.
However, independent research paints a bit more of a complex picture. For example, in the U.S., safety reporting shows that as the number of self-driving vehicles on the road increases, so too does the number of reported incidents. Between 2023 and 2025, reported monthly crashes involving automated driving systems in the U.S. rose from around 17 to over 100, according to federal data. Analysts note that this rise likely reflects broader deployment rather than declining safety, but it nonetheless highlights the technology’s current limitations.
It’s also worth noting here that Tesla’s assisted-driving software remains under investigation in the U.S. following reports of vehicles running red lights or drifting into the wrong lane, underlining the challenges of ensuring consistent safety in mixed-traffic environments. Waymo’s system differs significantly, i.e., it is fully autonomous, with no driver input, but regulators will expect similarly high levels of accountability once operations begin in the UK.
Oversight For The Pilot
Waymo’s UK pilot will also take place under the oversight of the Automated Vehicles Act, passed earlier this year. This legislation sets out the requirement that all autonomous vehicles must demonstrate safety “equivalent to or higher than that of a competent and careful human driver,” placing a clear burden of proof on companies before services can operate commercially.
What About The M25?
One question raised by London’s pilot is what the impact will be on traffic flow, especially around major routes such as the M25. Waymo’s vehicles, if proven capable of consistent speed regulation and lane discipline, could actually contribute to smoother traffic and fewer sudden braking incidents, both common causes of motorway congestion.
At the same time, however, an increase in ride-hailing trips could add more vehicles to already congested zones if services are not integrated with wider transport policies. The M25 corridor, where early testing will reportedly occur, may serve as a benchmark for how autonomous vehicles interact with dense, high-speed traffic and variable weather conditions. Transport analysts say this will be a critical test for proving the technology’s readiness in Europe’s busiest traffic environment.
Context And Competition
Waymo’s UK debut follows years of international expansion. For example, after launching in Phoenix in 2020, it has since added driverless services in Los Angeles, Austin, and San Francisco, and announced testing in Tokyo earlier this year.
London will be its second international location, but the competition is growing. Uber has signalled it is ready to put autonomous taxis on UK roads as soon as regulations allow, working with British AI startup Wayve on its self-driving platform. Tesla has also been testing its “Cybercab” concept in London, while in China, Baidu’s Apollo Go reported over two million driverless rides in the second quarter of 2025. In the United Arab Emirates, a driverless taxi trial is underway in Dubai.
These developments suggest that London is positioning itself at the forefront of Europe’s autonomous mobility race. For other UK cities, from Manchester to Bristol, Waymo’s announcement sends the message that regulators, infrastructure planners, and local authorities will need to prepare for autonomous vehicles becoming part of their long-term transport landscape.
What Passengers Can Expect
Waymo’s typical rollout pattern starts with supervised journeys for mapping and data validation, followed by fully driverless rides for invited users, before eventually opening to the public. In the United States, pricing is broadly comparable with services like Uber or Lyft, though initial service areas are often limited.
London passengers will most likely see Waymo’s distinctive Jaguar I-PACE vehicles operating in small zones at first, expanding as safety validation continues. The company says it will work with local authorities to ensure safe pick-up and drop-off points, manage kerbside access, and integrate with existing transport systems.
Accessibility and inclusivity will also be central themes. For example, Waymo has pledged to engage with disability groups and city planners to ensure the service supports those currently underserved by traditional transport options.
What About The Taxi Industry And Urban Transport?
The arrival of autonomous taxis will, no doubt, be closely watched by London’s black cab and private-hire drivers. If Waymo’s service proves reliable, it could capture demand for late-night or outer-London trips where traditional services are limited or expensive. However, human-driven taxis retain key advantages such as flexible routing, passenger reassurance, and the iconic status of London’s licensed cab trade.
Urban planners will also be watching how autonomous taxis affect congestion, parking, and emissions. If fleets can minimise “dead miles”, i.e., time spent driving empty between fares, there could be net benefits for efficiency. If not, extra vehicles could add to pressure on busy roads. The city’s Clean Air and Vision Zero targets will make regulators cautious about expanding operations too rapidly.
Caveats, Challenges And Public Perception
Despite the optimism, public trust in driverless taxis remains low. For example, a recent YouGov poll found that only 3 per cent of Britons said they would trust a driverless taxi “a great deal,” while 44 per cent said they would not trust one at all. When cost and convenience were equal, 85 per cent said they would still prefer a human driver.
This scepticism may take quite a bit of time to overcome. The rollout will, to a large extent, depend on demonstrable safety, transparent incident reporting, and collaboration with city authorities. London’s unpredictable weather, dense pedestrian zones, and historic road layouts will present significant technical challenges for any autonomous system.
Regulatory processes will also take time. Although the UK has set out an ambitious timeline with pilots from 2026 and full approval from 2027, every stage will require rigorous testing and certification. Technical setbacks, data-sharing requirements, or policy delays could easily shift those dates.
One bit of reassurance for potential users is that Waymo’s experience in the U.S. provides a fairly strong foundation, but proving itself in London’s unique environment will certainly be the company’s most complex challenge yet.
What Does This Mean For Your Business?
If Waymo succeeds in meeting its 2026 target, London could become a global proving ground for autonomous mobility rather than just a test site. The combination of dense traffic, unpredictable weather, and strict regulation makes it one of the toughest cities in the world for driverless technology. Delivering consistent safety performance here would give Waymo and its partners a powerful validation that could shape how similar services expand across Europe. If the technology falters, however, public trust could regress for years, delaying wider adoption and weakening investor confidence in the sector.
For the UK government, the pilot will test more than vehicles. For example, it will also measure how ready policy, data infrastructure, and local authorities are to manage driverless services at scale. The Automated Vehicles Act has created the legislative framework, but the next year will determine how those rules translate into real-world oversight and accountability. This will also be an early test of whether the government’s promise of thousands of skilled jobs and a multibillion-pound autonomous industry can be realised in practice.
Businesses will be watching closely too. Reliable autonomous ride-hailing could reduce employee travel time, improve logistics efficiency, and create new service opportunities across insurance, software, fleet maintenance, and data management. It could also reshape corporate transport strategies, particularly for firms operating across multiple city sites or late-night industries that rely on flexible mobility. However, companies will still need assurance that the service is secure, affordable, and operationally reliable before integrating it into everyday business use.
For Londoners, the arrival of Waymo’s driverless taxis could bring a change in how the city moves, interacts, and regulates shared transport space. Also, cyclists, pedestrians, and other road users will be watching closely to see whether automation genuinely reduces collisions or simply adds danger and complexity. For the taxi industry, it will raise new questions about fair competition and employment. For regulators, it will challenge how to ensure that technology designed to make roads safer also makes them fairer and more efficient for everyone.
In the end, what happens next will depend less on the technology itself and more on how responsibly it is deployed. Waymo has the experience and the data to make a strong case for safety and innovation, but London’s streets will be a real test. If the rollout is careful, transparent, and genuinely improves safety and access, it could mark the start of a quiet but historic transformation in how people and businesses move around one of the world’s most complex cities.
Company Check: Google’s New ‘Recovery Contacts’ Unlock Lost Accounts
Google has introduced Recovery Contacts, a new way for users to regain access to a locked Google Account by asking trusted friends or family members to confirm their identity.
What Google Has Announced
Google says Recovery Contacts is “a new option that lets users choose trusted friends or family members to help if they ever get locked out of their Google Account.” It is designed for situations where standard recovery routes, such as SMS codes or a passkey on a lost phone, are not available. The feature is rolling out now and can be set up at g.co/recovery-contacts for eligible personal accounts.
How Recovery Contacts Works
In terms of how it’s supposed to work, Google says users nominate people they trust as recovery contacts in the Security & sign-in section of their Google Account. If a user becomes locked out, they can select one of those contacts during the recovery flow and tap “Get number.” Google then shows a code that expires after 15 minutes. The user shares that code with their contact, who will see three options on their device and must choose the one that matches. If the correct number is selected, Google says it treats that as a strong signal of legitimate identity and proceeds with account recovery. Recovery contacts cannot see any data or access the user’s account at any stage.
Limits, Timing and Eligibility
Google says several safeguards have been put in place to prevent misuse. For example, up to 10 recovery contacts can be added, and each person must accept the invitation before being included. After acceptance, there is a seven-day waiting period before that contact becomes active for recovery. If someone declines, the user must wait four days before sending another invite. When a recovery contact is used, the code received is valid for only 15 minutes, meaning both parties must act promptly. Google notes that child accounts, Advanced Protection accounts, and Google Workspace accounts cannot add recovery contacts, although those same accounts can still serve as a contact for someone else. A single person can act as a recovery contact for up to 25 different primary accounts.
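Taken together, Google’s description amounts to a short-lived challenge combined with a multiple-choice match on the contact’s device. The sketch below models that flow and its published limits; all names and structures are hypothetical, based only on the description above, not on Google’s actual implementation.

```python
# Minimal sketch of the recovery flow and limits described above. All names
# are hypothetical; this models the published description, not Google's code.
import random
import secrets
import time

CODE_TTL_SECONDS = 15 * 60     # recovery code expires after 15 minutes
MAX_RECOVERY_CONTACTS = 10     # per account
ACTIVATION_WAIT_DAYS = 7       # after a contact accepts an invitation
REINVITE_WAIT_DAYS = 4         # cooldown after a contact declines

def issue_code() -> dict:
    """Issue the short-lived code shown to the locked-out user."""
    return {"code": f"{secrets.randbelow(100):02d}", "issued_at": time.time()}

def options_for_contact(challenge: dict) -> list[str]:
    """The contact sees three candidate numbers, only one of them correct."""
    options = {challenge["code"]}
    while len(options) < 3:
        options.add(f"{secrets.randbelow(100):02d}")
    shuffled = list(options)
    random.shuffle(shuffled)
    return shuffled

def verify(challenge: dict, chosen: str) -> bool:
    """Accept only a matching choice made within the 15-minute window."""
    if time.time() - challenge["issued_at"] > CODE_TTL_SECONDS:
        return False  # expired: both parties must act promptly
    return chosen == challenge["code"]
```

In the real service the challenge would, of course, live server-side and be bound to a specific recovery session; the sketch simply shows why a blind guess succeeds only one time in three, and only within the window.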
Why Is Google Doing This Now?
Account recovery has long been one of the most stressful aspects of online account management, and many users lose access when their phone number changes or a device with a passkey is lost. Google says the goal is to “strengthen account recovery and ensure access when it matters most.” The company has been steadily building towards a password-free future through technologies such as passkeys, and Recovery Contacts adds another layer of reassurance.
The move actually forms part of a wider package of privacy and security updates announced in mid-October, which also includes “Sign in with Mobile Number” for Android, spam link detection in Google Messages, and a “Key Verifier” for confirming encrypted chats. These collectively aim to reduce both account lockouts and the success rate of scams targeting Android users.
A New Type of Recovery
Traditionally, account recovery has relied on “something you have” such as a phone, or “something you know” such as a password. The difference with Recovery Contacts is that it introduces “someone you trust” into the process, i.e., it formalises what many people already do informally when locked out, which is turning to a friend for help. Google describes it as “a simple, secure way to turn to people you trust when other recovery options aren’t available.”
The Practical Benefits
For everyday users, Recovery Contacts should provide a safety net against permanent lockout from accounts holding vital information such as photos, documents and personal messages. The short validity of the recovery code makes it difficult for attackers to intercept, and the multiple-choice verification on the contact’s device prevents accidental approval of a fraudulent request.
For Android users, the linked “Sign in with Mobile Number” feature adds another useful safeguard in that it identifies all accounts linked with a particular phone number and allows verification using the previous device’s lock-screen passcode or pattern. This feature is being rolled out globally.
Who Can Use It and When
Recovery Contacts are rolling out now, though not every account will see the feature immediately. Google advises users to check eligibility through their account settings. Personal accounts are the primary focus, while Google Workspace and Advanced Protection users remain excluded. Workspace environments typically use hardware keys and administrator-managed recovery processes, which are considered more appropriate for professional use.
Business Users
For small businesses and sole traders using personal Google Accounts, Recovery Contacts could offer a straightforward but effective layer of protection. For example, losing access to a main account could halt operations if email and documents are tied to it, potentially costing time and money. Adding trusted family members or colleagues could, therefore, prevent prolonged downtime.
However, for organisations using Google Workspace, there is currently no change. Workspace recovery processes remain under administrator control, built around strict security policies that do not permit social recovery mechanisms. The seven-day activation delay after adding a contact also means businesses should prepare in advance rather than waiting until an issue arises.
Competitors and Industry Context
Apple actually introduced a similar system for iCloud users back in 2021, allowing trusted contacts to verify identity for Apple ID recovery. Meta also experimented with “trusted contacts” for Facebook accounts, although that feature was later discontinued. By adopting a comparable model, Google is bringing its ecosystem closer in line with other major platforms while maintaining a strong emphasis on user privacy.
Industry analysts note that this reflects a broader trend toward combining human trust with technical verification. While passkeys and biometrics have strengthened access control, human-assisted recovery provides a fallback that purely technological solutions cannot always guarantee.
Security Considerations and Criticisms
Cybersecurity experts, however, caution that introducing a human element can open new avenues for manipulation. For example, social engineering, where attackers trick people into taking harmful actions, remains a major risk. A fraudster could attempt to pressure a recovery contact into approving a request within the 15-minute window.
That said, Google has added several protections to counter this. For example, the contact must choose the correct number from three randomised options, making it harder to fake a legitimate request. Temporary security holds may also trigger if suspicious activity is detected, giving the account owner time to intervene. The mandatory waiting periods between invitations and activations slow down potential large-scale exploitation attempts.
Security specialists recommend selecting contacts carefully and ensuring those individuals understand the verification process. Any unexpected recovery request should be confirmed through another communication channel before approval.
A Broader Anti-Scam Backdrop
Recovery Contacts appears to sit within Google’s wider effort to limit scams and unauthorised access across its services. Alongside the new recovery feature, the company has expanded phishing and spam protections in Google Messages, adding link warnings and QR-based encryption verification. It has also launched “Be Scam Ready,” an interactive game designed to help people recognise fraudulent tactics before falling victim to them.
What Google Says
In announcing the feature, Google Product Manager Claire Forszt and Group Product Manager Sriram Karra said, “It’s a simple, secure way to turn to people you trust when other recovery options aren’t available.” They also emphasised that recovery contacts “will not have access to your account or any of your personal information,” presenting the feature as another step toward “a password-free future” where account access remains reliable even if devices are lost.
The Key Takeaway for Users
The key takeaway from Google’s announcement is that individuals with personal Google Accounts are being encouraged to set up Recovery Contacts in advance to avoid disruption later. Adding at least two trusted people, ideally those easy to reach quickly, can provide an effective safeguard if account access is lost. Users should also ensure their recovery phone numbers and email addresses are up to date and enable passkeys for secure sign-in wherever possible.
For business users, particularly those on Workspace, existing enterprise recovery policies remain the standard route. That said, Recovery Contacts reflects how identity verification is evolving toward a trust-based model. As accounts become increasingly linked to devices and biometrics, social recovery may soon become a common feature across all major digital ecosystems.
What Does This Mean For Your Business?
The introduction of Recovery Contacts highlights how identity management is now expanding beyond devices and credentials to include human trust as part of digital security. By creating a formal mechanism for involving trusted individuals in account recovery, Google is addressing one of the most frustrating weak points in its ecosystem: the difficulty of regaining access when every technical safeguard fails. This may also signal that major platforms are beginning to view social verification as a legitimate part of cybersecurity, not simply an emergency workaround.
For UK businesses, the change could have mixed implications. For example, sole traders and micro-businesses that still depend on personal Google Accounts gain an extra safety net that could prevent costly downtime if an account is locked. Larger organisations using Workspace, on the other hand, will see little change for now, as enterprise-grade recovery remains tied to administrative controls and hardware security keys. However, as account recovery becomes more reliable for individuals, it could also encourage stronger adoption of passkeys and multi-factor authentication across small firms that have previously avoided them for fear of being locked out.
The move will likely put pressure on other technology providers to strengthen their recovery options while maintaining user privacy. Apple’s earlier adoption of a similar approach shows that users expect this kind of fallback, and Google’s rollout makes it effectively mainstream. For cybersecurity professionals, it raises fresh questions about how to balance convenience and human trust without increasing the risk of manipulation. While the layered protections Google has built in should deter most opportunistic attacks, the feature still depends on the judgement and caution of the chosen contact.
In a broader sense, Recovery Contacts shows that security design is becoming more people-centred. The addition of human trust to the authentication process reflects an acknowledgment that no digital system can ever be entirely self-sufficient. For users, it introduces a practical, transparent safeguard that may one day be as familiar as password resets or two-factor codes. For Google, it reinforces its role as a standard-setter in account protection and signals a future where human support becomes a built-in part of online identity recovery rather than an afterthought.
Security Stop-Press: ‘Pixnapping’ Attack Can Steal 2FA Codes From Android Phones
Researchers have discovered a new Android attack called “Pixnapping” that can secretly steal sensitive on-screen data, including two-factor authentication (2FA) codes, private messages, and financial information.
Developed by a team at Carnegie Mellon University, the attack exploits Android APIs and a GPU hardware side channel known as GPU.zip to capture pixels from other apps. In tests, a malicious app stole a 2FA code from Google Authenticator in under 30 seconds without permissions or visible signs.
The flaw affects recent Google and Samsung phones, including the Pixel 6–9 and Galaxy S25, running Android 13 to 16. Research lead Riccardo Paccagnella described it as “a fundamental violation of Android’s security model.” Google has logged the issue as CVE-2025-48561 and issued partial fixes, though researchers say Android remains vulnerable.
Experts advise users and businesses to keep devices updated, avoid untrusted apps, and limit the display of sensitive data until a full patch is released.