Microsoft Copilot Bug Exposes Confidential Emails To AI Tool
A coding error inside Microsoft 365 Copilot briefly allowed the AI tool to read and summarise emails that businesses had explicitly marked as confidential.
A Safeguard That Didn’t Hold
In January, Microsoft detected an issue inside the “Work” tab of Microsoft 365 Copilot Chat. The problem, tracked internally as CW1226324, meant Copilot could process emails stored in users’ Sent Items and Drafts folders, even when those messages carried sensitivity labels designed to block AI access.
Inbox folders appear to have remained protected. The weakness sat in a specific retrieval path affecting Drafts and Sent Items.
Microsoft confirmed the bug was first identified on 21 January 2026. A server-side fix began rolling out in early February and is still being monitored across enterprise tenants.
The company said in a statement:
“We identified and addressed an issue where Microsoft 365 Copilot Chat could return content from emails labelled confidential, authored by a user and stored within their Draft and Sent Items in Outlook desktop.”
It added:
“This did not provide anyone access to information they weren’t already authorised to see. While our access controls and data protection policies remained intact, this behaviour did not meet our intended Copilot experience, which is designed to exclude protected content from Copilot access.”
That distinction matters. Microsoft’s position is that no unauthorised user gained access to restricted data. The issue was about Copilot processing information it was supposed to ignore.
How Did This Happen?
Copilot relies on what’s known as a retrieve-then-generate model. It first pulls relevant content from emails, documents or chats. It then feeds that material into a large language model to produce summaries or answers.
The enforcement point is the retrieval stage. If protected content is fetched at that stage, the AI will use it.
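That retrieval-stage enforcement point can be sketched in a few lines. This is a deliberately simplified illustration, not Microsoft's implementation: the `Email` class, label names and `retrieve` function are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Email:
    folder: str       # e.g. "Inbox", "Drafts", "Sent Items"
    body: str
    sensitivity: str  # e.g. "General", "Confidential"

# Labels whose content must never reach the language model.
BLOCKED_LABELS = {"Confidential", "Highly Confidential"}

def retrieve(emails, query):
    """Retrieval stage of a retrieve-then-generate pipeline.

    Only items that pass the sensitivity check here ever reach the
    model. A bug that skips this check for particular folders lets
    protected content through, regardless of downstream safeguards.
    """
    results = []
    for email in emails:
        if email.sensitivity in BLOCKED_LABELS:
            continue  # enforce the label at retrieval time
        if query.lower() in email.body.lower():
            results.append(email)
    return results
```

The design point the incident illustrates: if the `continue` branch is bypassed for Drafts or Sent Items, nothing later in the pipeline can undo the exposure, because the model has already consumed the content.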
In this case, a code logic error meant sensitivity labels and data loss prevention policies were not correctly enforced for Drafts and Sent Items. Emails marked confidential were picked up and summarised inside Copilot’s Work chat.
That creates obvious concerns. Draft folders often contain unfinalised legal advice, internal assessments or sensitive negotiations. Sent Items frequently hold commercially sensitive exchanges.
Even if summaries stayed within the same user’s workspace, the principle of exclusion had failed.
Why The Timing Is Awkward
Microsoft has been aggressively positioning Microsoft 365 Copilot as a secure enterprise AI assistant. Businesses pay a premium licence fee on top of their Microsoft 365 subscriptions. The selling point is productivity without compromising governance.
This incident seems to undermine that message.
It also comes amid heightened scrutiny of AI tools in regulated environments. The European Parliament recently banned AI tools on some worker devices over cloud data concerns. Regulators are watching closely.
Industry analysts have long warned that the rapid rollout of enterprise AI features increases the likelihood of control gaps and configuration errors. As vendors compete to embed generative AI deeper into core productivity tools, governance frameworks are often forced to catch up. This incident reinforces a wider concern that AI functionality can move faster than internal compliance oversight.
Security researchers have previously highlighted vulnerabilities in retrieval augmented generation systems, including those used by Copilot. The lesson is consistent. If policy enforcement fails at retrieval, downstream safeguards cannot fully compensate.
What This Means For Microsoft And Its Rivals
Copilot sits at the centre of Microsoft’s enterprise AI strategy, so any weakness in its data controls lands hard. Businesses are being asked to trust an assistant that can read across emails, documents and internal chats. That trust is commercial currency.
In Microsoft’s defence, the company moved quickly to contain the issue. The fix was applied server-side, so customers did not need to install patches, and the company says it is contacting affected tenants while monitoring the rollout. From a technical response standpoint, the reaction has been swift.
Microsoft has yet to publish tenant-level figures or detailed forensic logs showing exactly which confidential items were processed during the exposure window. For organisations with regulatory obligations, reassurance alone will not be enough. They will want clear evidence of what was accessed, when and under what controls.
Rivals will also be paying attention. Google Workspace with Gemini, Salesforce’s AI integrations and other embedded assistants rely on similar retrieval architectures. The risk exposed here is not unique to one vendor. It reflects a broader design challenge facing every platform embedding generative AI into live corporate data environments.
What Does This Mean For Your Business?
If your organisation is using Microsoft 365 Copilot, this is a governance story, not a crisis story.
Microsoft insists no unauthorised access took place and there is no evidence of data being exposed outside permitted user boundaries. That matters. Yet the episode highlights something more structural. AI controls can fail quietly inside systems businesses assume are ring-fenced.
Copilot is not a standalone chatbot. It operates across your email, documents and collaboration tools. It reads broadly. It summarises intelligently. It relies on retrieval rules working exactly as designed. When those rules misfire, even briefly, sensitive material can be processed in ways you did not intend.
That is why access decisions matter. Embedding AI into legal, HR, finance or executive workflows is not simply a productivity choice. Draft emails often contain unfiltered strategy, regulatory advice or negotiation positions. Those are precisely the communications organisations most want tightly controlled.
This is also a moment to test assumptions. Sensitivity labels and data loss prevention policies are only effective if they behave as expected under real conditions. Enabling new AI features should trigger validation, not blind trust.
Copilot can deliver genuine efficiency gains. Faster document drafting, quicker retrieval of buried information and less manual searching all translate into time saved. The value is real. Yet tools with that level of visibility into your data estate deserve the same scrutiny you would apply to any system handling commercially sensitive information.
Businesses that combine productivity ambition with disciplined oversight will benefit. Those that treat embedded AI as frictionless and risk-free may find the learning curve steeper than expected.
The Truth About Cyber Insurance
Cyber insurance has grown into a multi-billion-dollar global market, yet when a serious breach occurs, the real story often lies in the small print, the exclusions, and the security controls that should have been in place long before the policy was signed.
Once Just An Add-On
Cyber insurance was once treated as a niche add-on to professional indemnity cover. Today it sits at the centre of boardroom risk discussions. The reason is simple. Cyber incidents are no longer rare. They are routine, costly and increasingly disruptive.
So what exactly is cyber insurance, how large has the market become, and when does it actually pay out?
What Cyber Insurance Really Covers
At its core, cyber insurance is designed to cover two broad categories of loss. First-party losses include incident response, forensic investigation, legal advice, customer notification, system restoration, business interruption and, in some cases, ransom payments. Third-party cover addresses claims brought by customers, partners or regulators following data breaches or operational failures.
The detail, however, varies significantly between policies. Cover is often conditional on specific security controls being in place, such as multi-factor authentication, tested backups and patch management processes. In practice, cyber insurance now operates as a form of security gatekeeper. Insurers increasingly assess a firm’s cyber hygiene before agreeing terms or setting premiums.
How Big Is The Market?
According to Munich Re (Münchener Rückversicherungs-Gesellschaft), one of the world’s largest reinsurance companies, the global cyber insurance market was worth around $15.3 billion in 2024 and is expected to reach $16.3 billion in 2025. Munich Re projects that global premium volume could more than double by 2030, with annual growth exceeding 10 percent.
North America accounts for roughly 69 percent of global premiums, with Europe representing around 21 percent. Growth in Europe has been particularly strong over the past few years as regulatory pressure and ransomware attacks have increased awareness.
In the UK, the Association of British Insurers reported that insurers paid out £197 million in cyber claims to UK businesses in 2024. That figure represents a 230 percent increase on the previous year. Malware and ransomware accounted for 51 percent of all UK cyber claims, up from 32 percent in 2023.
These numbers underline two trends. Claims are rising sharply, and insurers are paying substantial sums.
But what do claims actually look like in practice?
Claims And Payouts
There is no universal “claim approval rate” published across the market, but available industry data offers some insight into how incidents unfold.
Coalition’s 2025 Cyber Claims Report, covering incidents in 2024 across several markets including the UK, found that 60 percent of claims arose from business email compromise and funds transfer fraud. These are not sophisticated zero-day exploits. They are often payment diversion scams targeting finance teams.
The same report noted that 44 percent of policyholders affected by ransomware chose to pay the ransom when it was deemed reasonable and necessary. Meanwhile, 56 percent of reported matters required no out-of-pocket payment from the policyholder, often because insurer-provided incident response support mitigated losses before they escalated.
The key takeaway here is that many cyber claims are not dramatic data centre shutdowns. They are invoice fraud, stolen credentials and misdirected payments.
That said, some cases have tested the boundaries of cover entirely.
When The Small Print Becomes The Story
One of the most widely reported examples of a major cyber insurance coverage dispute followed the 2017 NotPetya attack (a malware attack attributed to the Russian military). Pharmaceutical giant Merck said the malware disrupted around 40,000 machines and ultimately caused losses of approximately $1.4 billion. Several of its insurers sought to rely on traditional “war exclusion” clauses, arguing that the attack was attributable to a state actor and therefore not covered. In 2022, a New Jersey court ruled that the wording of the war exclusion did not apply to the cyber attack in question. The parties later reached a confidential settlement.
The Merck case became a landmark moment in cyber insurance interpretation. It highlighted how state-linked cyber operations can blur the boundary between criminal activity and geopolitical conflict, and exposed the limits of legacy policy wording when applied to modern cyber warfare.
Exclusions
In the wake of disputes linked to NotPetya and similar incidents, Lloyd’s of London issued a market bulletin requiring, from 31 March 2023, that standalone cyber policies include clearly defined exclusions addressing state-backed cyber attacks unless expressly covered. The intention was to reduce ambiguity around systemic cyber risk and clarify how attribution would be handled within policy terms.
Other Examples
Other incidents illustrate the potential scale of insured losses. Colonial Pipeline paid a $4.4 million ransom in 2021 following a ransomware attack, with US authorities later recovering approximately $2.3 million in cryptocurrency. CNA Financial was widely reported to have paid $40 million after a ransomware attack the same year. Norsk Hydro, by contrast, refused to pay ransom after its 2019 attack and later disclosed financial impacts in the region of $60–70 million, supported in part by insurance arrangements.
Taken together, these cases demonstrate both the scale of financial exposure and the growing legal and structural complexity surrounding cyber insurance. Insurance can provide vital financial cushioning when an attack hits, yet it can just as quickly become the subject of dispute, interpretation and courtroom argument when definitions, exclusions or attribution are tested.
Why Cyber Insurance Is Interesting Now
Three structural shifts are fundamentally reshaping the cyber insurance market and changing how organisations think about risk, cover and accountability.
Cyber insurance is increasingly acting as a de facto regulator. Insurers demand evidence of MFA, endpoint protection, network segmentation and backup testing before binding cover. Organisations seeking insurance often upgrade security controls simply to qualify.
There is a clear protection gap. Swiss Re estimates that SMEs account for around 30 percent of global cyber premiums, yet penetration rates among smaller firms remain modest. Many UK SMEs remain uninsured despite rising threat levels.
Systemic risk looms large. Supply chain attacks, cloud provider outages and state-linked campaigns raise questions about correlated losses. Insurers must balance growth with exposure to events that could trigger thousands of simultaneous claims.
What Does This Mean For Your Business?
For UK organisations, cyber insurance is neither a silver bullet nor a formality. It is a financial resilience tool that sits alongside prevention, not in place of it.
Policies can provide rapid access to specialist incident response teams, legal advisers and negotiators at moments of crisis. That support can materially reduce downtime and reputational damage, yet cover is conditional. Failure to implement agreed controls can jeopardise claims.
Businesses should therefore treat cyber insurance procurement as part of a broader risk management strategy. That means reviewing exclusions, understanding sub-limits for ransomware and business interruption, and aligning technical controls with policy requirements.
The market is growing, claims are increasing, and insurers are paying out significant sums. The most important lesson from the past decade is that buying cyber insurance is not the end of the story. It is the point at which scrutiny, obligations and real risk management truly begin.
Hard Drive Makers Sell Out 2026 Output To AI Data Centres
The world’s biggest hard drive manufacturers have already allocated all the units they will produce this year after hyperscale AI and cloud operators secured the bulk of available capacity.
AI Infrastructure Buys Up The Year
Western Digital and Seagate have both confirmed that their nearline hard drive production for calendar year 2026 is effectively spoken for.
Western Digital chief executive Tiang Yew Tan told analysts: “We’re pretty much sold out for calendar ’26. We have firm purchase orders with our top seven customers. And we’ve also established long-term agreements with two of them for calendar year ’27 and one of them for calendar year ’28.”
Seagate CEO Dave Mosley was equally direct: “Our nearline capacity is fully allocated through calendar year 2026, and we expect to begin accepting orders for the first half of calendar year 2027 in the coming months… multiple cloud customers are discussing their demand growth projections for calendar 2028, underscoring that supply assurance remains their highest priority.”
In simple terms, the hyperscalers have moved first and bought ahead.
Nearline drives are the high-capacity workhorses used in data centres for bulk storage. They are not consumer PC drives. They are 30TB-plus, 40TB-class disks that underpin cloud storage, AI training datasets and archival systems.
Why AI Is Driving The Squeeze
The AI boom has created a double demand curve.
Training large models requires vast amounts of storage for datasets, checkpoints and logs. Inference workloads generate new data that also needs to be stored, replicated and backed up. Cloud providers are scaling capacity aggressively.
Technology market research firm Omdia now forecasts total server spend in 2026 at around $590 billion, with datacentre capex exceeding $1 trillion. The top ten cloud providers are expected to account for more than 70 percent of that spend, with AI-optimised servers representing roughly 80 percent of total server investment.
Storage sits at the heart of that build-out.
Western Digital has pivoted heavily towards this segment. Around 89 percent of its revenue now comes from cloud customers, compared with just 5 percent from consumers. This is no longer a PC storage business. It is AI infrastructure plumbing.
Implications For The Wider Market
For hyperscalers, long-term supply agreements bring certainty. For everyone else, the cupboard looks thinner.
Analysts have warned that discretionary buyers, including mid-sized enterprises and traditional server customers, may struggle to secure high-capacity drives at predictable prices. Corporate IT projects that assumed hard drives would provide a cost-effective capacity tier may need to revisit budgets.
There is also a ripple effect. AI demand has already strained DRAM and NAND flash markets. If SSD prices rise, some buyers will pivot back to HDDs for bulk storage, adding further pressure to supply.
Andrew Buss, from global market intelligence and research firm International Data Corporation (IDC), recently noted that AI growth is consuming “large amounts of fast flash-based NVMe SSDs”, pushing up prices and prompting a reconsideration of HDD-based arrays where workloads allow.
The result is an unusual reversal. Hard drives, once seen as legacy technology, are back at the centre of infrastructure planning.
Technology Race Intensifies
At the same time, the technical arms race continues.
Western Digital is pushing towards 40TB and 44TB drives this year and has outlined a roadmap to 100TB by 2029, supported by new 14-platter designs. Seagate is advancing its HAMR technology and has publicly targeted 100TB drives by the end of the decade.
These capacity gains matter. Hyperscalers want more storage per rack, per watt and per square metre. Increasing areal density and platter counts is now a strategic priority, not an incremental upgrade.
The challenge is manufacturing capacity. HDD production cannot be scaled overnight. Tooling, media, heads and assembly lines require long lead times. When hyperscalers lock in output years in advance, smaller buyers sit further back in the queue.
What Does This Mean For Your Business?
For Western Digital and Seagate, the sell-out provides revenue visibility rare in the storage sector. Multi-year agreements reduce demand uncertainty and underpin capital investment plans.
For AI infrastructure players, it reinforces concentration. The largest cloud providers are able to secure supply at scale, strengthening their competitive position.
For enterprises and SMEs, it raises practical questions. If you are planning a server refresh or building on-premise storage, availability and pricing assumptions may need adjustment.
There is also a structural concern here. When the majority of global HDD output is effectively pre-booked by a small number of hyperscalers, the market becomes less flexible. Innovation may skew even further towards the needs of AI data centres rather than general-purpose enterprise workloads.
Critics argue that the AI infrastructure boom is distorting supply chains across silicon, memory and now spinning disk. Supporters counter that it is driving investment, accelerating innovation and revitalising a technology many had written off.
What is clear here is that the humble hard drive, long overshadowed by flash, has become a strategic asset again. In an AI-first world, bulk storage is no longer a commodity. It is strategic leverage.
Tech Firms Face 48-Hour Deadline To Remove Abusive Images
The UK government is moving to force tech platforms to remove non-consensual intimate images within 48 hours of being flagged or face fines of up to 10 percent of global turnover.
The 48-Hour Rule
On 19 February 2026, ministers confirmed an amendment to the Crime and Policing Bill that will place a strict 48-hour takedown duty on platforms hosting intimate images shared without consent.
Under the proposed law, any non-consensual intimate image reported to a platform must be removed within two days. Failure to comply could trigger fines of up to 10 percent of qualifying worldwide revenue or, in extreme cases, service blocking in the UK.
The government is also clear that victims should not have to chase individual platforms. The intention is that an image will only need to be reported once, with removal applied across multiple services and future uploads automatically blocked.
Prime Minister Sir Keir Starmer said: “The online world is the frontline of the 21st century battle against violence against women and girls. That’s why my government is taking urgent action against chatbots and ‘nudification’ tools.
“Today we are going further, putting companies on notice so that any non-consensual image is taken down in under 48 hours.”
Why The Government Is Escalating This
Ministers have highlighted intimate image abuse as part of a wider violence against women and girls strategy, which the government has labelled a national emergency.
Technology Secretary Liz Kendall said: “The days of tech firms having a free pass are over. Because of the action we are taking, platforms must now find and remove intimate images shared without consent within a maximum of 48 hours.
“No woman should have to chase platform after platform, waiting days for an image to come down. Under this government, you report once and you’re protected everywhere.”
The government has also signalled that non-consensual intimate images will become a “priority offence” under the Online Safety Act. Ofcom is expected to treat such material with the same severity as child sexual abuse content and terrorist material, including exploring digital marking techniques so that flagged images are automatically detected and blocked on re-upload.
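Re-upload blocking of this kind is typically built on a shared registry of fingerprints for known flagged images. The sketch below is a simplified version of that idea: production systems use perceptual hashes (such as PhotoDNA-style fingerprints) that survive resizing and re-encoding, whereas the plain SHA-256 digest used here for brevity only catches byte-identical copies. The `TakedownRegistry` class is invented for the example.

```python
import hashlib

class TakedownRegistry:
    """Minimal sketch of hash-based re-upload blocking.

    Real deployments use perceptual hashing so edited or re-encoded
    copies still match; SHA-256 is used here only to keep the
    example self-contained.
    """
    def __init__(self):
        self._hashes = set()

    def flag(self, image_bytes):
        # Record the fingerprint of a confirmed abusive image.
        self._hashes.add(hashlib.sha256(image_bytes).hexdigest())

    def is_blocked(self, image_bytes):
        # Check an incoming upload against all known fingerprints.
        return hashlib.sha256(image_bytes).hexdigest() in self._hashes
```

A matched fingerprint lets a platform reject the upload before publication, which is what makes a report-once, blocked-everywhere scheme feasible across cooperating services.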
Internet service providers may also receive guidance on blocking access to rogue websites that fall outside the reach of mainstream regulation but host abusive content.
What This Means For Platforms
For large social media firms, messaging services and content hosts, the message from government is that platforms must act fast.
The 48-hour window will require robust detection systems, clear reporting mechanisms and sufficient human moderation capacity to assess complex cases. Automated tools may help, particularly where digital fingerprints are applied to known abusive material, yet borderline cases will still require judgement.
The financial stakes are high. A 10 percent global revenue fine is significant for multinational platforms, and the threat of service blocking in the UK raises further commercial risk.
There are also operational challenges to consider. Images may be edited, cropped or slightly altered to evade automated detection. Smaller platforms may lack the infrastructure of larger tech companies. Critics argue that strict timelines could lead to over-removal, particularly where context is disputed.
Civil liberties groups have historically warned that rapid takedown mandates risk curbing legitimate expression if not carefully implemented. Platforms will need clear guidance from Ofcom on evidential thresholds and appeals processes.
What Does This Mean For Your Business?
The impact of this measure extends beyond consumer social media. Any UK business operating user-generated content, community forums, file sharing or messaging functionality will need to understand its exposure. If intimate content is hosted or shared on a corporate platform, the 48-hour rule will apply once flagged.
Even organisations that don’t host content directly need to pay attention. Investors, customers and partners now expect clear and proactive safeguards against online abuse, and there is far less tolerance for getting this wrong.
This law is also designed to reinforce a broader compliance trend. The Online Safety Act already imposes duties of care on platforms, and this amendment tightens expectations around response time and cross-platform coordination.
For SMEs building apps or digital services, moderation strategy can no longer be an afterthought. Clear reporting channels, defined internal processes and documented escalation routes will be essential.
This legislation marks a significant escalation in how the UK treats online intimate image abuse. It shifts responsibility firmly onto platforms and signals that enforcement will be measured not only by policy statements, but by speed and action.
Company Check: Microsoft’s Glass Storage and the Future of Long-Term Data
Microsoft has published peer-reviewed research demonstrating that data can be written into ordinary borosilicate glass and preserved for more than 10,000 years, positioning its ‘Project Silica’ work as a potential long-term archival storage platform for the cloud era.
The Challenge
This development addresses a persistent challenge for hyperscale cloud providers and large enterprises: how to store growing volumes of data reliably, economically and sustainably for decades.
Why Long-Term Storage Is Becoming a Strategic Issue
Global data volumes are growing at an exponential rate. Much of that data does not need high-performance storage. It needs durable, low-cost archival storage that can be retrieved if required, often for regulatory, legal or historical reasons.
Traditional archival media have limits. Magnetic tape, still widely used for cold storage, degrades over time. Hard disk drives and solid-state systems are not designed for century-scale retention. All require periodic migration to new media generations. That migration cycle consumes energy, equipment, labour and budget.
Microsoft’s Project Silica is designed to remove that recurring migration requirement. The central proposition is simple: store data once, in a chemically and thermally stable medium, and leave it in situ for its entire retention life.
How The Technology Works
Project Silica uses femtosecond lasers to write data inside glass. The laser modifies the optical properties of microscopic regions within the material, creating three-dimensional data structures known as voxels. These voxels encode information in multiple layers within a 2 mm thick glass platter.
In its latest Nature publication, the Microsoft Research team reports:
– A data density of 1.59 Gbit per cubic millimetre
– 301 data layers within a 120 mm square glass piece
– A usable capacity of approximately 4.8 TB per platter
– Write throughput of 25.6 Mbit per second per beam
– Energy efficiency of around 10 nJ per bit
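Those figures hang together arithmetically. As a rough sanity check, assuming the platter were a full 120 mm × 120 mm × 2 mm block (so ignoring edge margins and error-correction overhead) and taking the per-beam write rate at face value:

```python
# Raw capacity implied by the reported voxel density, if the whole
# 120 mm x 120 mm x 2 mm platter volume were usable. In practice,
# margins and error-correction overhead reduce this to ~4.8 TB.
density_gbit_per_mm3 = 1.59
volume_mm3 = 120 * 120 * 2                            # 28,800 mm^3
raw_tb = density_gbit_per_mm3 * volume_mm3 / 8 / 1000  # Gbit -> TB
print(round(raw_tb, 1))                                # ~5.7 TB raw

# Time to fill one platter at the stated per-beam write rate,
# which is why parallel beams matter for practical throughput.
usable_bits = 4.8e12 * 8                               # 4.8 TB in bits
days_per_beam = usable_bits / 25.6e6 / 86400
print(round(days_per_beam, 1))                         # ~17.4 days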
Crucially, the team has extended the technology beyond high-purity fused silica to borosilicate glass, the same class of material used in cookware and industrial glazing. This change addresses two of the main barriers to commercialisation: cost and material availability.
The research also demonstrates accelerated ageing tests suggesting data lifetimes could exceed 10,000 years at room temperature.
Why Borosilicate Changes the Equation
Earlier glass storage demonstrations relied on specialised fused silica, which is expensive and available from limited suppliers. Borosilicate is far more common and significantly cheaper.
Moving to borosilicate reduces media cost and simplifies manufacturing. It also allows Microsoft to streamline the read hardware. The latest phase-voxel method requires only a single camera in the reader, rather than multiple polarisation-sensitive cameras.
From a systems perspective, that reduction in mechanical and optical complexity matters. Archival infrastructure must be robust, scalable and economically viable at datacentre scale. The shift to borosilicate makes that discussion more realistic.
Security and Air Gap by Design
One notable feature of the Silica architecture is its inherent immutability (data can’t be altered, overwritten or deleted without leaving evidence). Reading the glass requires only ordinary light microscopy, which does not deliver enough energy to modify the material. Writing requires high-energy femtosecond laser pulses.
As a result, the medium cannot be overwritten accidentally during read operations. Microsoft describes this as “true air gap by design”. In practical terms, it offers strong protection against ransomware and unauthorised modification of archived data.
For organisations with strict evidential retention requirements, that immutability is significant.
Performance Is Not the Primary Objective
Silica is not competing with SSDs, HDDs or even active tape libraries for performance workloads. It is designed for deep archival storage.
The write throughput, while technically impressive, remains modest compared to high-performance systems. Read operations rely on wide-field microscopy and machine-learning-based decoding to reconstruct data from voxel patterns. Error correction is handled using forward error correction and low-density parity-check codes.
The system has been engineered end-to-end, from writing and reading hardware to machine-learning decoding models. That full-stack approach distinguishes it from earlier academic demonstrations that focused only on materials science.
This is really a storage system design project, not simply a physics experiment.
Sustainability and Cloud Economics
Microsoft is also keen to frame Project Silica within a sustainability context. Magnetic media requires periodic data refresh cycles. Each refresh involves powering up systems, copying data, validating integrity and decommissioning ageing media.
A medium that can remain stable for millennia reduces the need for repeated migrations. That lowers energy use, operational complexity and embodied carbon associated with replacement hardware.
For hyperscale cloud providers operating at massive archival volumes, even incremental reductions in refresh cycles translate into meaningful cost and energy savings.
The broader strategic implication is that long-term archival storage may become more media-centric and less migration-dependent over time.
Where This Sits in Microsoft’s Strategy
Project Silica sits within Microsoft Research and has been developed alongside Azure storage architecture research. It has already been used in proofs of concept, including archival storage of Warner Bros.’ Superman film and collaborations with preservation initiatives.
Microsoft describes the research phase as complete, and the company is now evaluating how the learnings translate into production systems.
That distinction matters. This is not yet a commercial Azure tier. It is a demonstrated platform technology that has met key storage system metrics in peer-reviewed publication.
Commercial deployment will require further engineering around robotics, media handling, library design and operational integration within datacentres.
Is This a Near-Term Disruption?
Glass storage will not replace existing archival systems overnight. Tape remains cost-effective and deeply embedded in enterprise infrastructure.
However, the technical barriers that once made glass storage largely theoretical have been reduced. The extension to borosilicate glass, simplified reading systems and validated longevity testing move the concept closer to practical viability.
If Microsoft can industrialise the robotics and system-level integration, Silica could become a credible long-term archival tier within hyperscale cloud platforms.
What Does This Mean For Your Business?
For most organisations, Microsoft’s glass storage technology is certainly not something you will deploy next year.
The more important development here is not the material itself, but what it reflects. Long-term data retention is no longer just an IT housekeeping task. It is becoming a strategic infrastructure issue. Regulatory obligations are extending retention periods. Litigation exposure is expanding. Sustainability commitments are tightening. Meanwhile, data volumes continue to grow.
If your archival strategy relies entirely on periodic media refresh cycles, manual integrity checks and legacy tape rotations, it is worth asking whether that model will remain economically and operationally sustainable over the next ten to twenty years.
Microsoft’s research indicates that the industry is now actively exploring media that reduce migration cycles, lower long-term energy use and improve immutability by design. Whether Silica becomes commercially mainstream is almost secondary. The strategic lesson is that archival architecture is evolving.
For your business, the practical implications are essentially threefold:
1. Treat long-term data retention as part of your infrastructure strategy, not just a compliance checkbox.
2. Understand the full lifecycle cost of your archival estate, including refresh, migration and energy overheads.
3. Recognise that immutability and physical air gap characteristics are becoming increasingly relevant in a world shaped by ransomware and supply chain attacks.
Glass storage may or may not become the dominant archival medium. What is clear is that long-term data stewardship is now a strategic capability. Organisations that plan for that reality early will have greater flexibility, lower long-term risk and a clearer sustainability narrative than those that continue to treat archive storage as static background plumbing.
Security Stop-Press: Employee Monitoring Tools Hijacked For Ransomware
Ransomware gangs are abusing legitimate employee monitoring software to break into business networks.
Security firm Huntress uncovered two recent incidents in which attackers used Net Monitor for Employees alongside remote management platform SimpleHelp to gain persistent access. Instead of custom malware, they relied on commercial tools to blend in with normal IT activity.
Net Monitor includes remote shell and command execution features. Huntress said attackers used it for “hands-on-keyboard reconnaissance” before attempting to deploy Crazy ransomware. In one case, access began through a compromised vendor SSL VPN account, with the monitoring agent disguised as a legitimate Windows service.
The attackers also configured SimpleHelp to monitor cryptocurrency-related keywords, indicating financial motives beyond ransomware alone. Huntress said the shared infrastructure and tactics “strongly suggest a single threat actor or group behind this activity.”
Businesses should tighten remote access controls, enforce multi-factor authentication and closely audit any monitoring or RMM software in use. These intrusions relied on stolen credentials and the misuse of trusted tools, not sophisticated zero-day exploits.