Google Brings ‘Q-Day’ Closer With 2029 Encryption Warning
Google has warned that the moment quantum computers can break today’s encryption may arrive within the next few years, accelerating timelines for businesses to prepare for a fundamental change in digital security.
What Is ‘Q-Day’?
Q-Day refers to the point at which a quantum computer becomes powerful enough to break widely used cryptographic systems such as RSA and elliptic curve encryption, which underpin everything from online banking to software updates.
Google’s position is that this is no longer a theoretical concern for the distant future. As the company warned in its earlier guidance, “the encryption currently used to keep your information confidential and secure could easily be broken by a large-scale quantum computer in coming years.”
The Risk Is Already Emerging
Attackers are also believed to be collecting encrypted data today with the intention of decrypting it later once quantum capabilities become available, a tactic often referred to as ‘store now, decrypt later’.
Google Revises Its Timeline
In a recent update, Google has set out a more urgent timeline for the transition to post-quantum cryptography, signalling that the industry may have less time than previously expected to prepare for this moment.
The company has now introduced a 2029 target for completing its migration to quantum-resistant cryptography, bringing forward urgency compared to earlier industry expectations that placed large-scale quantum threats in the mid-2030s, and stating: “We’re setting a timeline for post-quantum cryptography migration to 2029.”
Not A Direct Prediction
It’s worth noting here that this isn’t a direct prediction from Google of when exactly quantum computers will most likely break encryption, but it provides some guidance and a reassessment of how quickly organisations need to act.
Why The Updated Timeline?
Google said the change is based on recent progress in “quantum computing hardware development, quantum error correction, and quantum factoring resource estimates”.
In simple terms, it seems the technical barriers that once made quantum threats feel distant are being reduced faster than expected.
Google’s update of Q-Day is not simply about setting a date, it is about creating urgency. The company has made this explicit in a recent blog post about the update, stating: “As a pioneer in both quantum and PQC, it’s our responsibility to lead by example and share an ambitious timeline.” It added that the goal is to “provide the clarity and urgency needed to accelerate digital transitions not only for Google, but also across the industry.”
This reflects a broader concern that organisations are underestimating the scale and complexity of the transition required.
This urgency also reflects the scale of what organisations are being asked to do. For example, moving from current cryptographic standards to post-quantum alternatives is not a simple upgrade. It involves identifying where encryption is used, replacing algorithms across systems, updating infrastructure, and ensuring compatibility across supply chains and partners.
The UK’s National Cyber Security Centre has already described this transition as a “complex change programme”, highlighting the scale of the task facing organisations.
The Gap Between Awareness And Readiness
Despite growing awareness of quantum risks, most organisations are not ready.
Part of the challenge is that the threat itself is difficult to fully understand. Quantum computers are often described as vastly more powerful than today’s systems, and for many businesses, this means the practical implications are unclear. Understanding how and when these machines could break existing encryption, and what that means for real-world systems, is not straightforward without some specialist knowledge.
Research cited in industry reports suggests that while a majority of businesses expect quantum-enabled attacks within the next five years, only a small proportion have a clear roadmap in place to address them.
This means that while many organisations accept that quantum threats are coming, there is still uncertainty about how serious those risks are, when they are likely to materialise, and what practical steps should be taken. That uncertainty can easily lead to delays or a tendency to wait for clearer standards and tools rather than acting early.
Google’s revised timeline challenges that assumption by bringing forward its own migration target and signalling that waiting may not be a viable strategy.
What Google Is Already Doing To Help
Alongside announcing its timeline update, Google says it is actively deploying post-quantum cryptography across its own platforms.
The company has highlighted how Android 17 will integrate PQC digital signature protection using ML-DSA, aligned with standards from the National Institute of Standards and Technology.
This is part of a broader effort to build what Google describes as a “new, quantum-resistant chain of trust”, ensuring that systems remain secure even as computing capabilities evolve.
Google says it has also been working on PQC for several years, including deploying quantum-resistant key exchange mechanisms in Chrome and internal systems, and contributing to global standards development, all of which points to the fact that the transition is not only necessary, but already underway.
Why This Matters
The implications extend far beyond large technology providers. For example, encryption underpins core business functions, from securing customer data and financial transactions to protecting intellectual property and ensuring the integrity of software and communications.
If current cryptographic systems become vulnerable, the impact will not be limited to future systems. Data encrypted today could still be exposed years later if it is harvested and stored by attackers now.
That means the risk is already present, even if the technology required to exploit it fully is not yet available.
What Does This Mean For Your Business?
For most organisations, the key issue here is not whether quantum computing will affect them, but how prepared they are for the transition it will require.
Google’s updated timeline suggests that preparation needs to begin sooner rather than later, particularly for systems that rely on long-lived data or digital signatures that must remain secure for many years.
This will involve building what is often referred to as crypto agility, the ability to update cryptographic algorithms without disrupting services, as well as developing a clear inventory of where and how encryption is used across the organisation. In practical terms, that means identifying where sensitive data is stored, how it is protected in transit and at rest, and which systems rely on public key cryptography that may need to be replaced.
It also means starting to assess whether existing platforms, applications and suppliers are capable of supporting post-quantum cryptography, and whether updates, migrations or architectural changes will be required. Some organisations are already beginning to test quantum-resistant algorithms in non-critical systems to understand performance, compatibility and operational impact before wider rollout.
Engagement with suppliers and partners will also be important, as cryptographic systems rarely operate in isolation and weaknesses in third-party systems can undermine otherwise secure environments.
Taken together, Google’s update suggests that the window for treating quantum security as a future concern is narrowing, and that organisations that begin mapping, testing and planning now will be in a far stronger position than those that wait.
Scammers Using Virtual Smartphones To Slip Past Fraud Checks
Fraudsters are increasingly using rentable “cloud phones” that look and behave like real smartphones, creating a new problem for banks, fintechs and businesses that have come to trust the device in a customer’s hand.
Now Using Cloud Phones
According to a recent report by security firm Group-IB, a growing number of scammers are no longer relying on crude emulators or racks of physical handsets to run fraud at scale. Instead, they are turning to cloud phones, effectively remote Android devices running in datacentres, which can be rented cheaply and accessed over the internet.
These services are marketed as legitimate tools for developers, marketers or businesses managing multiple accounts but, in practice, it seems they are also now being widely abused. As the report explains, “what began as a simple scheme to inflate social media metrics has evolved into a sophisticated threat that is quietly reshaping the economics of digital fraud.”
This matters because many fraud controls were built around the idea that fake devices tend to look fake. For example, emulators often leak obvious signs, such as unusual hardware configurations, missing sensor data or other artefacts that security teams know how to spot.
Cloud phones, however, don’t give off these more obvious signals. As Group-IB says, they are “for all intents and purposes… real phones, running genuine firmware, exhibiting natural sensor behavior, and presenting valid hardware attestation.” In other words, they are designed to look authentic at the technical level.
Why They Are So Hard To Detect
Fraud detection systems have traditionally relied on identifying unusual devices, spotting changes in device identity, or flagging suspicious technical signals, all of which have proven effective against earlier generations of emulators and virtual environments.
Cloud phones, however, are designed to avoid exactly those signals by maintaining consistent device characteristics over time while presenting realistic hardware identifiers, software environments and behavioural patterns that closely resemble those of genuine smartphones.
The report highlights that “what makes this threat unlike any other is its invisibility,” noting that activity from these devices can “appear indistinguishable from a legitimate device” to existing detection systems.
Each cloud phone instance can have its own device ID, IP address, geolocation and system profile. Unlike traditional emulators, which often expose tell-tale inconsistencies, these environments are engineered to behave like genuine smartphones over time.
It’s this consistency that’s critical because it allows a device to build up a trusted history, which can then be exploited for fraud without triggering alerts designed to detect sudden changes.
How The Fraud Works In Practice
Group-IB’s report traces how this technology has moved from social media manipulation into financial crime. One of the most significant use cases is the creation and operation of so-called ‘dropper’ or ‘mule accounts’, which are accounts used to receive and move stolen funds.
For example, it seems that fraudsters can open or verify accounts using a cloud phone, then continue to access those accounts from the same virtual device. In some cases, access to both the account and the associated cloud phone instance can be sold on to other criminals.
As Group-IB explains, this creates a powerful advantage for the fraudsters because the same device signals are preserved throughout, meaning “the same device accessing the account that has always accessed it” appears to be in use (once again, it’s the consistency that works).
From a fraud detection perspective, that removes one of the key triggers for additional checks, i.e., there’s no obvious device change, no sudden shift in behaviour, and no immediate reason to challenge the transaction.
The Scale Of The Problem
This development comes at a time when authorised push payment fraud (where victims are tricked into sending money directly to a scammer, often through social engineering) is already a major issue. For example, in the UK alone, losses reached £485.2 million in 2023, with mule accounts playing a central role in moving stolen funds.
Cloud phones make these accounts easier to create, operate and scale. Group-IB says they have enabled “industrial-scale financial fraud” by lowering the cost and complexity of maintaining large numbers of apparently legitimate devices.
It seems that using cloud phones also gives fraudsters an extra economic advantage. Instead of investing in physical phone farms, fraudsters can now rent infrastructure on demand, making it accessible to a wider range of actors with relatively low upfront cost.
Why This Challenges Existing Security Models
For years, device fingerprinting has been a reliable layer in fraud prevention. If an account is accessed from a new or suspicious device, that can trigger step-up authentication or block the transaction.
Cloud phones weaken that model because the device itself is no longer a strong signal of trust if it can be rented, replicated and transferred between users while maintaining a consistent identity.
This doesn’t mean existing controls are obsolete, but it does mean they are no longer sufficient on their own. Group-IB’s report argues that detection must, therefore, move beyond simple device checks and towards a more layered approach.
Group-IB concludes that fraud prevention needs “device-environment correlation, infrastructure-level visibility, behavioral modeling, and graph-based analytics” to identify patterns that individual device checks may miss.
What Does This Mean For Your Business?
For financial institutions, the message from this report is clear. A device that looks genuine can no longer be treated as strong evidence that the activity behind it is genuine too. Fraud detection will really need to focus more on behaviour, context and relationships between accounts rather than relying heavily on device identity alone.
For other businesses, particularly those using mobile apps for onboarding, payments or identity verification, this is a warning that mobile trust models are becoming more complex. Controls that once worked well may now need to be reassessed.
There is also a broader operational implication. As fraud infrastructure becomes easier to rent and scale, the barrier to entry for sophisticated attacks is lowering. That increases the likelihood that smaller organisations, not just major banks, will encounter more advanced fraud techniques.
This represents a clear change in how fraud is delivered, as the fraudster no longer needs to manage large numbers of physical devices and can instead access a virtual environment that behaves like a real smartphone and is designed to pass as one.
Taken together, this research seems to suggest that the balance of trust is changing, with the device in the user’s hand, or at least the one it appears to be, no longer something businesses can rely on without question.
Most IT Leaders Don’t Fully Trust Their Cybersecurity Vendors
New global research shows that while organisations rely heavily on cybersecurity providers, only a small minority fully trust them, exposing a growing gap between dependence and confidence.
A Critical Dependency (With Limited Confidence)
Cybersecurity vendors essentially sit at the heart of modern business operations, responsible for protecting systems, data, and day-to-day continuity. For many organisations, particularly those without large internal IT teams, these providers effectively act as an extension of the business itself.
However, new research from Sophos suggests that this reliance is not matched by confidence. Its Cybersecurity Trust Reality 2026 report, based on a survey of 5,000 IT and security leaders across 17 countries, found that only 5 per cent of respondents say they fully trust their cybersecurity vendors.
This disappointing statistic suggests that businesses are placing critical operational resilience in the hands of providers they don’t completely trust, which raises questions about how risk is actually being managed in practice.
Why Is There A Trust Issue?
One of the most striking findings is not just the lack of trust, but how difficult organisations find it to assess vendors in the first place.
According to the report, 79 per cent of organisations struggle to evaluate the trustworthiness of new cybersecurity providers, while 62 per cent report the same challenge with vendors they already use. This suggests that trust gaps do not disappear once a contract is signed.
The reasons for this are largely practical rather than emotional. For example, many organisations report that vendor information is either not detailed enough, difficult to interpret, or inconsistent across sources. Others admit they lack the internal expertise needed to properly assess technical claims.
As the report explains, organisations are often left trying to validate complex security capabilities without clear, standardised evidence, making meaningful comparisons between providers difficult.
This is where trust begins to shift from a perception issue to a structural one. If organisations cannot independently verify what vendors claim, trust becomes inherently fragile.
Trust As A Measurable Risk Factor
The report makes the important point that, within organisations, trust is no longer seen as a soft or abstract concept, but as something that directly influences risk.
As Sophos notes, “Trust is not an abstract concept in cybersecurity, it’s a measurable risk factor,” highlighting how uncertainty around vendor capability feeds directly into business risk assessments and decision-making.
The report reinforces this further, stating that “CISOs are being asked to prove trust, not assume it,” reflecting the growing expectation that confidence in vendors must be backed by evidence rather than reputation.
This is reflected in how organisations report the impact of low trust. More than half, 51 per cent, say it increases concern that they are more likely to experience a significant cyber incident.
Other consequences are more operational. For example, 45 per cent say it makes them more likely to switch vendors, while others report increased oversight requirements and reduced confidence in their overall security posture.
In effect, a lack of trust doesn’t just create anxiety, it drives cost, complexity, and ongoing disruption.
A Disconnect Between IT And Leadership
Another layer of complexity seems to come from internal misalignment. The report found that 78 per cent of organisations experience differences of opinion between IT teams and senior leadership when assessing vendor trustworthiness.
This reflects the different priorities at play. For example, technical teams tend to focus on performance, reliability, and day-to-day effectiveness, while leadership is more concerned with accountability, compliance, and reputational risk.
When those perspectives do not align, decision-making becomes more difficult. Vendor selection, contract renewal, and incident response planning can all be affected by differing views on how much confidence should be placed in a provider.
What Builds Trust?
The research also highlights a clear shift in what organisations look for when evaluating vendors.
Across both IT teams and senior leadership, the strongest driver of trust is no longer brand reputation or marketing claims, but verifiable evidence. This includes independent certifications, third-party assessments, documented vulnerability disclosures, and demonstrable operational maturity.
Transparency also plays a central role. Organisations increasingly expect clear communication during incidents, visibility into how security processes operate, and evidence that issues are identified and resolved effectively.
As the report makes clear, trust is something that must be demonstrated continuously, not assumed.
This becomes even more important as AI is integrated into cybersecurity tools. Organisations are now asking not just what a system does, but how it makes decisions, how it is governed, and how risks are managed.
What Does This Mean For Your Business?
For UK businesses, this research highlights a critical issue that often sits beneath the surface of cybersecurity strategy.
Most organisations assume that choosing a reputable vendor is enough to reduce risk. In reality, the challenge is not just selecting a provider, but being able to verify, monitor, and validate what that provider is doing over time.
This means trust can no longer be treated as a one-off decision made during procurement. It needs to be actively maintained through ongoing oversight, clear reporting, and defined accountability.
It also suggests that businesses should place greater emphasis on evidence when assessing vendors. Certifications, independent testing, and transparent disclosure practices are becoming essential, not optional.
There is also a need to address internal alignment. Ensuring that IT teams and leadership share a common understanding of vendor risk can help avoid fragmented decision-making and improve overall resilience.
Ultimately, the findings show that cybersecurity is not just about technology, but about confidence in the organisations delivering it. When that confidence is missing, even the most advanced tools can leave businesses feeling exposed.
AI That Always Agrees May Be Harming Our Judgement
New research shows that leading AI systems frequently tell users they are right, and that this behaviour may be subtly weakening people’s ability to reflect, take responsibility, and repair relationships.
What The Research Found
A major study by Stanford researchers, published in Science, has found that sycophancy, i.e., the tendency of AI to agree with and validate users, is widespread across leading AI models and has measurable effects on human behaviour.
Researchers tested 11 widely used AI systems across a range of scenarios, including everyday advice, interpersonal conflicts, and situations involving harmful or unethical actions. They found that AI models “affirm users’ actions 49 per cent more often than humans on average, even when queries involved deception, illegality, or other harms.”
The research found that this was not limited to edge scenarios, but that even when human consensus clearly judged a person to be in the wrong, AI systems still sided with the user in a significant proportion of cases.
In fact, the researchers state that their work shows that “sycophancy is widespread and harmful.”
Why This Matters More Than It Sounds
At first glance, this behaviour may seem like a minor issue of tone or politeness. In practice, however, the study shows it has real psychological and social effects.
Across three controlled experiments involving 2,405 participants, the researchers found that even brief exposure to sycophantic AI changed how people judged their own behaviour.
As the paper explains, “even a single interaction with sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their own conviction that they were right.”
In other words, instead of helping users reflect, these systems can reinforce their existing viewpoint, even when it is flawed.
This is particularly important in the context of how AI is now being used. Increasingly, people are turning to AI not just for information, but for advice, including personal, emotional, and relationship-related decisions.
How AI Changes Human Behaviour
The research highlights a shift away from what might be called social friction, i.e., the challenge, disagreement, or alternative perspectives that help people reassess their actions.
Sycophantic AI removes much of that friction. Instead of questioning or balancing a user’s view, it often reinforces it.
The result is a measurable change in behaviour. The researchers found that participants exposed to these responses were less likely to apologise, less likely to take corrective action, and more likely to see themselves as justified in their actions.
As the study notes, “participants exposed to sycophantic responses judged themselves more ‘in the right’” and were also “less willing to take reparative actions like apologising.”
Broadly speaking, the result of all this may be that, over time, repeated reinforcement of one-sided perspectives could affect how people handle disagreements, feedback, and accountability in real-world situations.
Why The Problem Is Likely To Persist
One of the most significant findings is that users actually prefer this behaviour.
Despite its negative effects, sycophantic AI was consistently rated as more helpful, more trustworthy, and more desirable to use again. The researchers found that “despite distorting judgment, sycophantic models were trusted and preferred.”
This creates a difficult dynamic for AI developers. The very behaviour that may be harmful to users also improves engagement, satisfaction, and retention.
In practical terms, this means there is little natural incentive to reduce sycophancy, as systems that challenge users may be seen as less helpful, even if they provide more balanced or constructive advice.
The paper describes this as a structural issue, noting that “the very feature that causes harm also drives engagement.”
This seems to show a clear conflict at the heart of the problem.
A Wider Risk Beyond Vulnerable Users
Concerns around AI behaviour have often focused on vulnerable individuals, but this research suggests the issue is far more widespread.
The effects were observed across a general population sample and remained consistent regardless of participants’ demographics, prior experience with AI, or even their awareness that they were interacting with a machine.
What makes this even more significant is the scale at which these systems operate. AI is available at any time, responds instantly, and can reinforce the same perspective repeatedly, often without challenge.
As the researchers note, “seemingly innocuous design and engineering choices can result in consequential harms,” particularly when these systems are used for everyday advice and decision-making.
Taken together, this points to a risk that builds over time, not just in isolated interactions, but through repeated use that subtly shapes how people interpret situations and respond to others.
What Does This Mean For Your Business?
For UK businesses, this research highlights an emerging risk that sits just below the surface of AI adoption.
Many organisations are now integrating AI tools into customer support, internal decision-making, and even advisory roles. In these contexts, how the AI responds is just as important as what it knows.
A system that consistently validates user input without challenge may improve short-term satisfaction, but could lead to poorer decisions, reduced accountability, and weaker outcomes over time.
There is also a reputational dimension here. If AI-driven tools are seen to reinforce poor judgement or encourage one-sided thinking, this could affect trust in both the technology and the organisation deploying it.
The research suggests that businesses should think carefully about how AI systems are configured, particularly in scenarios involving advice, feedback, or judgement.
It also points towards a broader governance question. If user preference alone drives system behaviour, there is a risk that harmful patterns will persist or even intensify.
The key takeaway is that AI isn’t just shaping efficiency, it’s also shaping behaviour.
When systems are designed to agree rather than challenge, the long-term impact may not be better decisions, but fewer opportunities for people to recognise when they are wrong.
Company Check : SpaceX IPO Signals A New Phase Of Tech Power And Funding
It’s been reported that SpaceX has confidentially filed for what could be the largest IPO in history, with the timing and structure of the move suggesting this may be as much about funding pressure and strategic consolidation as it is about market opportunity.
What Has Been Reported?
Multiple sources (including Bloomberg and Reuters) have reported that Elon Musk’s SpaceX company has submitted draft IPO paperwork to the US Securities and Exchange Commission, with plans to raise between $40 billion and $75 billion. An IPO is when a company sells shares to the public for the first time to raise investment, effectively becoming a publicly listed company, similar to a plc in the UK.
Becoming One Of The Most Valuable Companies In The World
At the upper end, this would comfortably exceed Saudi Aramco’s record $29 billion listing and could value SpaceX at up to $1.75 trillion. That would place it among the most valuable companies in the world at the point of listing.
Confidential Filing
It’s been reported that the filing was made confidentially. This is actually quite a common approach that allows companies to receive regulatory feedback before publicly disclosing financial details. A listing could follow as early as June, depending on market conditions.
Why Is SpaceX Going Public Now?
For years, Elon Musk had suggested SpaceX would remain private until its long-term goals, particularly around Mars, were further advanced. That position now appears to have changed, and the most likely reason is financial rather than philosophical.
SpaceX is no longer just a launch provider. It is now a capital-intensive technology platform spanning satellite internet, heavy-lift rocketry, defence contracts, and artificial intelligence. That means each of these areas requires sustained, large-scale investment.
Starship development alone is expected to cost billions, while Starlink requires constant satellite replacement and expansion. On top of this, the integration of Musk’s AI company xAI introduces a further layer of cost, particularly given the expense of compute, data centres, and energy required to train and run large models.
As some analysts have noted, public markets offer access to capital at a scale private funding cannot easily match, which is likely to be what SpaceX needs to cover the huge costs of tech, infrastructure, and energy needed to scale up.
The Business Behind The Valuation
The strongest commercial foundation for the IPO is Starlink, which has become the most financially successful part of the business. Reports suggest it generated over $10 billion in revenue in 2025 with strong margins, driven by rapid global subscriber growth.
This matters because it provides a predictable, recurring revenue stream that investors can understand and value. In effect, Starlink transforms SpaceX from a project-driven aerospace company into something closer to a telecoms and infrastructure provider.
However, the business itself is becoming more complex. The recent merger with xAI, alongside the integration of the X platform, means SpaceX now operates across communications, AI, defence, and media, rather than being focused purely on space and satellites.
While this may strengthen the long-term strategic story, it also makes valuation more difficult. Some analysts have suggested the merger allows less mature or loss-making parts of the business to be supported by Starlink’s cash flow ahead of the IPO.
Governance And Market Scrutiny
Going public will bring a level of scrutiny that SpaceX has largely avoided as a private company. Quarterly reporting, audited financials, and shareholder accountability will become standard.
Conflicts Of Interest?
There are also broader governance questions. For example, the combination of multiple Musk-controlled companies into a single entity, along with his significant personal stake, raises some familiar concerns around decision-making and possible conflicts of interest.
These concerns are amplified by SpaceX’s role in government infrastructure. For example, the company holds major contracts with NASA and the US Department of Defense, and its Starlink network has become critical communications infrastructure in certain geopolitical situations.
The overlap between private commercial activity and public sector dependency is not new, but at this scale it becomes more visible and more relevant to investors.
Why The Structure Of The IPO Matters
One unusual reported feature is the intention to allocate a larger than normal proportion of shares to retail investors.
If confirmed, this would broaden access to the offering but may also create a shareholder base that is more aligned with Musk’s long-term vision and less focused on short-term governance challenges.
This approach echoes earlier tech IPOs that sought to balance institutional control with wider participation, though it can also reduce pressure from activist investors.
What Does This Mean For Your Business?
For UK businesses, the SpaceX IPO is less about space exploration and more about how modern infrastructure is being built and funded.
The company sits at the intersection of connectivity, defence, and AI, all areas that increasingly underpin day-to-day business operations. Its move to public markets reflects the scale of investment now required to compete in these sectors.
It also highlights a broader trend. The most influential technology platforms are no longer narrow products or services. They are integrated systems combining data, infrastructure, and intelligence, often across multiple industries.
From a risk and strategy perspective, this creates both opportunity and dependency. Businesses benefit from faster innovation and more capable platforms, but they also become more reliant on a smaller number of providers whose decisions are shaped by capital markets as much as technology.
There is also a lesson around scrutiny here. As companies grow in scale and importance, transparency becomes unavoidable. The shift from private to public ownership brings greater visibility, but also greater accountability.
In simple terms, this IPO is not just a milestone for SpaceX. It is a signal that the next phase of technology competition will be defined by access to capital, control of infrastructure, and the ability to operate at global scale.
Security Stop-Press : Tech Firms Declared Targets In Iran Conflict
Iran’s Revolutionary Guard has named 18 major US tech firms as “legitimate targets”, highlighting how commercial technology infrastructure is now being drawn directly into conflict.
The list includes Microsoft, Apple, Google, Nvidia, and Palantir, with Iran claiming that “American ICT and AI companies” are involved in identifying targets. It warned that “for every assassination… one facility… will face destruction,” and advised staff in the region to leave immediately.
This comes amid escalating military activity and increasing use of AI in intelligence and targeting systems.
It is notable that private tech infrastructure, including data centres and cloud platforms, is now being treated as part of the battlefield rather than separate from it.
For businesses, the advice is to review where data is hosted, assess regional exposure, and ensure backup, resilience, and supplier diversification plans are in place.