Featured Article : One In Three Adults Turning To AI For Emotional Support
One in three adults in the UK have used artificial intelligence (AI) for companionship, emotional support or social interaction, according to new research from a government-backed AI safety body, a finding that takes on added significance during the Christmas and New Year period when loneliness and mental health pressures often peak.
Frontier AI Trends
The finding comes from the first Frontier AI Trends Report published by the AI Security Institute (AISI), a body established in 2023 to help the UK government understand the risks, capabilities and societal impacts of advanced AI systems. The report draws on two years of evaluations of more than 30 frontier AI models and combines technical testing with research into how people are actually using these systems in everyday life.
Emotional Impact
While much of the report focuses on national security issues such as cyber capabilities, safeguards and the risk of loss of human control, it also highlights what AISI describes as “early signs of emotional impact on users”. One of the clearest and most surprising indicators of this is how widely conversational AI is already being used for emotional and social purposes.
How Many People Are Using AI For Emotional Support?
The AISI report highlights how “over a third of UK citizens have used AI for emotional support or social interaction”. AISI explains that this figure was uncovered after it carried out a census-representative survey of 2,028 UK adults. The results showed that 33 per cent had used AI models for emotional support, companionship or social interaction in the past year. Also, it seems that usage was not confined to occasional curiosity. For example, 8 per cent of respondents said they used AI for these purposes weekly, while 4 per cent said they did so daily.
A Mix Of AI Tools
The report also notes that people were not relying solely on specialist “AI companion” products. In fact, respondents reported using a mix of general-purpose chatbots and voice assistants, suggesting that emotional and social use is emerging as a mainstream behaviour linked to widely available consumer AI tools.
It should be noted here that AISI isn’t presenting these stats as proof of widespread harm. Instead, it frames the figures as an early signal that deserves attention as AI systems become more capable, more persuasive and more deeply woven into everyday routines.
What Happens When AI Companions Go Offline?
To move beyond self-reported survey data, AISI also examined behaviour in a large online community focused on AI companions. Researchers analysed activity from more than two million Reddit users and paid particular attention to what happened when AI services experienced outages.
According to the report, chatbot outages triggered “significant spikes in negative posts”. In one example, posting volumes increased to more than 30 times the average number of posts per hour. During these periods, many users described what AISI calls “symptoms of withdrawal”, including anxiety, low mood, disrupted sleep and neglect of normal responsibilities.
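AISI has not published the code behind this analysis, but the kind of spike detection it describes is straightforward to illustrate: count posts per hour and flag hours where volume exceeds a multiple of the long-run average. The short Python sketch below is purely illustrative; the "created_at" column name and the 30× threshold are assumptions chosen to mirror the figures quoted in the report, not details of AISI's actual method.

```python
import pandas as pd

def flag_outage_spikes(posts: pd.DataFrame, threshold: float = 30.0) -> pd.DataFrame:
    """Illustrative sketch: flag hours where posting volume exceeds
    `threshold` times the average posts-per-hour baseline.

    Assumes `posts` has a 'created_at' datetime column (one row per post).
    This mirrors the type of spike analysis AISI describes, not its method.
    """
    hourly = (
        posts.set_index("created_at")
             .resample("1h")
             .size()
             .rename("posts_per_hour")
             .to_frame()
    )
    baseline = hourly["posts_per_hour"].mean()                # long-run average volume
    hourly["spike"] = hourly["posts_per_hour"] > threshold * baseline
    return hourly[hourly["spike"]]

# Example usage with your own export of timestamped posts:
# posts = pd.DataFrame({"created_at": pd.to_datetime([...])})
# print(flag_outage_spikes(posts))
```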
Again, AISI is careful not to over-interpret these findings and doesn’t seem to be suggesting that most users are dependent on AI systems or that emotional reliance is inevitable. Instead, it presents the analysis as evidence that some users can form emotional attachments or routines around conversational AI, particularly when it acts as an always-available, non-judgemental listener.
Christmas And New Year
The timing of these findings is particularly relevant during Christmas and the New Year, when loneliness, grief and isolation often intensify across the UK. Seasonal pressures can amplify the reasons people turn to conversational technology in the first place.
Charities have long warned that Christmas can be one of the loneliest times of the year. Shorter days, cold weather, disrupted routines and the expectation of celebration can all heighten feelings of exclusion or loss. For people who are bereaved, estranged from family, living alone or struggling financially, the festive period can magnify existing emotional strain.
Age UK has repeatedly highlighted the scale of seasonal loneliness among older people, saying that one million feel more isolated at Christmas than at any other time of year. Hundreds of thousands will spend Christmas Day without seeing or speaking to anyone, while millions eat dinner alone. Although AISI’s data focuses on adults of all ages, the festive period provides a clear context in which an always-available AI chatbot may feel like a lifeline rather than a novelty.
Mental health charities also point out that access to support can become more difficult over Christmas and New Year. For example, many services run reduced hours, GP appointments are harder to secure, and waiting lists do not pause just because it is the festive season. For people already waiting weeks or months for help, the gap can feel even wider.
In that context, it’s easy to see why AI systems that respond instantly, at any hour, may appear particularly attractive. AISI’s finding that 4 per cent of UK adults use AI for emotional purposes daily suggests that for some people, these tools are already filling gaps that become more visible during holiday periods.
The Youth Mental Health Context In The UK
The adult data from AISI becomes more striking when placed alongside evidence about young people’s mental health and their use of online support tools.
For example, research from the Youth Endowment Fund paints quite a stark picture of teenage mental health in England and Wales. In its Children, Violence and Vulnerability 2025 report, YEF says: “The scale of poor mental health among teenagers is alarming.”
Using the Strengths and Difficulties Questionnaire, a standard 25-item screening tool, YEF found that more than one in four 13–17-year-olds reported high or very high levels of mental health difficulties. YEF says this is equivalent to nearly one million teenage children struggling with their well-being.
Complex and Unmet Needs
Behind this figure lie complex and often unmet needs. For example, a quarter of teenagers reported having a diagnosed mental health or neurodevelopmental condition, such as depression or ADHD. A further 21 per cent suspected they had a condition but had not been formally diagnosed, suggesting many young people are experiencing difficulties without recognition or support.
YEF also reports high levels of distress. Fourteen per cent of teenagers said they had deliberately harmed themselves in the past year, while 12 per cent said they had thought about ending their life. In total, almost one in five teenagers, around 710,000 young people, had self-harmed or experienced suicidal thoughts.
Why Many Young People Are Turning Online
YEF’s research shows that most teenagers with mental health difficulties do talk to someone they trust, usually a parent or friend, but the problem arises when it comes to professional support.
YEF’s research found that more than half of teenagers with a diagnosed mental health condition were receiving no support at all. Also, among those not receiving help, around half were on a waiting list and others were neither receiving treatment nor expecting to receive it.
With services stretched and waiting times long, YEF says it is, therefore, unsurprising that young people are increasingly turning online, e.g., to AI chatbots. In fact, more than half of all teenagers reported using some form of online mental health support in the past year, rising to two-thirds among those with the highest levels of difficulty.
AI Commonly Used
One of the most striking YEF findings is how common AI chatbot use already is. YEF reports that a quarter of all teenage children had turned to AI chatbots for help, making them more widely used than traditional mental health websites or telephone helplines.
Violence
This pattern is even stronger among teenagers affected by serious violence. For example, the YEF found that nine out of ten young people who had perpetrated serious violence said they had sought advice or help online, which is nearly twice the rate of those with no experience of violence.
Festive Pressures And Always-On Technology
Christmas and New Year can be especially challenging for teenagers as well as adults. For example, school routines are disrupted, family tensions can rise, and support services may be harder to reach. For young people already dealing with anxiety, grief or trauma, the festive period can intensify feelings of isolation.
When combined with YEF’s findings about access gaps, this seasonal pressure helps explain why AI chatbots may become a go-to source of support. Unlike helplines or appointments, they do not close for bank holidays, require no waiting, and carry no perceived judgement.
AISI’s report does not suggest that AI should replace human support. Instead, it highlights a reality that becomes particularly visible at Christmas, i.e., conversational AI is already playing an emotional role in people’s lives, not because it was designed as therapy, but because other forms of connection and support are often unavailable when they are needed most.
A Trend With Wider Implications
AISI’s emotional support findings sit alongside its broader warnings about rapidly advancing AI capabilities and uneven safeguards. The institute says AI performance is improving quickly across multiple domains, while protections remain inconsistent.
In that context, the growing emotional role of AI raises some difficult questions. As systems become more persuasive and more human-like in conversation, understanding how people use them during periods of heightened vulnerability, e.g., Christmas and New Year, is becoming increasingly important.
Although neither AISI nor YEF presents AI as the root cause of loneliness or poor mental health, both sets of research seem to point to structural issues such as isolation, violence exposure, long waiting lists and gaps in support. The festive season simply brings those pressures into sharper focus, at the same time as AI tools are more accessible than ever.
Taken together, this research suggests that, for a growing number of people in the UK, AI is becoming less of a productivity tool or novelty and more a part of how they cope, reflect and seek connection.
What Does This Mean For Your Business?
This evidence seems to highlight a gap between emotional need and available human support, with AI increasingly stepping into that space by default rather than by design. Neither the AI Security Institute nor the Youth Endowment Fund suggests that conversational AI is a substitute for professional care or human connection. What their findings do show, however, is that when support is slow, fragmented or unavailable, people will turn to tools that are immediate, private and always on, especially during periods like Christmas and New Year when loneliness and pressure intensify.
For UK businesses, this has practical implications that go beyond technology policy. For example, employers are already grappling with rising mental health needs, winter absenteeism and the wellbeing impact of long waiting lists for NHS and community support. If staff are increasingly relying on AI tools for emotional reassurance, that signals unmet need rather than a tech trend to ignore. Organisations that take mental health seriously may now need to think harder about access to support, signposting, and how seasonal pressures affect staff, customers and communities alike.
For policymakers, regulators, educators and technology developers, the challenge is really achieving the right balance. AI is clearly providing something people value, particularly accessibility and responsiveness. However, the risk lies in leaving that role unexamined as systems become more persuasive and more embedded in daily life. As this research shows, the emotional use of AI is no longer hypothetical, but is already happening at scale, shaped by wider social pressures that Christmas simply makes harder to ignore.
Tech Insight : Problems With Windows 11 Updates Reported
Many MSPs have been reporting that Windows 11 updates are increasingly causing upgrade failures, BitLocker lockouts and unexpected behaviour, and here we look at what may be going wrong, why it is happening now, and what can realistically be done to prevent it.
The Pattern Many MSPs Are Seeing on the Ground
It has been reported across organisations supported by MSPs that Windows 11 feature updates and security patches are failing in ways that feel inconsistent, hard to predict and difficult to explain. In practical terms, this has included devices that meet Microsoft’s published requirements not receiving updates at all, updates failing part way through installation, and systems rebooting directly into BitLocker recovery screens.
What has made this particularly frustrating is that many of the affected machines appear otherwise healthy. For example, disk space is available, policies are applied correctly, and in some cases manual upgrades succeed. At scale, however, manual intervention doesn’t translate into a sustainable approach, particularly when large numbers of devices behave differently. As a result, for MSPs, Windows updates are increasingly becoming a visible support issue rather than a background maintenance task.
Issues Acknowledged
This experience appears to align with wider reporting beyond MSP communities. For example, in November 2025, Microsoft acknowledged issues with specific Windows 11 security updates that caused devices to enter BitLocker recovery mode after installation. These incidents affected supported business versions of Windows 11 and prompted follow-up guidance and remediation updates.
Updates That Refuse to Install or Fail Without Warning
One of the most frequently reported problems is Windows 11 feature updates either not being offered to eligible devices or failing without presenting a clear error message.
A recurring technical factor appears to be the EFI system partition (a small hidden disk area that helps Windows start). Many devices originally deployed with Windows 10 were created with EFI partitions of around 100 MB. While this was sufficient under earlier Windows servicing models, it is increasingly inadequate for modern recovery and update processes.
In many cases, it seems that when Windows attempts to stage a feature update and can’t write the required boot or recovery components to the EFI partition, the update may fail silently or be blocked entirely. Windows Update does not always highlight this limitation clearly, so investigation often focuses on policies, drivers or hardware compatibility, when the underlying cause is actually related to disk layout and boot configuration.
It’s been reported that this lack of visibility has added complexity to diagnosing update failures, particularly in mixed hardware environments.
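There is no single alert that tells administrators an EFI system partition is too small, but its size can be checked before an update is attempted. The Python sketch below simply wraps the standard PowerShell Get-Partition cmdlet and filters on the documented EFI system partition GUID, as one illustrative way to flag potentially undersized partitions across a fleet. The 260 MB threshold is an assumption for the example rather than a figure from Microsoft or the reports above, so adjust it to your own deployment standards.

```python
import json
import subprocess

# GUID that identifies an EFI system partition on GPT disks (a documented, fixed value).
EFI_SYSTEM_PARTITION_GUID = "{c12a7328-f81f-11d2-ba4b-00a0c93ec93b}"

# Illustrative threshold only: assumed minimum size in MB before flagging a device.
MIN_EFI_SIZE_MB = 260

def undersized_efi_partitions() -> list[dict]:
    """Return EFI system partitions smaller than MIN_EFI_SIZE_MB.

    Calls the standard PowerShell Get-Partition cmdlet and parses its JSON output.
    Intended as a sketch for fleet reporting, not a supported Microsoft tool.
    """
    ps_command = (
        "Get-Partition | "
        f"Where-Object GptType -eq '{EFI_SYSTEM_PARTITION_GUID}' | "
        "Select-Object DiskNumber, PartitionNumber, Size | ConvertTo-Json"
    )
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", ps_command],
        capture_output=True, text=True, check=True,
    )
    data = json.loads(result.stdout) if result.stdout.strip() else []
    partitions = data if isinstance(data, list) else [data]  # a single result is not wrapped in a list
    return [p for p in partitions if p["Size"] / (1024 * 1024) < MIN_EFI_SIZE_MB]

if __name__ == "__main__":
    for p in undersized_efi_partitions():
        size_mb = p["Size"] / (1024 * 1024)
        print(f"Disk {p['DiskNumber']} partition {p['PartitionNumber']}: "
              f"{size_mb:.0f} MB EFI partition may block feature updates")
```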
Why BitLocker Is So Often Involved
BitLocker, a built-in Windows tool that encrypts a device’s data to protect it if lost or stolen, has featured prominently in many reported update issues, not because encryption itself is malfunctioning, but because of how closely it is now integrated into the Windows boot process.
For example, many Windows 11 devices ship with BitLocker or device encryption enabled by default, especially where users sign in using Microsoft or Entra ID accounts during setup. While this improves baseline data protection, it also means that updates interact directly with encrypted boot components.
In mid-November, Microsoft confirmed that certain Windows 11 security updates could trigger BitLocker recovery prompts after installation, even when no obvious configuration changes had been made. Users were presented with requests for 48-digit recovery keys, leading to a noticeable increase in support calls where keys were not immediately available.
In some reported cases, recovery environments were also affected, with peripherals such as USB keyboards and mice not responding at the recovery prompt. Microsoft subsequently issued emergency fixes to restore recovery environment functionality, underlining the seriousness of the issue.
Windows 11 Upgrades and the End of Windows 10 Support
These update problems are occurring against the backdrop of a wider transition to Windows 11. Windows 10 reached the end of mainstream support in October 2025, prompting many organisations to accelerate upgrade plans. While extended security updates remain available in limited scenarios, Microsoft has positioned Windows 11 as the primary supported desktop platform going forward.
As a result, businesses that delayed upgrading are now moving in larger numbers, often across device fleets that include both new and older hardware. This has increased the volume of feature updates being deployed and exposed edge cases that may not have appeared as frequently during earlier, more gradual upgrade cycles.
Windows 11 itself has also followed a faster cadence of servicing updates, particularly during the rollout of later builds in 2025. While this approach enables quicker responses to security issues, it also increases the likelihood that update related problems will surface in real world environments before they are fully resolved.
Why These Issues Are Becoming More Common
These problems are becoming more common due to a combination of increased platform complexity, faster update cycles and stronger default security settings within Windows 11. For example:
– Growing platform complexity. Windows 11 is required to operate securely across a broad range of hardware, firmware versions and security configurations. Each update must account for UEFI behaviour (how the system firmware controls the boot process), TPM states (the status of the security chip that stores encryption keys), Secure Boot, encryption, device drivers and third party security software, all interacting simultaneously. As default security settings have been strengthened, the tolerance for inconsistency has narrowed. Relatively small changes in update handling can have disproportionately large effects once deployed at scale.
– Faster update cycles. Microsoft now releases updates more frequently than in previous Windows generations. While this improves responsiveness to vulnerabilities, it reduces the amount of time updates spend being exercised across the full range of business configurations before wide deployment. MSPs often encounter these edge cases early because they support diverse environments rather than uniform device fleets.
– Encryption as a default state. With encryption now widely enabled by default, the consequences of update failures have changed. When issues occur during boot related updates, devices may refuse to start without recovery credentials rather than reverting automatically. This has raised the operational impact of update failures, even where the underlying issue is relatively contained.
What Has Helped Reduce the Impact
Across wider industry reporting and real world experience, several patterns have now emerged around which measures have helped limit disruption when Windows 11 update issues occur.
For example, testing feature updates and major security patches on a small number of representative devices has helped surface issues early. Staged deployment, rather than immediate broad rollout, has allowed problems to be identified before they affect larger user groups.
Centralised storage of BitLocker recovery keys has also proven critical where recovery prompts occur, reducing downtime and support escalation. In environments where EFI partition limitations are known, addressing these during rebuilds or hardware refresh cycles has reduced repeated update failures.
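A practical starting point for that preparation is confirming, before updates roll out, that each encrypted system drive actually has a numerical recovery password and that it has been backed up centrally (for example to Entra ID or Active Directory). The sketch below is a minimal illustration using the built-in manage-bde command-line tool; it only checks that a recovery password protector exists on the local machine and does not verify escrow, which would normally be handled through your management tooling.

```python
import subprocess

def has_recovery_password(drive: str = "C:") -> bool:
    """Return True if the drive has at least one BitLocker numerical recovery password.

    Uses the built-in manage-bde tool (requires an elevated prompt); the output
    parsing is deliberately simple, so treat this as an illustrative check rather
    than production tooling.
    """
    result = subprocess.run(
        ["manage-bde", "-protectors", "-get", drive],
        capture_output=True, text=True,
    )
    # manage-bde lists each protector type; recovery keys appear as "Numerical Password".
    return "Numerical Password" in result.stdout

if __name__ == "__main__":
    drive = "C:"
    if has_recovery_password(drive):
        print(f"{drive} has a recovery password protector; confirm it is escrowed centrally.")
    else:
        print(f"WARNING: {drive} has no numerical recovery password protector.")
```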
Alongside these technical measures, clearer explanations of how modern Windows updates interact with security features and boot environments have become more important as businesses try to understand whether issues are isolated incidents or part of wider platform behaviour.
What Does This Mean For Your Business?
It seems that recently reported Windows 11 update problems are not just the result of a single fault or a sudden drop in quality, but the outcome of a more complex platform colliding with faster release cycles and a large, overdue upgrade push away from Windows 10. For MSPs, this has changed the nature of updates from something that could largely run in the background into an operational risk that needs closer attention, clearer communication and better preparation. For Microsoft and hardware vendors, it highlights how small changes at the boot or recovery level can have wide consequences once deployed at scale.
For UK businesses, the practical takeaway is that disruption linked to updates does not automatically indicate neglect or mismanagement. For example, many of the issues now being seen are tied to how modern Windows versions handle encryption, recovery environments and legacy device layouts during upgrades. Understanding that context matters, particularly as more organisations complete their move to Windows 11 and rely on it as their primary supported platform.
When update problems do arise, speaking to your IT support provider is often the safest and most effective first step. This is because they are best placed to confirm whether an issue is local or part of a wider pattern, to recover access without risking data, and to put measures in place that reduce the chance of repeat disruption. As Windows continues to evolve, that relationship between businesses, their IT support companies, and the platform itself is becoming more important, not less.
Tech News : No More £100 Contactless Limit From March
The UK’s £100 contactless card payment limit is set to be lifted from March 2026, after the financial regulator confirmed it will remove the fixed cap and give banks greater freedom to decide how contactless payments are handled.
Not Forced To Do It Immediately
The change, announced by the Financial Conduct Authority (FCA), does not force banks to raise limits immediately, but opens the door for higher or unlimited contactless payments where firms believe the fraud risk is low.
How Contactless Limits Currently Work
Under existing rules, shoppers using a physical debit or credit card can make a single contactless payment of up to £100 without entering their four digit PIN. There are also cumulative controls in place, meaning customers are typically asked to verify with a PIN after five contactless transactions or once total spending reaches around £300. These safeguards are designed to limit losses if a card is lost or stolen, while still allowing fast payments for everyday purchases.
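Behind the scenes, these cumulative controls amount to a simple counter that the card issuer resets whenever the PIN (or another form of strong authentication) is used. The Python sketch below models that logic in outline, using the £100 single-payment cap and the roughly £300 / five-transaction cumulative thresholds described above; real issuer and scheme implementations differ in detail, so treat this as an illustration of the concept rather than how any particular bank enforces it.

```python
from dataclasses import dataclass

# Thresholds as described above; real issuers may apply different values and rules.
SINGLE_LIMIT_GBP = 100.00
CUMULATIVE_LIMIT_GBP = 300.00
MAX_TAPS_SINCE_PIN = 5

@dataclass
class ContactlessCounter:
    """Tracks contactless spend since the last PIN verification (illustrative model only)."""
    total_since_pin: float = 0.0
    taps_since_pin: int = 0

    def authorise(self, amount: float) -> str:
        """Return 'approve' or 'pin_required' for a contactless attempt."""
        if amount > SINGLE_LIMIT_GBP:
            return "pin_required"                      # single-payment cap exceeded
        if self.taps_since_pin + 1 > MAX_TAPS_SINCE_PIN:
            return "pin_required"                      # too many taps since last PIN
        if self.total_since_pin + amount > CUMULATIVE_LIMIT_GBP:
            return "pin_required"                      # cumulative spend cap reached
        self.total_since_pin += amount
        self.taps_since_pin += 1
        return "approve"

    def pin_verified(self) -> None:
        """Reset the counters after a successful PIN entry."""
        self.total_since_pin = 0.0
        self.taps_since_pin = 0

# Example: five small taps are approved, the sixth triggers a PIN prompt.
card = ContactlessCounter()
print([card.authorise(20.00) for _ in range(6)])
# ['approve', 'approve', 'approve', 'approve', 'approve', 'pin_required']
```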
Mobile Payments Different
Mobile payments work differently. For example, digital wallets such as Apple Pay and Google Pay do not have a fixed transaction limit, because payments are authenticated using device security such as fingerprint scanning or facial recognition. That distinction has become more noticeable as smartphone payments have grown in popularity.
What Will Change From March 2026?
From March 2026, the FCA will remove the regulatory requirement that sets a single national £100 limit on contactless card payments.
Instead, banks and payment providers with strong fraud controls will be allowed to set their own limits, including the option of having no fixed limit at all. Firms are also being encouraged to give customers more control, such as allowing them to choose their own contactless limit or turn contactless payments off entirely.
The FCA claims that this is about providing flexibility rather than mandating change. For example, providers will decide if and when they adjust limits, and many are expected to keep the current £100 cap for the foreseeable future.
Why The Change?
The regulator’s argument is that contactless payments have become the default way many people pay, and rigid limits can become less practical over time.
Contactless usage in the UK is now extremely high. For example, research cited by the FCA, carried out by Barclays, found that almost 95 per cent of all eligible in-store card transactions were contactless in 2024. Against that backdrop, the FCA believes fixed rules set several years ago risk becoming outdated as prices rise and payment technology improves. As David Geale, executive director of payments and digital finance at the FCA, says: “Contactless is people’s favoured way to pay. We want to make sure our rules provide flexibility for the future, and choice for both firms and consumers.”
The FCA has also linked the move to its wider work on supporting economic growth and prioritising digital solutions, describing the change as part of a broader programme of regulatory reform.
Consumer Choice
A key part of the FCA’s announcement is the emphasis on customer control. For example, rather than simply raising limits across the board, the regulator is encouraging banks to allow people to decide what works for them. Many high street banks already let customers set their own contactless limits or disable the feature entirely through mobile banking apps.
This means that someone concerned about fraud could switch contactless off, while someone making frequent higher value purchases could choose a higher personal limit to avoid repeated PIN prompts. Others may decide to keep tighter controls in place to help manage spending.
Any provider that changes its approach will be required, under the FCA’s Consumer Duty rules, to communicate those changes clearly and support good customer outcomes.
Fraud Protection And Reimbursement Rules
Concerns about fraud sit at the heart of the debate around higher contactless limits. The obvious fear is that if a card is stolen, a criminal could spend more before the cardholder realises and cancels it.
The FCA has stressed that existing consumer protections remain unchanged. Banks and payment firms must reimburse customers for unauthorised contactless fraud, such as spending on a lost or stolen card, unless there is evidence of gross negligence or complicity.
The regulator also believes that removing a blunt national cap will push firms to invest more in sophisticated fraud detection rather than relying on fixed limits alone.
In its press release, the FCA said the greater flexibility “will incentivise firms to step up their fraud prevention, giving consumers greater protection and peace of mind”.
How Big A Problem Is Contactless Fraud?
Industry data suggests contactless fraud rates are relatively low compared with other forms of card fraud. For example, figures published by UK Finance, which represents the UK banking sector, show that contactless fraud amounted to around 1.2p for every £100 spent using contactless cards. While any fraud is significant in absolute terms, this rate is lower than for card fraud overall.
The FCA has acknowledged that raising limits could increase potential losses if controls are not robust. In modelling shared during earlier discussions, it warned that higher limits could drive increased fraud if not matched with stronger monitoring, alerts, and transaction analysis.
That risk is one reason the regulator says only firms with strong fraud controls should take advantage of the new flexibility.
Why Most People May Not See Immediate Change
Despite the headline change, many customers may notice little difference in the short term. This is because, based on feedback from banks and payment service providers, the FCA says most firms are likely to maintain their existing contactless limits for now, even after the rules change in March 2026. Not only is a consistent national limit simple for customers to understand, but sudden changes could create confusion or anxiety around fraud. For banks, there are also operational considerations, including customer support, dispute handling, and the need to ensure monitoring systems can cope with higher value transactions.
How The £100 Limit Came About
The UK’s contactless limit has never been static. When contactless cards were introduced, back in 2007, the maximum transaction value was just £10. That figure rose gradually over time, reflecting growing trust in the technology and improved security.
The limit reached £30 by 2015, before increasing more rapidly during the Covid pandemic, when contactless payments were promoted as a hygienic alternative to cash. It rose to £45 in 2020 and then to £100 in October 2021.
The FCA’s latest move marks a shift away from a single, nationally defined figure towards a more flexible, provider-led model.
Concerns
It’s worth noting here, however, that the regulator has accepted that this is not a change driven by strong consumer demand. For example, in its own survey work during consultation, a large majority of consumers said they did not want the £100 limit changed.
Critics have also raised concerns beyond fraud. For example, some academics argue that reducing friction at the point of payment can make it easier to overspend, particularly on credit cards where people are using borrowed money. Financial abuse charities have also warned that easier spending could be misused in controlling relationships, especially where an abuser has access to a card or monitors transactions online.
Those concerns sit alongside broader debates about the move away from cash, which remains important for some vulnerable groups.
What Businesses And Retailers Think
Parts of the retail and hospitality sector have welcomed the prospect of greater flexibility, arguing that faster payments can improve customer experience and reduce queues. For example, Kate Nicholls, chair of UKHospitality, said: “Making life easier for consumers is a positive for any hospitality and high street business, and I’m pleased the FCA is bringing forward this change.”
She added, “Contactless has increasingly become the preferred payment method of choice for many people and lifting the limit can mean quicker and easier experiences for consumers. While many people still prefer to use cash or chip and PIN, this change adds much needed flexibility for providers and consumers.”
For retailers, much is likely to depend on how consistently banks apply the new freedom, and how clearly changes are explained to customers at the point of payment.
What Does This Mean For Your Business?
What the FCA has actually done is remove a fixed rule rather than impose a new one. The £100 limit is not being abolished overnight, and most people are unlikely to see any immediate difference at the till. Instead, the regulator is handing responsibility back to banks and payment firms, with the expectation that flexibility is matched by stronger fraud controls and clearer communication with customers.
For consumers, the impact will depend largely on how their own bank responds. Some may eventually be offered higher limits or more personalised controls, while others may see no change at all. The emphasis on customer choice suggests that people concerned about fraud, overspending, or personal safety should retain meaningful ways to limit or disable contactless payments if they wish.
For UK businesses, particularly retailers and hospitality venues, the change has the potential to reduce friction at checkout and speed up higher value transactions over time. That could improve customer flow and reduce queueing, but only if changes are applied consistently and explained clearly. For example, a patchwork of different limits across banks could create short term confusion for staff and customers alike.
Banks and payment providers now carry greater responsibility and, if they choose to raise or remove limits, they will need to demonstrate that fraud monitoring, alerts, and reimbursement processes are robust enough to cope with higher risk. The FCA has been clear that flexibility is conditional, not automatic, and firms will be judged on outcomes rather than intent.
More broadly, the move reflects a shift in how the UK approaches everyday payments. As digital and contactless methods dominate, regulation is moving away from fixed national thresholds towards adaptive controls shaped by technology, behaviour, and risk. Whether that balance holds will depend on how carefully the next phase is handled by all sides involved.
Tech News : Pentagon To Deploy Elon Musk’s Grok AI For Government Use
The US Department of War has confirmed plans to integrate Elon Musk’s xAI models, including Grok, into its internal GenAI.mil platform, extending advanced artificial intelligence tools to millions of military and civilian personnel from early 2026.
xAI
The agreement, announced in December, will see the Department of War add xAI for Government to GenAI.mil, a bespoke generative AI environment designed to support everyday administrative work as well as sensitive defence and national security tasks. The move forms part of a broader effort by the Pentagon to scale up artificial intelligence use across the US military and federal workforce, while maintaining strict security controls.
A Note On The Department’s Name
It’s worth quickly noting here that, while recent executive actions and official communications have referred to the organisation as the Department of War, its formal and legal name actually remains the Department of Defense. For example, under US law, a permanent name change would require an Act of Congress, rather than an executive order from President Trump alone. As a result, references to the Department of War in this article currently reflect political direction and branding rather than a completed legislative change, with the Department of Defense still recognised as the official legal entity.
What Is GenAI.mil And Why Does It Matter?
GenAI.mil is the Department of War’s central platform for deploying generative AI tools internally. Launched earlier in 2025, it is designed to give authorised personnel access to large language models and AI agents within a controlled government environment, rather than relying on public consumer tools.
The platform is operated by the Pentagon’s Chief Digital and AI Office and is intended to support a wide range of use cases, from drafting documents and analysing data to supporting logistics planning and operational decision making. Crucially, it is built to operate at Impact Level 5, a US government cloud security standard that allows systems to handle Controlled Unclassified Information, or CUI. CUI refers to sensitive government data that is not classified but still requires protection, such as operational plans, procurement data, and internal communications.
By integrating xAI’s Grok models into GenAI.mil, the Department of War says it will expand the range of frontier grade AI capabilities available to its workforce, while keeping those tools within an environment approved for sensitive government use.
What xAI And Grok Bring To The Platform
xAI is Elon Musk’s artificial intelligence company, launched in 2023 and best known for developing Grok, a large language model closely integrated with the social media platform X. Grok has been positioned by xAI as a real time, reasoning focused AI system, with the ability to draw on live data streams and respond to current events more directly than many competing models.
Under the agreement, Department of War personnel will gain access to xAI’s government specific AI offerings, including application programming interfaces, agentic tools, and AI models optimised for public sector workloads. The Pentagon has confirmed that Grok models will be available within Impact Level 5 environments, allowing them to be used in workflows that involve sensitive but unclassified data.
The Department has also highlighted the availability of real time global insights derived from X as a feature of xAI for Government. According to official statements, this is intended to provide analysts and planners with faster awareness of emerging developments, trends, and public information signals.
xAI described the partnership as part of its mission to deliver advanced AI tools to public institutions. In a statement released alongside the announcement, the company said the agreement reflected its “longstanding support of the United States Government” and its aim to make cutting edge industry technology available for national benefit.
How This Fits Into The Pentagon’s Wider AI Strategy
The agreement with xAI forms part of a broader Pentagon strategy to expand the use of advanced artificial intelligence across defence and government operations. For example, back in July 2025, the Department of War awarded up to $200 million each to four AI companies, including xAI, to support the development of defence ready AI systems. The other companies involved were Anthropic, Google, and OpenAI, reflecting a deliberate multi vendor approach rather than reliance on a single provider.
This strategy has been framed by the Pentagon as a way to avoid reliance on any single AI supplier, while ensuring access to a broad range of models and technical approaches. In early December, the Department integrated Google’s Gemini for Government into GenAI.mil, making xAI the second provider of so called frontier AI models on the platform.
Speaking earlier this year at the launch of Gemini for Government, Secretary of Defense Pete Hegseth described AI as a critical enabler for the modern military. “AI tools present boundless opportunities to increase efficiency,” he said, adding that the Department was committed to seeing AI deliver tangible operational benefits across defence and government.
Why Grok’s Inclusion Has Raised Questions
Despite the Department of War’s emphasis on security controls and oversight, the decision to integrate Grok has attracted scrutiny from politicians, policy experts, and technology analysts for several different reasons. For example, some concerns relate to the behaviour of the model itself, while others focus on governance, political influence, and the wider context surrounding Elon Musk’s relationship with the current administration.
Grok has previously generated controversial and inaccurate outputs in its consumer facing form, including false claims about historical events, natural disasters, and election outcomes, as well as politically charged responses. Critics argue that these incidents raise questions about how reliably the model can be constrained, even when deployed in more tightly controlled government environments.
Other concerns centre on Grok’s training data and real time inputs. For example, much of the model’s context is drawn from content on X, the social media platform owned by Musk, which has undergone significant changes to moderation policies and enforcement since his acquisition. Analysts have warned that this increases the risk of bias, misinformation, or unverified narratives influencing AI outputs, particularly where models are promoted as offering real time global insights.
The partnership has also been viewed through a political lens. For example, Musk has become an increasingly prominent figure within President Donald Trump’s political orbit, including public support during the 2024 election campaign and his role in the short lived Department of Government Efficiency, or DOGE. That initiative, which was framed as a cost cutting and reform effort across federal agencies, led to large scale layoffs before facing legal challenges and being dismantled earlier in 2025.
Against that backdrop, some observers have questioned whether xAI’s expanding role within the Department of War could be perceived as a conflict of interest, particularly given the scale and sensitivity of defence AI programmes. While no evidence has been presented that procurement rules were breached, critics argue that the close alignment between Musk and the administration heightens the need for transparency around how vendors are selected, governed, and overseen.
Political concerns have also been voiced publicly. For example, in September, Senator Elizabeth Warren described the Pentagon’s planned deal with xAI as “uniquely troubling”, citing concerns about Grok’s accuracy when responding to questions about major events and emergencies. She warned that errors produced by AI systems could carry serious consequences when used in government or defence related contexts.
Technology analysts have further questioned the reliance on live social media data for decision support. This is because open platforms such as X are known to be vulnerable to coordinated misinformation campaigns, automated accounts, and rapidly spreading false narratives, particularly during geopolitical crises. Critics argue that without clear safeguards, such data streams could complicate rather than clarify situational awareness for government users.
Safeguards And Limits Highlighted By The Department
The Department of War has, however, sought to address some of these concerns by stressing that Grok’s deployment within GenAI.mil will differ from its public version. For example, officials have said that government deployments will include additional controls, usage policies, and human oversight, and that AI outputs will be used as support tools rather than authoritative sources.
Pentagon officials have also emphasised that GenAI.mil is designed to give users access to multiple models, allowing outputs to be compared and validated rather than accepted at face value. This reflects a growing recognition within defence and intelligence communities that generative AI systems can assist analysis but must not replace professional judgement.
The Department has not published detailed technical information about how real time data from X will be filtered or validated within government environments, though it has said that security and compliance requirements remain unchanged.
What Does This Mean For Your Business?
The Pentagon’s decision to bring Grok into GenAI.mil highlights how quickly generative AI is becoming embedded in the machinery of government, even in environments where errors, bias, or misjudgement carry serious consequences. The US Department of Defense, now widely referred to as the Department of War, is clearly betting that the productivity and analytical gains on offer outweigh the risks, provided models are fenced in by controls, oversight, and a multi-vendor approach that avoids dependence on any single supplier. At the same time, the scrutiny surrounding Grok shows that not all frontier models are viewed equally, and that questions around training data, governance, and political proximity now sit alongside technical capability in public sector AI decisions.
For other stakeholders, the move sharpens several fault lines. For example, policymakers and oversight bodies will be under pressure to demonstrate that procurement decisions remain robust and impartial, particularly when suppliers are closely linked to political leadership. Analysts and military users will need to treat real time AI assisted insights as prompts rather than answers, especially when those insights draw from open social platforms vulnerable to manipulation. AI vendors, meanwhile, are being judged not just on model performance, but on transparency, restraint, and their ability to operate credibly in high trust environments.
For UK businesses, the implications are indirect but important. For example, defence and government adoption often sets expectations that later filter into regulated industries, public procurement frameworks, and critical infrastructure projects. This deployment reinforces the idea that AI tools will increasingly be used alongside sensitive data, but only where governance, auditability, and human accountability are clearly defined. UK firms developing or deploying AI will be expected to meet similar standards if they want to work with government or highly regulated clients, while organisations adopting AI internally should take note of the Pentagon’s emphasis on comparison, validation, and professional judgement rather than blind automation.
Company Check : Ofcom Investigates BT and Three Over 999 Call Failures
Ofcom has opened formal investigations into BT and Three following separate UK-wide mobile network failures this summer that left some customers unable to connect 999 emergency calls.
Two Major Outages
The investigations centre on two major outages, one affecting Three customers in June and another impacting BT and EE customers in July, both of which disrupted basic voice services across large parts of the country. Ofcom said it is examining whether the companies took sufficient steps to prevent the incidents and to protect access to emergency services, which are treated as a critical national function under UK telecoms regulation.
What Happened During The Summer Outages?
The first incident occurred on 25 June, when thousands of customers on the Three network reported being unable to make or receive voice calls. The outage was nationwide and affected not only Three customers but also users on virtual operators that rely on its infrastructure, including iD Mobile. While mobile data services largely remained available, voice calls failed to connect, including calls to emergency services.
Three later said the problem was triggered by “an exceptional spike in network traffic” caused by a third-party software configuration change. The company acknowledged that the disruption affected access to 999 services and informed Ofcom at the time.
A second incident followed on 24 and 25 July, when customers on BT and its mobile network operator EE reported similar problems. In this case, BT attributed the disruption to a software issue that affected call interconnection between networks. As a result, some customers were unable to make or receive calls, including calls to emergency services, despite having signal on their devices.
Ofcom said both incidents caused UK-wide disruption and affected millions of mobile users across the two networks.
Why 999 Call Failures Raise Regulatory Stakes
While mobile outages are not uncommon, failures that prevent access to emergency services significantly increase regulatory scrutiny. For example, under UK law, telecoms providers have specific obligations to ensure that 999 and 112 calls can be made reliably, even during periods of network stress or partial failure.
Ofcom said providers must take “appropriate and proportionate” measures to identify risks to their networks and to plan for scenarios that could compromise availability, performance or functionality. These duties extend beyond preventing outages altogether and include effective monitoring, rapid response and mitigation when failures occur.
In announcing the investigations, Ofcom said it would assess “whether there are reasonable grounds to believe that BT and Three have failed to comply with their regulatory obligations”.
The regulator has not suggested that enforcement action is inevitable, but it does have the power to impose financial penalties, require remedial changes to network design or processes, or issue formal directions if breaches are found.
Network Resilience
Ofcom has placed increasing emphasis on network resilience in recent years, particularly as the UK becomes more reliant on mobile connectivity for essential services. For example, its Network and Service Resilience Guidance sets out expectations for how providers should design and operate networks to reduce single points of failure and limit the impact of incidents.
The guidance states that firms are expected to “identify and reduce the risks of disruption” and to take steps to prevent “adverse effects arising from any such compromises”. Where outages do occur, providers are expected to respond quickly, communicate clearly with customers and learn lessons to reduce the likelihood of recurrence.
Commenting on the investigations, Ofcom said: “The importance of connectivity cannot be underestimated. People rely on their mobile phones to stay in touch, to work, and to contact the emergency services.”
The regulator has made clear that customer impact, including the duration and scale of disruption, will be a central factor in assessing whether obligations were met.
Industry Reaction And Company Responses
Both companies have said they are cooperating fully with the investigation. A spokesperson for BT Group said the company apologised to customers affected by the July incident and would “co-operate fully with Ofcom throughout the investigation”. BT has previously said the outage was caused by a software issue rather than a hardware failure, and that services were restored once the fault was identified.
Three UK said it had engaged openly with Ofcom since the June outage and would continue to do so. The company said the disruption followed a third-party software configuration change that led to unexpected traffic levels on its voice network.
Ofcom has previously made clear that outages can still occur even where networks are designed with resilience in mind, but that providers are expected to have robust processes in place to detect faults quickly, limit their impact, and identify lessons that reduce the risk of similar incidents in future.
The regulator’s guidance stresses that compliance is not limited to preventing failures outright. It also includes effective planning, monitoring and response when services are disrupted, particularly where access to emergency calls is affected.
Previous Enforcement Action
The investigations also take place against a backdrop of previous enforcement action in the sector. For example, back in July 2024, BT was fined £17.5 million after Ofcom found a “catastrophic failure” in its emergency call handling service had prevented around 14,000 999 calls from connecting during a ten-hour outage in June 2023.
Three has also previously been fined by Ofcom. In 2017, the company was ordered to pay £1.9 million after a network failure in 2016 left customers without service. Ofcom concluded at the time that the disruption could have been prevented with better planning and safeguards.
More recently, Three’s UK operations merged with Vodafone to form VodafoneThree, creating the UK’s largest mobile network with around 27 million customers. While the summer outage occurred before the merger was completed, the investigation comes at a sensitive time as the combined business works to integrate networks and systems.
Why The Issue Matters More Now
The timing of the outages has heightened concern because mobile networks are increasingly treated as critical infrastructure. As the UK progresses with the digital landline switchover, many households and vulnerable users are becoming more dependent on mobile connectivity for emergency communication.
Ofcom has repeatedly warned that resilience expectations apply not just to traditional landlines but to all networks that support access to emergency services. The regulator has also highlighted the need for additional safeguards for users who rely on telecare systems, personal alarms or medical monitoring that may depend on voice connectivity.
Government guidance has echoed these concerns, with ministers previously stating that communications providers have statutory obligations to ensure networks are “appropriately resilient”.
What Ofcom Will Examine Next
Ofcom said its investigations will focus on the facts surrounding each incident, including how the faults arose, how quickly they were detected, and what steps were taken to restore services and protect emergency calling. It will also examine whether risk assessments, change management processes and contingency planning were adequate.
The regulator has not set a public timetable for completing the investigations and outcomes could range from no further action if compliance is found, through to enforcement measures if breaches are identified.
What Does This Mean For Your Business?
The investigations place renewed focus on how mobile networks are operated, governed and tested in practice, particularly where basic voice services are relied on for public safety rather than convenience. For Ofcom, the outcome will help clarify how existing resilience rules are being applied in real incidents and whether further intervention is needed to ensure emergency access is protected as networks become more complex and software-driven.
For telecoms providers, the cases highlight how resilience is being judged across the full lifecycle of network management, from configuration changes and third-party dependencies through to detection, response and communication. The fact that both incidents involved software-related failures rather than physical damage is likely to be closely examined, especially as automation and network virtualisation play a growing role in UK mobile infrastructure.
There are also wider implications for UK businesses that depend on mobile voice services for operational continuity, safety procedures and customer contact. For example, prolonged or widespread loss of calling capability, even where data services remain available, can disrupt frontline operations, lone worker safety and emergency escalation processes. The investigations may prompt organisations to recheck how resilient their own communications arrangements are, particularly where mobile phones are the primary or sole method of contact.
For consumers, emergency services and vulnerable users, the cases reinforce why mobile networks are now treated as critical infrastructure rather than optional utilities. As the digital landline switchover continues and reliance on mobile connectivity deepens, the tolerance for failures affecting 999 access appears to be narrowing. How Ofcom responds, and what it requires of operators as a result, is likely to shape expectations around network reliability and accountability well beyond these two incidents.
Security-Stop Press : Worst Data Breaches of 2025 Show Cyber Attacks Are About Disruption
The most serious cyber incidents of 2025 showed a clear move away from data theft towards operational disruption and economic damage.
Globally, attackers exploited trusted platforms and supply chains, with US federal systems breached repeatedly and the Clop group stealing sensitive data by exploiting a previously unknown flaw in Oracle E-Business Suite. More than one billion records were also accessed from Salesforce environments after hackers compromised connected third-party platforms rather than Salesforce itself.
In the UK, disruption had immediate consequences. Cyber attacks on Marks & Spencer and Co-op exposed customer data and knocked systems offline, with Co-op later confirming all 6.5 million members were affected. The Cyber Monitoring Centre estimated the retail attacks caused up to £440 million in economic damage.
The most severe UK case involved Jaguar Land Rover, where a cyberattack halted production for months and destabilised its supply chain, prompting a £1.5 billion government guarantee to protect jobs and suppliers.
For businesses, the lesson from 2025 is that resilience is critical. Guidance from the National Cyber Security Centre emphasises patching, limiting third party access, tested backups, and rehearsed incident response, because fast recovery is now the key defence against disruptive attacks.