Major Insurers Say AI Is Too Risky to Cover

Insurers on both sides of the Atlantic are warning that artificial intelligence may now be too unpredictable to insure, raising concerns about the financial fallout if widely used models fail at scale.

Anxiety

As recently reported in the Financial Times, anxiety across the insurance sector has grown sharply in recent months as companies race to deploy generative AI tools in customer service, product design, business operations, and cybersecurity. For example, several of the largest US insurers, including Great American, Chubb, and W. R. Berkley, have now reportedly asked US state regulators for permission to exclude AI-related liabilities from standard corporate insurance policies. Their requests centre on a growing fear that large language models and other generative systems pose what the sector calls “systemic risk”, where one failure triggers thousands of claims at the same time.

What Insurers Are Worried About

The recent filings describe AI systems as too opaque for actuaries to model, with one filing, quoted by the Financial Times, describing LLM outputs as “too much of a black box”. Actuaries normally rely on long historical datasets to predict how often a specific type of claim might occur. Generative AI has only been in mainstream use for a very short period, and its behaviour is influenced by training data and internal processes that are not easily accessible to external analysts.

The Central Fear

The industry’s central fear is not an isolated error but the possibility that a single malfunction in a widely used model could affect thousands of businesses at the same time. For example, a senior executive at Aon, one of the world’s largest insurance brokers, outlined the challenge earlier this year, noting that insurers can absorb a £300 to £400 million loss affecting one company, but cannot easily survive a situation where thousands of claims emerge simultaneously from a common cause.

The concept of “aggregation” risk is well understood within insurance. For example, cyberattacks, natural disasters, and supply chain failures already create challenges when losses cluster. However, what makes AI different is the speed at which a flawed model update, inaccurate output, or unexpected behaviour could spread across global users within seconds.

Real Incidents Behind the Rising Concern

Several high-profile cases have highlighted the unpredictability of AI systems when deployed at scale. For example, earlier this year, Google’s AI Overview feature falsely accused an Arizona solar company of regulatory violations and legal trouble. The business filed a lawsuit seeking $110 million in damages, arguing that the false claim caused reputational harm and lost sales. The case was widely reported across technology and legal publications and is now a reference point for insurers trying to price the risks associated with AI-driven public information tools.

Air Canada faced a different challenge after a customer service chatbot invented a bereavement discount policy and provided it to a traveller in 2022. The airline argued that the chatbot, not the company, was responsible for the mistake, but a tribunal ruled in early 2024 that companies remain liable for the behaviour of their AI systems. This ruling has since appeared in several legal and insurance industry analyses as a sign of where liability is likely to sit in future disputes.

Another incident involved the global engineering consultancy Arup, which confirmed that fraudsters used deepfaked video of senior staff during a video call to trick an employee into authorising transfers. The theft totalled around $25 million. This case, first reported by Bloomberg, has been used by cyber risk specialists to illustrate the speed and sophistication of AI-enabled financial crime.

These examples are not isolated. For example, industry reports from cyber insurers and security analysts show steep increases in AI-assisted phishing attacks, automated hacking tools, and malicious code generation. The UK’s National Cyber Security Centre has also noted that AI is lowering the barrier for less skilled criminals to produce convincing scams.

Why Insurers Are Seeking New Exclusions

Filings submitted to US state regulators show insurers requesting permission to exclude claims arising from “any actual or alleged use” of AI in a product or service. In fact, some requests are reported to go further, seeking to exclude losses connected to decisions made by AI or errors introduced by systems that incorporate generative models.

W. R. Berkley’s filing, for example, asks to exclude claims linked to AI systems embedded within company products, as well as advice or information generated by an AI tool. Chubb and Great American are seeking similar adjustments, citing the difficulty of identifying, modelling, and pricing the underlying risk.

AIG was mentioned by some insurers during the early stages of these discussions, although the company has since clarified that it is not seeking to introduce any AI-related exclusions at this time.

Some specialist insurers have already limited the types of AI risks they are willing to take on. Mosaic Insurance, which focuses on cyber risk, has confirmed that it provides cover for certain software where AI is embedded but does not offer protection for losses linked to large general purpose models such as ChatGPT or Claude.

What Industry Analysts Say About the Risk

The Geneva Association, the global insurance think tank, published a report last year warning that parts of AI risk may become “uninsurable” without improvements in transparency, auditability, and regulatory control. The report highlighted several drivers of concern, including the lack of training data visibility, unpredictable model behaviour, and the rapid adoption of AI across industries with varying levels of oversight.

It seems that Lloyd’s of London has also taken an increasingly cautious approach. For example, recent bulletins instructed underwriters to review AI exposure within cyber policies, noting that widespread model adoption may create new forms of correlated risk. Lloyd’s has been preparing for similar challenges on the cyber side for years, including the possibility that a global cloud platform outage or a major vulnerability could create simultaneous losses for thousands of clients.

In its most recent market commentary, Lloyd’s emphasised that AI introduces both upside and downside risk but noted that “high levels of dependency on a small number of models or providers” could increase the severity of a large scale incident.

Regulators and the Emerging Policy Debate

State insurance regulators in the US are now reviewing the proposed exclusions, which must be approved before they can be applied to policies. However, approval is not guaranteed, and regulators typically weigh the interests of insurers against the needs of businesses that require predictable cover to operate safely.

There is also a growing policy debate in Washington and across Europe about whether AI liability should sit with developers, deployers, or both. For example, the European Union’s AI Act, approved earlier this year, introduces new rules for high risk AI systems and could reduce some uncertainty for insurers in the longer term. The Act requires risk assessments, transparency commitments, and technical documentation for certain types of AI models, which could help underwriters understand how systems have been trained and tested.

The UK has taken a more flexible, sector based approach so far, although its regulators have expressed concerns about the speed at which AI is being adopted. The Financial Conduct Authority has already issued guidance reminding firms that they remain responsible for the outcomes of any automated decision making systems, regardless of whether those systems use AI.

Business Risk

Many organisations now use AI for customer service, marketing, content generation, fraud detection, HR screening, and operational automation. However, if insurers continue to retreat from covering AI related losses, businesses may need to rethink how they assess and manage the risks associated with these tools.

Some analysts believe that a new class of specialist AI insurance products will emerge, similar to how cyber insurance developed over the past decade. Others argue that meaningful coverage may not be possible until the industry gains far more visibility into how models work, how they are trained, and how they behave in unexpected situations.

What Does This Mean For Your Business?

Insurers are clearly confronting a technology that’s developing faster than the tools used to measure its risk. The issue is not hostility towards AI but the absence of reliable ways to model how large, general purpose systems behave. Without that visibility, insurers cannot judge how often errors might occur or how widely they might spread, which is essential for any form of cover.

Systemic exposure remains the central concern here. For example, a single flawed update or misinterpreted instruction could create thousands of identical losses at once, something the insurance market is not designed to absorb. Individual claims can be managed but really large clusters of identical failures can’t. This is why insurers are pulling back and why businesses may soon face gaps that did not exist a year ago.

The implications for UK organisations are significant. For example, many businesses already rely on generative AI for customer service, content creation, coding, and screening tasks. If insurers exclude losses linked to AI behaviour, companies may need to reassess how they deploy these systems and where responsibility sits if something goes wrong. A misstatement from a chatbot or an error introduced in a design process could leave a firm exposed without the safety net of traditional liability cover.

Developers and regulators will heavily influence what happens next. Insurers have been clear that better transparency, audit trails, and documentation would help them price risk more accurately. Regulatory frameworks, such as the EU’s AI Act, may also make high risk systems more insurable over time. The UK’s lighter, sector based approach leaves more responsibility with businesses to manage these risks proactively.

The wider picture here is that insurers, developers, regulators, and users each have a stake in how this evolves. Until risk can be measured with greater confidence, cover will remain uncertain and may become more restrictive. The next stage of AI adoption will rely as much on the ability to understand and manage these liabilities as on the technology itself.

Microsoft Launches Fara-7B, Its New On-Device AI Computer Agent

Microsoft has announced Fara-7B, a new “agentic” small language model built to run directly on a PC and carry out tasks on screen, marking a significant move towards practical AI agents that can operate computers rather than simply generate text.

What Is Fara-7B?

Fara-7B is Microsoft’s first computer-use small language model (SLM), designed to act as an on-device operator that sees the screen, understands what is visible and performs actions with the mouse and keyboard. It does not read hidden interface structures and does not rely on multiple models stitched together. Instead, Microsoft says it works in the same visual way a person would, interpreting screenshots and deciding what to click, type or scroll next.

Compact

The model has 7 billion parameters, which is small compared with leading large language models. However, Microsoft says Fara-7B delivers state-of-the-art performance for its size and is competitive with some larger systems used for browser automation. The focus on a compact model is deliberate. For example, smaller models offer lower energy requirements, faster response times and the ability to run locally, which has become increasingly important for both privacy and reliability.

Where Can You Get It?

Microsoft has positioned Fara-7B as an experimental release intended to accelerate development of practical computer-use agents. It is openly available through Microsoft Foundry and Hugging Face, can be explored through the Magentic-UI environment and will run on Copilot+ PCs using a silicon-optimised version.

Why Build A Computer-Use SLM?

Microsoft’s announcement of Fara-7B is not that surprising, given the wider trend in AI development. The industry has now moved beyond text-only chat models to models that can act, reason about their environment and automate digital tasks. This actually reflects the growing demand from businesses and users for assistants that can complete work rather than merely describe how to do it.

There is also a strategic element. For example, Microsoft has invested heavily in AI across Windows, Azure, Copilot and its device ecosystem. Building a capable agentic model that runs directly on Windows strengthens this position and gives Microsoft a competitive answer to similar tools emerging from OpenAI, Google and other major players.

By releasing the model with open weights and permissive licensing, Microsoft is also encouraging researchers and developers to experiment, build new tools and benchmark new methods. This approach has the potential to shape the direction of computer-use agents across the industry.

How Fara-7B Has Been Developed

One of the biggest challenges in creating computer-use agents is the lack of large, high-quality data showing how people interact with websites and applications. For example, a typical task might involve dozens of small actions, from locating a button to entering text in the correct field. Gathering this data manually would be too slow and expensive at the scale needed.

Microsoft says its team tackled this by creating a synthetic data pipeline built on the company’s earlier Magentic-One framework. The pipeline generates tasks from real public webpages, then uses a multi-agent system to explore each page, plan actions, carry out those actions and record every observation and step. These recordings, known as trajectories, are passed through verifier agents that confirm the tasks were completed successfully. Only verified attempts are used to train the model.

In total, Fara-7B was trained on around 145,000 trajectories containing around one million individual steps. These tasks cover e-commerce, travel, job applications, restaurant bookings, information look-ups and many other common activities. The base model, Qwen2.5-VL-7B, was selected for its strong multimodal grounding abilities and its support for long context windows, which allows Fara-7B to consider multiple screenshots and previous actions at once.
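To make the idea of a trajectory concrete, the sketch below shows one way such a training record could be represented. The field names and structure are invented for illustration; Microsoft has not published this exact schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Step:
    """One action the agent took while attempting a task."""
    screenshot_path: str   # what the agent saw at this point
    reasoning: str         # short note on why the action was chosen
    action: str            # e.g. "click", "type", "scroll", "goto"
    target: str            # element description, text to type, or URL

@dataclass
class Trajectory:
    """A full task attempt: the task, every step, and a verifier verdict."""
    task: str
    steps: List[Step] = field(default_factory=list)
    verified: bool = False  # set by a verifier agent after checking the attempt

def training_set(trajectories: List[Trajectory]) -> List[Trajectory]:
    """Keep only verified attempts, mirroring the filtering step described above."""
    return [t for t in trajectories if t.verified]
```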

How Fara-7B Works In Practice

During use, Fara-7B receives screenshots of the browser window, the task description and a history of actions. It then predicts its next move, such as clicking on a button, typing text or visiting a new URL. The model outputs a short internal reasoning message and the exact action it intends to take.
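In outline, this is an observe, decide, act loop. The sketch below is purely illustrative: the browser and model objects, the propose_action call and the action format are assumptions rather than Microsoft’s published interface.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Action:
    kind: str      # "click", "type", "goto", "scroll" or "done"
    argument: str  # coordinates, text or URL, depending on the kind

def run_task(task: str, browser, model, max_steps: int = 30) -> List[Action]:
    """Illustrative observe-decide-act loop for a screen-reading agent."""
    history: List[Action] = []
    for _ in range(max_steps):
        screenshot = browser.capture_screenshot()    # observe: pixels only
        reasoning, action = model.propose_action(    # decide: the next move
            task=task, screenshot=screenshot, history=history
        )
        print(f"Model reasoning: {reasoning}")       # inspectable trace
        if action.kind == "done":
            break
        browser.execute(action)                      # act: mouse and keyboard
        history.append(action)
    return history
```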

Mirrors Human Behaviour By Just Looking At The Screen

This is all designed to mirror human behaviour. For example, the model sees only what is on the screen and must work out what to do based on that view. This avoids the need for extra data sources and ensures the model’s decisions can be inspected and audited.

Strong Results

Evaluations published by Microsoft appear to show strong results. For example, on well-known web automation benchmarks such as WebVoyager and Online-Mind2Web, Fara-7B outperforms other models in its size range and in some cases matches or exceeds the performance of larger systems. Independent testing by Browserbase also recorded a 62 per cent success rate on WebVoyager under human verification.

What Fara-7B Can Be Used For

The current release is aimed at developers, researchers and technical users who want to explore automated web tasks. Typical examples include:

– Filling out online forms.
– Searching for information.
– Making bookings.
– Managing online accounts.
– Navigating support pages.
– Comparing product prices.
– Extracting or summarising content from websites.

These tasks reflect everyday processes that take time in workplaces. Automating them could, therefore, reduce repetitive admin, speed up routine workflows and improve consistency when handling high-volume digital tasks.

Also, the fact that the model is open weight means organisations can fine tune it or build custom versions for internal use. For example, a business could adapt it to handle specialist web portals, internal booking systems or industry-specific interfaces.
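As a rough illustration of what an open weight release allows, the snippet below loads the checkpoint with the Hugging Face transformers library. The repository id and model classes are assumptions based on the model’s Qwen2.5-VL base; the official model card should be followed for the exact loading and fine-tuning instructions.

```python
# Illustrative only: repository id and classes are assumptions, not confirmed usage.
from transformers import AutoProcessor, AutoModelForVision2Seq

model_id = "microsoft/Fara-7B"  # assumed Hugging Face repository id

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForVision2Seq.from_pretrained(
    model_id,
    device_map="auto",    # spread weights across available GPU/CPU memory
    torch_dtype="auto",   # use the precision stored in the checkpoint
)
# From here an organisation could fine-tune the weights on its own task
# trajectories (for example with parameter-efficient methods such as LoRA)
# to target internal portals or industry-specific interfaces.
```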

Who Can Use It And When?

Fara-7B is available now through Microsoft Foundry, Hugging Face and the Magentic-UI research environment. A quantised and silicon-optimised version is available for Copilot+ PCs running Windows 11, allowing early adopters to test the model directly on their devices.

However, it should be noted here that it’s not yet a consumer feature and should be used in controlled experimentation rather than in production environments. Microsoft recommends running it in a sandboxed environment where users can observe its actions and intervene if needed.

The Benefits For Business Users

Many organisations have been cautious about browser automation due to concerns about data privacy, vendor lock-in and cloud dependency. Fara-7B’s on-device design appears to directly address these issues by keeping data local. This is especially relevant for sectors where regulatory requirements restrict the movement of sensitive information.

Running the model locally also reduces latency. For example, an agent that is reading the screen and clicking through a webpage must respond quickly, and any delay can disrupt the experience. An on-device agent avoids these delays and provides more predictable performance.

Benefits For Microsoft

For Microsoft, Fara-7B essentially strengthens its position in agentic AI, supports its Windows and Copilot+ hardware strategy and provides a foundation for future systems that combine device-side reasoning with cloud-based intelligence.

Developers

For developers and researchers, the open-weight release lowers barriers to experimentation, allowing new techniques to be tested and new evaluation methods to be developed. This may accelerate progress in areas such as safe automation, grounding accuracy and long-horizon task completion.

Challenges And Criticisms

Microsoft is clear that Fara-7B remains an experimental model with limitations. It can misinterpret interfaces, struggle with unfamiliar layouts or fail partway through a complex task. Like other agents that control computers, it remains vulnerable to malicious webpages, prompt-based attacks and unpredictable site behaviour.

There are some notable governance and security questions too. For example, businesses will need to consider how to monitor and log agent actions, how credentials are managed and how to prevent incorrect or undesired operations.

That said, Microsoft has introduced several safety systems to address these risks. The model has been trained to stop at “Critical Points”, such as payment stages or permission prompts, and will refuse to proceed without confirmation. The company also notes that the model achieved an 82 per cent refusal rate on red-team tasks designed to solicit harmful behaviour.

Early commentary has also highlighted that benchmark success does not necessarily translate directly into strong real-world performance, since live websites can behave unpredictably. Developers will need to conduct extensive testing before deploying any form of autonomous web agent in operational settings.

What Does This Mean For Your Business?

Fara-7B brings the idea of practical, controllable computer-use agents much closer to everyday reality, and the implications reach far beyond its immediate research release. The model shows that meaningful on-device automation is now possible with compact architectures rather than sprawling cloud systems. That alone will interest UK businesses that want to streamline manual web-based tasks without handing sensitive data to external services. These organisations have long relied on browser-driven processes in areas such as procurement, HR, finance and customer administration, so a tool that can take on repeatable workflows locally could offer genuine operational value if it proves reliable enough.

The wider AI market is likely to view the launch as a clear signal that Microsoft intends to compete directly in the emerging space for agentic automation. Fara-7B gives the company a foothold that it controls end to end, from the hardware and operating system through to developer tools and safety frameworks. This matters in a landscape where other players have approached computer-use agents with more closed or cloud-first designs. The open-weight release also sets a tone for how Microsoft wants the community to interact with the model, and it encourages a level of scrutiny that could shape future iterations.

In Fara-7B, developers and researchers gain a flexible platform that they can adapt, test and benchmark in their own environments. The training methodology itself, built on large scale synthetic tasks, raises important questions about how best to model digital behaviour and how to ensure that agents can generalise beyond curated datasets. These questions will continue to surface as more organisations explore automation that depends on visual reasoning rather than structured APIs.

It’s likely that stakeholders across government, regulation and security will now be assessing the risks as closely as the opportunities. For example, a system capable of taking actions on a live machine introduces new oversight challenges, from governance and auditing to resilience against hostile prompts or malicious web content. Microsoft’s emphasis on safety, refusal behaviour and Critical Points is a start, although much will depend on how reliably these mechanisms perform once the model is exposed to diverse real-world environments.

The release ultimately gives the industry a clearer view of what agentic AI might look like when it is embedded directly into personal devices rather than controlled entirely in the cloud. If the technology matures, it could affect expectations about digital assistance in the workplace, reduce friction in routine operations and extend automation to tasks that currently have no clean API-based alternative. The coming months will show whether developers and early adopters can turn this experimental foundation into stable, responsible tools that benefit businesses, consumers and the wider ecosystem.

GDS Local Launched To Link National And Local Services

A new GDS Local unit has been launched to give residents simpler, consistent access to both national and local government services through a single digital system.

What the Government Has Announced

On 22 November 2025, the Department for Science, Innovation and Technology (DSIT) unveiled GDS Local, a dedicated team within the Government Digital Service (GDS) created to support councils with digital transformation. The stated aim is to help local authorities modernise services, reform long-term technology contracts, and make better use of shared data to improve everyday tasks such as managing council tax, reporting issues in a local area, applying for school places or accessing local support.

Three Main Priorities

The government says GDS Local has been set up with three core priorities, which are:

1. To help councils connect to existing national platforms including GOV.UK One Login and the GOV.UK App. These platforms already underpin central government services such as tax, passports and benefits, and the plan is that residents will eventually only need one secure account for both national and local services.

2. To drive market and procurement reform, with a clear focus on helping councils break free from restrictive long-term contracts that limit flexibility and often involve high costs for outdated systems.

3. To improve the way councils use and share anonymised data, supported by a new Government Digital and Data Hub that brings together digital and data professionals from across the public sector.

Part of “Rewire The State”

The launch actually forms part of a wider programme to “rewire the state” and address the findings of the recent State of Digital Government Review, which estimated that modernising public services could release up to £45 billion in productivity gains each year. Reports cited during the review also suggest that digital and data spending across the UK public sector remains well below international benchmarks.

Why Local Councils Are A Major Focus

Much of the UK’s recent digital modernisation has taken place at central government level. The roll-out of GOV.UK One Login, changes to HMRC’s digital services, and new online systems for benefits and health services have all progressed, yet councils have often been left to modernise in isolation. This is despite councils being responsible for many of the services people use most frequently.

Minister for Digital Government Ian Murray said this gap had persisted “for too long”, arguing that councils had not benefited from the same investment or support as central departments. Announcing the new unit, he said GDS Local would help end the “postcode lottery” for digital services and give every resident access to “modern, joined-up and reliable online services”. He described the aim as ensuring that public services “work seamlessly for people wherever they live”.

The scale of the challenge becomes clearer when looking at the underlying numbers. For example, digital spending in local government is significantly lower than the levels seen in comparable sectors internationally. Also, councils depend on ageing systems, often supplied by a small number of long-standing vendors who offer limited interoperability and hold councils in expensive, inflexible contracts. Many of these contracts are due to expire over the next decade, which the government sees as an opportunity to reshape the market and encourage more competition.

Creating A Single Account For Local And National Services

One of the most visible changes GDS Local aims to deliver is the integration of GOV.UK One Login into local services. One Login is the national secure identity system that will eventually replace dozens of separate logins across the public sector. The government argues that using this same system for councils will make services simpler for residents and more efficient for local authorities.

If fully implemented, this would allow residents to sign in to the GOV.UK App or website and access everything from council tax accounts to local housing support using the same verified identity they use for passport renewals or DVLA services. This approach is expected to reduce duplication, strengthen security, lower failure rates when people cannot remember multiple passwords, and give councils access to a modern identity system without having to build one independently.
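One Login is built on the OpenID Connect standard, so from a council’s perspective integration resembles any other OIDC sign-in. The sketch below builds a hypothetical authorisation request; the endpoint, client id, redirect URI and scopes are placeholders, and real integrations would follow the official One Login technical documentation.

```python
# Hypothetical OpenID Connect authorisation request for a council service.
# The endpoint, client id, redirect URI and scopes are placeholders.
from urllib.parse import urlencode
import secrets

AUTHORIZE_URL = "https://oidc.example.gov.uk/authorize"  # placeholder endpoint

params = {
    "response_type": "code",                    # authorisation code flow
    "client_id": "example-council-service",     # issued when the service registers
    "redirect_uri": "https://counciltax.example-council.gov.uk/callback",
    "scope": "openid email",                    # identity attributes requested
    "state": secrets.token_urlsafe(16),         # protects against CSRF
    "nonce": secrets.token_urlsafe(16),         # protects against token replay
}

login_url = f"{AUTHORIZE_URL}?{urlencode(params)}"
print(login_url)  # the resident is redirected here to sign in once, centrally
```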

Central Solutions Imposed On Councils

GDS has emphasised that this work will not involve imposing central solutions on councils. GDS Local leaders Liz Adams and Theo Blackwell said the priority is to “collaboratively extend proven platforms and expertise”, recognising the unique needs of each authority. They also stressed that councils’ own experience in designing local services will remain central to how the national platforms evolve.

Reforming Long-Term Technology Contracts

Long-standing technology contracts have been one of the biggest barriers to local digital progress. For example, many councils have been locked into multi-year agreements with a single supplier covering critical services such as revenues and benefits, social care or housing. These systems often cannot integrate easily with modern tools or data platforms, making it harder for councils to innovate or switch provider.

The government’s announcement described these arrangements as “ball and chain” contracts that “lock councils into long-term agreements with single suppliers, often paying premium prices for outdated technology”. GDS Local has been tasked with giving councils more control, increasing competition, and helping authorities choose systems that support modern digital standards.

This work will be carried out with the Local Government Association (LGA) and the Ministry for Housing, Communities and Local Government (MHCLG). The LGA has long argued that councils need more flexibility and more competitive procurement options. Its Public Service Reform and Innovation Committee chair, Councillor Dan Swords, welcomed the move and said the new unit offered “a fantastic opportunity to accelerate the pace of transformation”, making services “more accessible, efficient and tailored to local need”.

Improving How Councils Use and Share Data

Alongside GDS Local, the government has also launched the Government Digital and Data Hub, which is a central online platform for digital and data professionals across the public sector. The hub brings together staff from central government, councils, the NHS and other public bodies, offering training, career guidance, resources and a network to share expertise.

One goal of the hub is to help councils share anonymised data on issues such as homelessness, social care demand and environmental trends. The intention is to help authorities learn from one another’s approaches, scale innovation that works, and identify emerging issues earlier. GDS argues that shared learning and consistent data practices can help reduce duplication and improve service planning across regions.

Liverpool City Region As An Early Partner

Liverpool City Region has been closely involved in the early stages of GDS Local and was chosen as the location for the national launch. The region has previously developed a Community Charter on Data and AI, led by local residents, to set clear principles for responsible data use. It has also experimented with data-driven projects through initiatives such as its AI for Good programme and the Civic Data Cooperative.

Councillor Liam Robinson, the region’s Cabinet Member for Innovation, described GDS Local as “an important step forward” and said the region’s recent work showed how data and technology could be used to tackle real-world challenges such as improving health outcomes or addressing misinformation.

The launch event also highlighted the upcoming Local Government Innovation Hackathon in Birmingham, taking place on 26–27 November. The event will bring together councils, designers, technologists and voluntary organisations to explore how digital tools can help address homelessness and rough sleeping.

What Comes Next?

Councils are now being invited to register interest in working with GDS Local through discovery projects, data-sharing initiatives and early connections with GOV.UK One Login. More detailed plans are expected over the coming months as DSIT and GDS set out the next steps for integration, procurement reform and data standards.

The unit’s success will depend on how widely councils engage with it, how effectively central and local systems can be joined up, and how quickly legacy barriers can be removed.

What Does This Mean For Your Business?

All of this seems to point to a more consistent experience for residents, but the scale of change involved will test how well central and local government can work together. Councils will, no doubt, need sustained support to unwind their legacy systems, adapt to common identity standards and take advantage of shared data platforms. Some authorities are already well placed to do this, while others face steeper challenges due to funding pressures, outdated infrastructure or complex service demands. The success of GDS Local will rely on whether these differences can be narrowed rather than deepened.

The implications stretch beyond councils. For example, UK businesses that depend on timely licensing decisions, planning processes, environmental checks or local regulatory services could benefit from faster and more predictable digital systems. More consistent use of One Login may reduce administrative friction for organisations interacting with multiple authorities, and clearer data standards may help suppliers build tools that work across regions rather than creating bespoke versions for every council. There are also opportunities for technology firms to compete in a reformed procurement environment where long-term lock-in no longer dominates the market.

Residents, meanwhile, may stand to gain from simpler access to core services and a clearer sense of what to expect from their local authority regardless of where they live. Improved data sharing may also help councils respond earlier to really serious issues such as homelessness, care demand or environmental risks, which could influence wider public services including health and emergency response.

The coming months will show how quickly GDS Local can turn its priorities into practical progress. Much will depend on how well central platforms can adapt to local needs and how effectively councils can reshape contracting arrangements that have been entrenched for years. The foundations laid through this launch should give the programme a clear direction, although the real measure will be whether residents and organisations begin to notice services becoming easier, faster and more consistent across the country.

Microsoft Copilot To Leave WhatsApp In January 2026

Microsoft has announced that its Copilot chatbot will stop working on WhatsApp on 15 January 2026 after WhatsApp introduces its new restrictions on third party AI assistants.

Why Copilot Was On WhatsApp In The First Place

Copilot was launched on WhatsApp in late 2024 as part of Microsoft’s wider effort to meet users inside the apps they already use each day. It allows people to talk to Copilot through a normal WhatsApp chat thread, asking questions, requesting explanations, drafting messages, or generating ideas. Microsoft says “millions of people” have used the WhatsApp integration since launch, showing how messaging apps have become a common first step into generative AI for mainstream users.

Operated Through The WhatsApp Business API

The chatbot operated through the WhatsApp Business API, which is the system that lets companies automate conversations with customers. Copilot’s version was “unauthenticated”, meaning users did not sign in with a Microsoft account. This made the experience fast and simple, although it meant the service was separated from users’ main Copilot profiles on Microsoft platforms.
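For context, the cloud-hosted version of the WhatsApp Business API is a plain HTTP interface through Meta’s Graph API: a business posts a JSON message to a phone number endpoint. The sketch below shows the general shape of a customer-service text send; the phone number id and token are placeholders, and this is not how Copilot itself was implemented.

```python
# Illustrative WhatsApp Business (Cloud) API call; the id and token are placeholders.
import requests

PHONE_NUMBER_ID = "123456789012345"   # placeholder business phone number id
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"    # placeholder access token

response = requests.post(
    f"https://graph.facebook.com/v21.0/{PHONE_NUMBER_ID}/messages",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={
        "messaging_product": "whatsapp",
        "to": "447700900123",                        # recipient in international format
        "type": "text",
        "text": {"body": "Your order has shipped."},
    },
    timeout=10,
)
print(response.status_code, response.json())
```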

Why It’s Being Removed

The removal of Copilot from WhatsApp appears to be due entirely to changes in WhatsApp’s platform rules. For example, in October 2025, WhatsApp updated its Business API terms to prohibit general purpose AI chatbots from running on the platform. These rules apply to assistants capable of broad, open ended conversation rather than bots created to support specific customer service tasks.

WhatsApp said the Business API should remain focused on helping organisations serve customers, i.e., providing shipping updates, booking information, or answers to common questions. The company made clear that it no longer intends WhatsApp to act as a distribution channel for large AI assistants created by external providers.

Several Factors, Say Industry Analysts

Industry analysts have linked the decision to several factors. For example, these include the cost of handling high volume AI traffic on WhatsApp’s infrastructure, Meta’s growing focus on consolidating data inside its own ecosystem, and the introduction of Meta AI, the company’s consumer facing assistant that is being deployed across WhatsApp, Instagram, and Messenger. Meta AI is expected to remain the only general purpose assistant users can access directly inside WhatsApp once the policy takes effect.

How The Change Will Happen

Microsoft has confirmed that Copilot will remain accessible on WhatsApp until 15 January. After that date, the chatbot will stop responding and users will not be able to send new prompts through the app.

Microsoft has also warned that chat history will not transfer to any other Copilot platform. Because the WhatsApp integration did not use Microsoft’s account authentication, there is no technical link between a user’s WhatsApp conversation and their profile on the Copilot app or website. Microsoft therefore recommends exporting chats manually using WhatsApp’s built in export tool before the deadline if users want to keep a record of past conversations.

OpenAI has taken a similar approach with ChatGPT on WhatsApp, although it has said that some users may be able to link previous chats to their ChatGPT history if they used a version tied to their account. This is not an option for Copilot due to the design of the original integration.

Where Users Can Access Copilot Instead

Microsoft is directing users to three main platforms where Copilot will continue to be available, which are:

1. The Copilot mobile app on iOS and Android.

2. Copilot on the web at copilot.microsoft.com.

3. Copilot on Windows, built into the operating system.

These platforms support all of the core features users are already familiar with and introduce additional tools that were not available in WhatsApp. These include Copilot Voice for spoken queries, Copilot Vision for image understanding, and Mico, a companion style presence that supports daily tasks. Microsoft says these will form the central experience for Copilot going forward.

The Wider Effect On AI Chatbots

WhatsApp is now reported to be used by more than three billion people globally and has become an important distribution route for companies deploying AI driven tools. The updated rules now mean that all general purpose AI assistants will be removed from the platform, including ChatGPT and Perplexity, which were introduced earlier in 2025. Each provider has begun notifying users and guiding them towards their own mobile apps and websites.

OpenAI previously said more than 50 million people had used ChatGPT through WhatsApp, showing how significant the channel had become for AI adoption. Microsoft has not released its own usage figures beyond confirming “millions” of Copilot interactions on WhatsApp since launch.

Commentary from industry analysts notes that the update will reshape how external AI companies can reach users inside Meta’s ecosystem. It also creates a clearer distinction between approved business automation, which can continue, and broad AI assistants, which cannot operate inside WhatsApp under the new rules.

What The Policy Change Means For AI Developers

Developers that relied on the WhatsApp Business API to distribute general purpose assistants will no longer be able to use that channel. Companies that built workflows around WhatsApp based assistants now need to redesign their approach to comply with the updated rules. Many WhatsApp integration providers have already issued technical advice to help organisations check whether their existing use cases fall under the new restrictions or remain permitted under the “customer support” classification.

Microsoft’s public response has been measured. For example, its official statement states that it is “proud of the impact” Copilot has had on WhatsApp and that it is now focused on ensuring a smooth transition for users. The company has avoided any direct criticism of WhatsApp and has instead highlighted the added functionality available in its own apps, particularly multimodal features that did not fit within WhatsApp’s interface.

What Does This Mean For Your Business?

This development shows how quickly access to mainstream AI tools can change when platform rules are updated, and it reinforces how much control large messaging platforms now have over which assistants users can reach. For UK businesses, the change means that any informal use of Copilot or ChatGPT through WhatsApp will now need to move to authenticated apps or web based tools, which may offer clearer security controls even if the transition disrupts established habits. Organisations that had started exploring AI driven workflows inside WhatsApp must check whether their implementations fall within the permitted customer support category or whether they now count as general purpose assistants that need reworking or relocating.

AI developers face tighter boundaries on where and how their models can operate, particularly when relying on platforms that sit between them and their users. This will encourage providers to invest more heavily in their own apps and operating system integrations, where they retain full control over authentication, data handling, and feature development. Users who previously relied on WhatsApp as a simple way to test or adopt generative AI will now need to shift their expectations to standalone tools that offer richer functionality but require more deliberate use.

This change also highlights how Meta is positioning its own assistant as the primary option inside WhatsApp, creating a more contained environment for general purpose AI. This will influence how consumers discover and evaluate different AI products, and it will shape how competing providers reach audiences on messaging platforms that have become central to everyday communication.

Company Check : How Bending Spoons Hit an $11 Billion Valuation in Just 48 Hours

Bending Spoons has completed one of the most dramatic 48-hour periods in recent European tech history after announcing an agreement to acquire AOL and revealing a $270 million funding round that has pushed its valuation from $2.55 billion to $11 billion.

What is ‘Bending Spoons’?

Bending Spoons is a Milan based technology company that has built its business by acquiring well known but often stagnating digital brands and turning them into profitable, streamlined operations. Founded in 2013, the company initially developed its own mobile apps before switching its focus to buying established products with large user bases and restructuring them in pursuit of long term profitability.

Its portfolio already includes Evernote, Meetup, WeTransfer, Harvest, Komoot, Brightcove, Mosaic Group and StreamYard. It has also agreed deals for Vimeo and now AOL, with both transactions expected to complete by the end of 2025 subject to regulatory approvals. Bending Spoons now has more than 300 million monthly active users across its products, supported by a growing workforce in Milan, London, Madrid and Warsaw.

Hold Forever – Not Just Cut Costs and Sell On

The company’s approach has attracted attention because it combines elements of private equity restructuring with a long term, “hold forever” strategy. For example, rather than buying companies, cutting costs and selling them on, Bending Spoons actually says it aims to own and operate each acquisition indefinitely. For founders seeking stability or for investors looking to offload ageing assets, that approach is becoming increasingly attractive.

Why Bending Spoons Wanted AOL

AOL, once one of the most recognisable names in the early internet era, has changed hands several times over the past two decades after being owned by Time Warner, Verizon and most recently Apollo backed Yahoo. Despite its reduced profile, AOL remains one of the world’s most used email services, with around 8 million daily and 30 million monthly active users.

It’s that user base that’s a key part of Bending Spoons’ rationale. In announcing the deal, chief executive Luca Ferrari described AOL as “an iconic, beloved business that has stood the test of time” and said the company sees “unexpressed potential” in the brand. The plan is to invest heavily in the core email service, modernise the underlying technology, improve product experience and explore new revenue opportunities.

Exact Amount Not Disclosed

Although exact financial terms have not been disclosed, multiple reports have placed the acquisition price at roughly $1.4 to $1.5 billion. Bending Spoons has secured a $2.8 billion debt financing package from a group of banks to fund the AOL deal, support further research and development and provide capacity for future acquisitions.

Scale and Visibility

It could be said that the AOL purchase really stands out due to its scale and visibility. For example, whereas earlier Bending Spoons acquisitions involved smaller, often niche brands, AOL’s name recognition and large audience give the company a new level of global prominence. That said, it also presents operational challenges, including the need to migrate legacy systems, protect long established user data and rebuild a product that has not seen major improvements for several years.

The Funding Round That Changed The Company’s Trajectory

Less than two days after confirming the AOL deal, Bending Spoons announced a new $270 million fundraising round led by major institutional investors including T Rowe Price, Baillie Gifford, Cox Enterprises, Durable Capital Partners and Fidelity. A further $440 million changed hands in secondary transactions as existing shareholders sold stock.

Now A ‘Decacorn’

The raise marks one of the largest late stage private funding events in Europe this year and pushes Bending Spoons into the small group of European “decacorns”, companies valued at more than $10 billion. The company’s valuation has now risen from $2.55 billion in early 2024 to $11 billion in late 2025, a dramatic increase driven by its acquisition strategy and its ability to rapidly restructure and monetise digital properties.

Investors Confident in the Bending Spoons Operating Model

Investor appetite appears to reflect real confidence in the company’s operating model. It seems that, while many venture backed startups have struggled to raise funds in the current environment, Bending Spoons is positioning itself as a consolidator of mature tech assets rather than a speculative bet on early stage growth. The strategy offers predictable revenue, large user bases and the opportunity to centralise functions such as engineering, marketing and finance across dozens of brands.

How Bending Spoons Creates Growth (Where Others Can’t)

The company’s approach really involves three main elements, which are:

1. Buying underperforming brands.

2. Cutting costs and restructuring operations.

3. Increasing revenue through pricing changes or new paid features.

Its acquisition of Evernote illustrates the pattern. For example, after purchasing the note taking service in early 2023, Bending Spoons reduced headcount, restructured teams and introduced stricter limits on free accounts, ultimately pushing more users towards paid plans.

Similar changes followed at Filmic, Meetup and WeTransfer. In some cases, restructuring has been controversial, with criticism over layoffs and alterations to product features that long standing users had taken for granted. The company argues that without these changes, many of the businesses it acquires would continue to stagnate or decline.

The Benefits of Scale

For Bending Spoons, the benefit lies in scale. For example, by centralising common functions, it avoids duplicating costs across its portfolio and can invest selectively in the features and technologies it believes each brand needs. It is also increasing its use of artificial intelligence to streamline workflows, improve content recommendations and modernise systems that are many years old.

What The AOL Deal Means For Customers

Millions of individuals and thousands of small businesses still rely on AOL email accounts. Many use the service because of its familiarity or because it is tied to old workflows, business cards or customer communications. Those users are likely to see product changes over time, particularly if Bending Spoons introduces new pricing tiers or imposes limits on free accounts as it has done elsewhere.

Bending Spoons insists that it will invest in improving AOL’s technology, user experience and reliability. For business users, that could mean better security, faster email delivery, improved spam filtering and more intuitive interfaces. The challenge will be ensuring that changes do not disrupt long standing processes for individuals and organisations with limited capacity to adapt.

The acquisition also raises questions around customer service, data migration and localisation. For example, previous restructurings at other brands have seen support teams reduced or reorganised. However, AOL’s scale may require a different approach, particularly given the sensitivity of email data and the wide demographic range of its user base.

Impact on Competitors and the Wider Market

The size of the AOL deal and the surge in Bending Spoons’ valuation will, no doubt, be closely watched by other firms in the “venture zombie” market. For example, companies such as Constellation Software, SaaS.group, Tiny and Curious also acquire mature software products, but few operate at Bending Spoons’ scale or rely so heavily on debt financing to accelerate expansion.

The AOL acquisition may signal that large, consumer facing internet brands are now becoming targets for permanent capital acquirers that traditionally focused on smaller SaaS companies. It could also encourage more venture backed companies to consider sales to operators that prioritise profitability over hypergrowth.

For traditional venture capital, however, this trend poses a bit of a challenge. For example, many older software startups with moderate revenue have struggled to find conventional exits, and the rise of permanent holders like Bending Spoons may reshape expectations around valuation, return timelines and portfolio strategy.

Challenges and Scrutiny Ahead

Despite its rising profile, Bending Spoons faces several risks. Integrating AOL’s ageing infrastructure with its modern technology stack will require significant investment and presents operational complexity. The company also carries a growing debt load, creating pressure to turn newly acquired assets into profitable units quickly.

Regulators may also take a closer interest as Bending Spoons gains control of a wider set of online services used by millions of consumers and businesses. Although the company insists it plans to invest for the long term, the combination of aggressive restructuring, centralised ownership and cost reduction has attracted criticism from former employees and some existing users of the brands it has acquired.

For now, the company has signalled that it will continue its acquisition driven expansion, supported by fresh investment and one of the largest debt packages raised by any private European tech firm this year. Whether this model can scale across a portfolio that increasingly includes household names is a question that will be closely followed by customers, competitors and the broader tech industry in the months ahead.

What Does This Mean For Your Business?

The events of the past two days leave Bending Spoons operating from a position of unusual strength, although every part of that strength will now be tested. The company has shown that it can convince major investors to back a long term acquisition model at a moment when most late stage funding is slowing. The AOL deal demonstrates that it is no longer targeting only niche or neglected software brands but is now prepared to absorb some of the internet’s most recognisable properties. The funding round reinforces that change and gives it the financial capability to keep expanding while it works through the practical realities of integrating a very diverse set of products.

The implications are significant for customers, regulators and the wider market. For example, AOL’s millions of email users will want clarity on how the service will evolve, particularly once the familiar platform begins to adopt the pricing structures and technical overhaul seen across other Bending Spoons properties. Also, organisations that rely on AOL for communication or advertising will be looking for stability rather than disruption, and the company will need to show that its restructuring methods can be applied without undermining long standing business workflows. Regulators too will examine how the acquisition affects data protection, security and competition across email and online content, especially as a single owner becomes responsible for a portfolio that now touches well over 300 million people each month.

There are also some clear consequences for the investment landscape. For example, competitors in the “venture zombie” space now face a consolidator with access to capital on a scale they may struggle to match. Venture funds holding mature but slow growth software companies could revisit their exit expectations, particularly if valuations begin to adjust to reflect the prices being paid by permanent owners. For UK businesses, the story is a reminder that established digital tools used daily in operations, marketing or customer communication can change hands quickly and be reshaped in ways that require preparation. Companies relying on services such as Evernote, WeTransfer or now AOL may need to plan for price changes, feature adjustments and new account tiers, even as potential improvements in security and performance start to appear.

The central question here is whether Bending Spoons can really apply its efficiency focused model at the scale implied by its expanding portfolio. Success would strengthen its claim that many mature digital brands still hold substantial untapped value. However, any missteps would fuel criticism that aggressive restructuring and rapid integration place too much pressure on complex, widely used services. The next year will, therefore, offer a clearer view of whether the company’s hold forever strategy can deliver the long term gains it promises across brands as large and visible as AOL.

Security Stop-Press: Shadow AI Breaches Expected To Hit 40 Percent Of Enterprises By 2030

Gartner says 40 percent of enterprises will face a shadow AI related breach by 2030 as unapproved and unmanaged AI tools continue to spread across workplaces.

Shadow AI covers any AI system or workflow used without formal oversight, such as employees putting company data into public models or teams deploying internal tools with no security review. Gartner notes that rapid adoption of generative AI has already created visibility gaps in many organisations.

The firm points to risks including accidental data leaks, unsafe integrations, unmanaged API access, and insecure model deployment. Growing AI sprawl, fuelled by low code platforms and consumer AI services, is making it easier for staff to build or adopt tools that sit entirely outside IT governance.

Gartner places the warning within its AI TRiSM framework, arguing that many organisations still lack basic inventories of where AI is used and what data models can reach.

Clear AI governance, approved platforms, strict data handling rules, and active monitoring of AI use across the business can help reduce exposure to these emerging risks.
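As one small, illustrative example of active monitoring, a security team could flag outbound requests to AI services that are not on an approved list. The domains and log format below are invented for the sketch.

```python
# Illustrative shadow-AI check: flag proxy log entries for unapproved AI domains.
# The approved list and the log format are invented for this sketch.
APPROVED_AI_DOMAINS = {"copilot.microsoft.com"}   # tools sanctioned by IT
KNOWN_AI_DOMAINS = {
    "copilot.microsoft.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def flag_shadow_ai(proxy_log_lines):
    """Yield (user, domain) pairs where staff reached an unapproved AI service."""
    for line in proxy_log_lines:
        user, domain = line.strip().split(",")    # e.g. "jsmith,claude.ai"
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
            yield user, domain

sample_log = ["jsmith,claude.ai", "afox,copilot.microsoft.com"]
for user, domain in flag_shadow_ai(sample_log):
    print(f"Review needed: {user} used unapproved AI service {domain}")
```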

Each week we bring you the latest tech news and tips that may relate to your business, re-written in a techy-free style.
