UK Plans Major Expansion Of Facial Recognition

The government has set out plans to expand the use of facial recognition and other biometrics across UK policing, describing it as the biggest breakthrough for catching criminals since DNA matching.

A National Strategy For Biometrics

The Home Office has launched a ten week consultation to establish a new legal framework covering all police use of facial recognition and biometric technologies. This would replace the current mix of case law and guidance with a single, structured system that applies consistently across forces.

The plan includes creating a dedicated regulator overseeing facial recognition, fingerprints and emerging biometric tools. The Home Office says a single body would provide clarity and help forces apply safeguards more confidently. It also proposes a national facial matching service, allowing officers to run searches against millions of custody images through one central system.

Breakthrough

Launching the consultation, Crime and Policing Minister Sarah Jones said, “Facial recognition is the biggest breakthrough for catching criminals since DNA matching,” adding, “We will expand its use so that forces can put more criminals behind bars and tackle crime in their communities.” Her view reflects the government’s belief that existing deployments have already demonstrated clear operational value, particularly in identifying violent offenders.

Why Now?

The push for expansion comes as police forces face increasing pressure to track offenders across regions and to manage high volumes of video supplied by retailers, businesses and members of the public. Also, recent cases of prisoners being released in error, or disappearing before arrest, have highlighted the difficulty of locating suspects quickly without technological support.

Public Tolerance For Certain Uses

Government research published alongside the consultation appears to suggest high public tolerance for certain uses. For example, according to the government’s figures, 97 per cent of respondents said retrospective facial recognition is at least sometimes acceptable, while 88 per cent said the same about live facial recognition for locating suspects. Ministers may see this as support for building a clearer framework, although rights groups argue that acceptability is dependent on strict safeguards and transparency.

The Need For Oversight

That said, independent accuracy testing has reinforced the need for stronger oversight. For example, the National Physical Laboratory found that earlier systems used in UK policing produced significantly higher false alert rates for Black and Asian people. The Home Office now acknowledges these disparities, noting that updated systems and reviews have since been introduced. Even so, the findings have shaped calls for clearer legal boundaries before expansion proceeds.

When These Changes Might Take Effect

The consultation runs through early 2026, after which ministers will draft legislation for parliamentary scrutiny. The Home Office estimates that introducing a new legal regime, establishing the regulator and deploying the national facial matching service will take around two years. During that period, existing deployments will continue under current guidance.

Police forces already using live facial recognition, including the Metropolitan Police and South Wales Police, will continue targeted deployments. Trials using mobile facial recognition vans across multiple forces are also expected to continue, and the national facial matching service is scheduled for testing in 2026.

How The Technology Works Across UK Forces Today

Police currently rely on three distinct facial recognition tools, each supporting different operational needs:

1. Retrospective facial recognition. Used during investigations, this compares still images from CCTV, doorbell cameras, mobile footage or social media against custody images. It is the most widely used form, and police say it speeds up identification in cases where investigators have a clear image but no confirmed identity.

2. Live facial recognition. These systems scan faces in real time as people pass a camera. The software compares each face to a watchlist of individuals wanted for specific offences or subject to court conditions. When a possible match arises, officers decide whether to stop the person. Deployments are usually short, targeted and focused on high footfall areas.

3. Operator initiated facial recognition. This mobile app allows officers to check identity during encounters by comparing a photo to custody images, avoiding unnecessary trips to a station solely for identification.

Police leaders say these tools allow forces to locate wanted individuals more efficiently. Lindsey Chiswick, the National Police Chiefs’ Council lead for facial recognition, says the technology “makes officers more effective and delivers more arrests than would otherwise be possible”, adding that “public trust is vital, and we want to build on that by listening to people’s views”.

Legal And Ethical Issues

Legal concerns have followed facial recognition since its earliest deployments, and several landmark rulings continue to shape how police use the technology. The 2020 Court of Appeal ruling in the Ed Bridges case remains the most significant legal challenge to date. In that case, the court found that South Wales Police’s early use of live facial recognition breached privacy rights because of inadequate safeguards, incomplete assessments and insufficient checks on whether the system discriminated against particular groups.

Also, the Equality and Human Rights Commission has criticised aspects of earlier Metropolitan Police deployments, saying forces must demonstrate necessity and proportionality each time. The Information Commissioner’s Office has also warned forces to ensure accuracy and justify the retention of custody images belonging to people never convicted of an offence.

Accuracy Problems

Accuracy remains central to the ethical debate. For example, the National Physical Laboratory found that in one system previously used operationally, Asian faces were wrongly flagged around four per cent of the time and Black faces around five and a half per cent, compared with around 0.04 per cent for white faces. For Black women, false alerts rose to nearly ten per cent. These figures show how demographic disparities can emerge in real deployments and highlight the importance of system configuration.
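
To put those percentages in context, the rough sketch below shows how reported false alert rates translate into expected numbers of wrongful flags. The rates are the figures cited above; the scan volume is a purely hypothetical number chosen for illustration, and real outcomes depend heavily on system configuration, thresholds and watchlist size.

```python
# Back-of-the-envelope sketch only: the rates are the NPL figures quoted above,
# the scan volume is hypothetical, and real deployments vary with configuration.

false_alert_rates = {
    "White": 0.0004,        # around 0.04 per cent
    "Asian": 0.04,          # around 4 per cent
    "Black": 0.055,         # around 5.5 per cent
    "Black women": 0.098,   # nearly 10 per cent
}

hypothetical_scans = 10_000  # illustrative number of faces scanned in a deployment

for group, rate in false_alert_rates.items():
    expected = rate * hypothetical_scans
    print(f"{group}: roughly {expected:.0f} false alerts per {hypothetical_scans:,} scans")
```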

Rights groups warn that these issues could lead to wrongful stops or reinforce existing inequalities. They also argue that routine scanning in public spaces risks creating a sense of constant surveillance that may influence how people move or gather. Liberty has said it is “disappointed” that expansion is being planned before the risks are fully resolved, while Big Brother Watch has urged a pause during the consultation.

Support Strong From Police

It’s worth noting here that, perhaps not surprisingly, support within policing remains strong. For example, former counter terror policing lead Neil Basu says live facial recognition is “a massive step forward for law enforcement, a digital 21st century step change in the tradition of fingerprint and DNA technology”, while noting that it “will still require proper legal safeguards and oversight by the surveillance commissioner”. Police forces repeatedly stress that every alert is reviewed by an officer rather than acted on automatically.

Industry Supports Structured Rollout

Industry organisations also appear to support a structured rollout. For example, Sue Daley, Director of Tech and Innovation at techUK, says “regulation clarity, certainty and consistency on how this technology will be used will be paramount to establish trust and long term public support”. The technology sector argues that clear rules will help build confidence both inside and outside policing.

Charities

Charities focused on vulnerable people have also highlighted some potential benefits. For example, Susannah Drury of Missing People says facial recognition “could help to ensure more missing people are found, protecting people from serious harm”, though she also stresses the need to examine ethical implications before expanding use.

That said, civil liberties groups continue to call for stronger limits, arguing that wider deployment risks normalising biometric scanning in everyday spaces unless strict rules are imposed regarding watchlists, retention and operational necessity.

Areas For Further Debate

The proposals raise questions that will remain live throughout the consultation period. For example, these include how forces will define and maintain watchlists, how the new regulator will enforce safeguards, what thresholds will apply before live facial recognition can be deployed, and how demographic accuracy will be monitored over time. Businesses that operate high footfall environments, such as shopping centres and transport hubs, are also likely to face questions about how their video systems might interact with police requests as adoption increases.

What Does This Mean For Your Business?

It seems that, following this announcement from the government, policymakers now face a moment where practical policing needs, public confidence and legal safeguards must be aligned in a way that has not been achieved before. The consultation sets out an ambition for national consistency and clearer rules, although the evidence presented across this debate shows that accuracy, oversight and transparency will determine whether expansion strengthens trust or undermines it. The range of views from policing, civil liberties groups, industry and charities illustrates how differently this technology is experienced, and why the government will need to resolve issues that sit well beyond technical capability alone.

The implications extend into policing culture, investigative practice and public space management, which will all look different if facial recognition becomes a mainstream tool. Forces anticipate faster identifications, clearer procedures and more reliable ways to locate individuals who pose a genuine risk. Civil society groups, by contrast, point to the potential for overreach unless firm limits are embedded in law. These competing priorities will shape how the regulator operates and how the Home Office interprets proportionality in real deployments.

Businesses also sit at the centre of this discussion because they capture and provide a significant volume of the video footage used in retrospective searches. Retailers, transport hubs and major venues may face new expectations about how they store, secure and share images, and these responsibilities may grow as facial matching becomes more accurate and more widely used. Clearer rules could help organisations understand how to cooperate with investigations without exposing themselves to unnecessary compliance risks, particularly around data protection and equality duties.

The wider public interest lies in how these decisions affect everyday life. Public attitudes will depend on whether safeguards are visible, whether wrongful identifications are prevented, and whether live deployments remain tightly focused rather than becoming a routine feature of public spaces. A national framework could provide that reassurance if it genuinely addresses the concerns raised during testing and legal review. The coming months will show how far the government is prepared to go in defining those boundaries and whether the final model satisfies the mix of operational urgency and ethical caution that has defined this debate so far.

OpenAI Trains AI Models To Confess When They Break The Rules

OpenAI has developed a new research technique that trains advanced AI models to admit when they ignored instructions, took unintended shortcuts, or quietly breached the rules they were given.

A New Approach To Detecting Hidden Misbehaviour

OpenAI’s latest research introduces what it calls a “confession”, which is a second output that sits alongside the model’s main answer. The main answer is trained in the usual way, scoring well when it is helpful, correct, safe, compliant, and aligned with user expectations. However, the confession is different, i.e., it is judged only on honesty, and nothing the model says in this second output can negatively affect the reward for the first.

The idea appears to be fairly simple. For example, when an AI system generates an answer, it is often balancing many objectives at once, which can push it towards unintended behaviour. An answer might look polished from the outside, but internally the model may have relied on a shortcut or rewarded itself for the wrong thing. The new technique tries to surface these hidden steps. As the blog puts it, the confession is meant to provide “a full account of the model’s compliance with the letter and spirit of its policies and instructions”.
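
To make that separation concrete, here is a minimal, hypothetical sketch of the two-channel reward idea described above. It is not OpenAI’s code; the weights and the scoring fields are assumptions chosen purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class JudgedScores:
    helpfulness: float   # how well the main answer served the user (judged)
    compliance: float    # how well the main answer followed policy (judged)
    honesty: float       # how truthful the confession is about what happened (judged)

def rewards(scores: JudgedScores) -> tuple[float, float]:
    # Main channel: the usual blend of objectives (weights are illustrative).
    main_reward = 0.5 * scores.helpfulness + 0.5 * scores.compliance
    # Confession channel: honesty only. Admitting a violation here never
    # reduces the main reward, so there is no incentive to hide it.
    confession_reward = scores.honesty
    return main_reward, confession_reward

# An answer that quietly broke a rule but confessed truthfully still earns the
# full confession reward, while the main reward reflects the rule break.
print(rewards(JudgedScores(helpfulness=0.9, compliance=0.2, honesty=1.0)))
```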

The Model Must Explain Things

To produce a confession report, the model is asked to explain three things. It must list the instructions and constraints it believes it should have followed, assess whether its answer actually met each of them, and note any points where the instructions felt ambiguous or unclear. The aim is not to penalise the model for mistakes, but to encourage it to describe what really happened.
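
The research does not publish a schema, but a confession report along those lines could be represented with a structure like the hypothetical one below, covering the three elements described.

```python
from dataclasses import dataclass, field

@dataclass
class ConstraintCheck:
    instruction: str       # an instruction or constraint the model believes applied
    met: bool              # whether the model judges its answer actually satisfied it
    note: str = ""         # brief explanation of what really happened

@dataclass
class ConfessionReport:
    checks: list[ConstraintCheck] = field(default_factory=list)
    ambiguities: list[str] = field(default_factory=list)  # points where instructions felt unclear

# Hypothetical example, loosely based on the chef letter example discussed later.
report = ConfessionReport(
    checks=[ConstraintCheck(
        instruction="Alternate sentence lengths between twelve and eighteen words",
        met=False,
        note="Lengths were labelled but several sentences missed the exact counts",
    )],
    ambiguities=["Whether the sign-off counts as a sentence was not specified"],
)
print(report)
```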

Why OpenAI Believes Confessions Work

According to the research team, many types of unwanted behaviour appear because the combined reward signal used in reinforcement learning mixes too many objectives. For example, earlier studies found that some datasets rewarded confident but incorrect answers more than honest uncertainty. Other research showed that models could become overly agreeable when trained too heavily on preference data, or exploit loopholes in reward systems, known as reward hacking.

OpenAI argues that by treating honesty as a completely separate objective, confessions give the model a reason to admit behaviour that would normally go unnoticed. The blog states that honesty emerges because providing a truthful, evidence supported explanation is easier for the model than maintaining a consistent fabricated account that could pass scrutiny from a judging model.

Testing The Method Under Stress

To examine the approach, OpenAI says it trained a version of GPT-5 Thinking to produce confessions, then evaluated it on a set of adversarial stress tests. The tests were specifically designed to provoke misbehaviour such as hallucinations, instruction violations, reward hacking, or scheming.

Across these evaluations, OpenAI reports that the probability of a false negative, where the model broke instructions and then failed to confess, averaged 4.4 per cent. OpenAI says that, in many scenarios, the confession channel actually performed better still. When the model engaged in hacking behaviour, for example, it confessed to breaking the rules in nearly ninety per cent of cases. In tests involving hallucinations and instruction following, most confessions correctly identified non compliance and explained why it happened.

Performance Variable

OpenAI reports that performance varied across task types, with an average confession rate of about 74 per cent when the model had misbehaved. In four of the twelve tests, the confession rate exceeded 90 per cent, while in two others it fell to around 50 per cent. OpenAI says that these differences reflect the complexity of certain tasks and the difficulty of judging compliance when instructions are ambiguous.

Example

One example published on OpenAI’s blog illustrates how the system behaves in practice. A model was asked (somewhat weirdly) to write a letter from the perspective of a chef taking up a new sport, with the added instruction to alternate sentence lengths between twelve and eighteen words. The main answer attempted to follow this rule by adding indicators such as “(12)” and “(18)” at the end of each sentence, although the actual counts were not always correct. The confession then reviewed the output and acknowledged that, while it had tried to meet the requirement, it had not achieved the precise word lengths requested.
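
As a rough illustration of the kind of check that confession is describing, the short sketch below counts sentence lengths and tests whether they alternate between twelve and eighteen words. Splitting on full stops is a simplification, and the sample letter is invented for the example.

```python
def sentence_lengths(text: str) -> list[int]:
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return [len(s.split()) for s in sentences]

def alternates_12_18(lengths: list[int]) -> bool:
    expected = [12 if i % 2 == 0 else 18 for i in range(len(lengths))]
    return lengths == expected

letter = (
    "Today I traded my whisk for a tennis racquet and I am thrilled. "
    "The footwork feels like plating a dish under pressure during a very busy dinner service each night."
)
lengths = sentence_lengths(letter)
# Like the blog example, the attempt is close but not exact, which is what an
# honest confession would report.
print(lengths, alternates_12_18(lengths))
```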

Experiments With Deliberately Weak Reward Signals

OpenAI also examined what happens when both the main reward signal and the confession reward signal are judged by a weaker model that can be easily exploited. In this setup, the main answer gradually learned how to satisfy the weak judge, producing responses that appeared to score highly while moving further away from genuine accuracy.

The confession channel behaved very differently. Despite being assessed by the same weak judge, its honesty increased as training progressed. When the model broke instructions, the confession eventually acknowledged the failure in almost every case. OpenAI sees this as evidence that, even in a flawed training environment, the most reliable way for a model to maximise its confession reward is simply to tell the truth about what it did.

How The Research Community Is Reacting

The technique has received interest from researchers who focus on AI safety and oversight. It has also prompted some scepticism. For example, some researchers have questioned whether any form of self reporting can be relied on in systems that already show signs of deceptive behaviour in controlled tests. They argue that a model trained to optimise for favourable outcomes may still learn ways to present itself as compliant, which raises doubts about how far confession mechanisms can be trusted on their own.

Doesn’t Prevent Bad Behaviour – It Just Surfaces It

OpenAI acknowledges these concerns. In fact, the company stresses that confessions “do not prevent bad behaviour; they surface it”. They are meant to act as a diagnostic tool, similar in purpose to chain of thought monitoring, which attempts to reveal hidden reasoning. Both techniques aim to make internal behaviour more legible, but neither can directly stop harmful outputs on their own.

Proof of Concept

OpenAI’s work on this could be described as a proof of concept. Training was conducted at a relatively small scale compared with full commercial systems, and confession accuracy remains imperfect. False positives occur, where the model confesses despite following instructions, and false negatives occur, usually because the model misunderstood the instructions or did not realise it had made a mistake.

Possible Implications For Organisations Using AI

While this research is not yet part of any customer facing product, it hints at a possible direction for oversight mechanisms in future AI deployments. In theory, confession style reporting could provide an additional signal for risk teams, for example by highlighting answers where the model believes it might have violated an instruction or where it encountered uncertainty.

Industries with strong regulatory oversight may find structured self analysis useful as one component of an audit trail, provided it is combined with independent evaluation. Confessions could also help technical teams identify where models tend to cut corners during development, allowing them to refine safeguards or add human review for sensitive tasks.

Fits Within A Broader Safety Strategy

OpenAI places confessions within a broader safety strategy that includes deliberative alignment, instruction hierarchies, and improved monitoring tools. The company argues that as AI systems become more capable and more autonomous, there will be greater need for techniques that reveal hidden reasoning or expose early signs of misalignment. Confessions, even in their early form, are presented as one way to improve visibility of behaviour that would otherwise remain obscured.

What Does This Mean For Your Business?

The findings appear to suggest that confession based reporting could become a useful transparency tool rather than a guarantee of safe behaviour. The method exposes what a model believes it did, which offers a way for developers and auditors to understand errors that would otherwise remain hidden. This makes it easier to trace how an output was produced and to identify the points where training signals pulled the model in an unintended direction.

There are also some practical implications for organisations that rely on AI systems, particularly those in regulated sectors. UK businesses that must demonstrate accountability for automated decisions may benefit from structured explanations that help build an audit trail. Confessions could support internal governance processes by flagging moments where a model was uncertain or believed it had not met an instruction, which may help risk and compliance teams decide when human intervention is needed. This will matter as firms increase their use of AI in areas such as customer service, data analysis and operational support.

Developers and safety researchers are also likely to see value in the technique. For example, confessions provide an additional signal when testing models for unwanted behaviour and may help teams identify where shortcuts are likely to appear during training. This also offers a clearer picture of how reward hacking emerges and how different training setups influence the model’s internal incentives.

OpenAI’s framing makes it clear that confessions are not a standalone solution, and actually sit within a larger body of work aimed at improving transparency and oversight as models become more capable. The early results show that the method can surface behaviour that might otherwise go undetected, although it remains reliant on careful interpretation and still produces mistakes. The wider relevance is that it gives researchers, businesses and policymakers another mechanism for assessing whether a system is behaving as intended, which becomes increasingly important as AI tools are deployed in higher stakes environments.

Bank of England Warns AI Valuations Could Trigger a Sharp Market Correction

The Bank of England has warned that the rapid rise in artificial intelligence focused technology stocks has created clear financial stability risks and could lead to a sharp correction in global markets.

AI Valuations Reach Their Most Stretched Levels In Years

The Bank’s latest Financial Stability Report says equity valuations linked to AI are now “particularly stretched”, with US technology firms approaching levels last seen before the dotcom bubble burst and UK valuations close to their most elevated point since the 2008 financial crisis. The Financial Policy Committee points out that a relatively small number of AI oriented firms have driven much of this year’s market gains, which means any reversal could have outsize effects.

Shares in companies such as Nvidia illustrate the scale of the enthusiasm. Nvidia, for example, has become one of the world’s most valuable firms (a $5 trillion valuation!) as demand for its AI chips has surged, lifting its share price by more than 30 per cent this year alone following a period of even steeper growth through 2023 and 2024. The Bank notes that this rapid rise reflects real earnings strength, although it also concentrates a significant amount of market value in a handful of firms.

Not Quite Like The 90s

Andrew Bailey, the Bank’s governor, has stressed that today’s large AI firms aren’t comparable to the loss making companies of the late 1990s because they produce strong cash flows and clear commercial demand exists for their products. Bailey added, however, that this does not guarantee stability, especially as competition intensifies. His view is that AI could well become a general purpose technology capable of raising productivity, although valuations can still run far ahead of fundamentals.

A Five Trillion Dollar Infrastructure Spend

One of the most significant risks highlighted in the report is the scale and structure of investment required to support AI development. Industry estimates shared in the document suggest AI infrastructure spending over the next five years could exceed an eye-watering $5 trillion!

The Bank says that while the largest technology firms will fund much of this through their operating cash flows, around half of the total is expected to come from external financing. Debt markets, rather than equity markets, are likely to play the largest role. This includes corporate bond issuance, loans from global banks and lending from the rapidly expanding private credit sector, which exists largely outside traditional regulatory frameworks.

Growing Reliance on Borrowing

The growing reliance on borrowing matters because it creates deeper links between AI firms and the wider financial system. As the Financial Policy Committee warns, this means that if a sharp drop in valuations were to occur, losses on lending could quickly spread beyond the AI sector and place pressure on banks, credit funds and institutional investors. It also notes the increasingly interconnected nature of AI supply chains, which involve multibillion dollar partnerships across cloud providers, chip manufacturers and data centre operators.

Similar International Warnings

It should be noted here that it’s not just the Bank of England that is concerned. International organisations including the IMF and OECD have issued similar assessments this year. Both have pointed to high asset prices driven by optimism about AI related earnings and have warned of the risk of abrupt downward adjustments if expectations weaken. Senior industry figures such as JP Morgan chief executive Jamie Dimon have also expressed concern about market complacency and the possibility of a significant correction.

Why This Is Not A Simple Repeat Of The Dotcom Era

In its report, the Bank goes to some lengths to distinguish current conditions from the late 1990s bubble. Crucially, many AI firms today have established revenue streams and profitable operations, and their valuations are based on substantial real world demand for cloud computing, data processing and AI model development.

Scale and Leverage Are The Real Risks Today

The risk instead comes from concentration, scale and leverage. For example, market value is increasingly concentrated in a small group of companies whose performance influences global stock indices, pension funds and retail investment products. At the same time, large amounts of borrowing are now tied to long term AI infrastructure projects that depend on continued investor confidence. These dynamics are different from the dotcom era yet present their own vulnerabilities.

Exposure For UK Savers And Pension Funds

The Bank has also made it clear that the UK is not insulated from an AI related correction. Many UK pension funds hold global equity portfolios where AI leaders now account for a significant share of total value. A fall in these stocks would flow through to savers’ pension pots and stocks and shares ISAs.

This has become more relevant following policy moves encouraging savers to invest more heavily in equities. The Bank’s report notes that a broad market decline could reduce household wealth, lower consumption and place additional pressure on the economy at a time when higher mortgage costs are still filtering through. Approximately 3.9 million UK mortgage holders are expected to refinance at higher rates by 2028, although a third may see payments fall as rates stabilise.

UK Banks Pass Stress Tests As Other Risks Grow

The Bank’s stress tests indicate that, thankfully, major UK lenders are resilient enough to withstand a severe downturn that includes higher unemployment, falling house prices and significant market turbulence. This resilience has led the Financial Policy Committee to propose lowering Tier 1 capital requirements from 14 per cent to 13 per cent from 2027, while still leaving banks with an estimated £60 billion buffer above minimum levels.

However, it seems that other parts of the financial system pose greater concerns. For example, the report highlights growing leverage in the UK gilt market, where international hedge funds have been borrowing heavily against their government bond holdings. The Bank warns that forced deleveraging in a downturn could amplify movements in gilt yields and push up government borrowing costs.

It also points to wider global pressures, including geopolitical tensions, cyber threats and rising sovereign debt burdens, which have created a more fragile international financial environment. These risks add further uncertainty to an already stretched market landscape shaped by the rapid growth of AI.

The Message

The key message from the Bank is not that AI as a technology should be viewed with scepticism. For example, the report recognises that AI could deliver meaningful productivity gains and long term economic benefits. Its warning instead focuses on how the financial side of the AI boom has evolved and the vulnerabilities that could emerge if valuations adjust sharply.

UK businesses that rely on bank lending or capital markets may face more volatile financing conditions if a correction ripples across global markets. Credit channels linked to technology investment could tighten and firms with higher borrowing needs may encounter more expensive or more selective lending.

The Bank is, therefore, encouraging investors, lenders and corporate leaders to prepare for a period where AI continues to expand as a technology while financial markets remain sensitive to any signs that expectations have become overextended.

What Does This Mean For Your Business?

The central point in the Bank’s warning is really the need to separate enthusiasm for AI as a technology from the financial risks created by how the sector is currently being funded and valued. The report makes it clear that AI can still deliver major economic benefits while markets face periods of sharp adjustment, and those two realities can sit side by side. This places investors, policymakers and companies in a position where they must be ready for genuine technological progress and heightened financial volatility at the same time.

For UK businesses, the implications are already taking shape. For example, firms that depend on access to credit may find that lending conditions react quickly to any downturn in global tech markets, especially as a sizeable share of AI expansion is being financed through debt that links the sector more tightly to banks and private credit funds. Companies planning large technology upgrades or long term capital programmes may also need to consider how external shocks could affect borrowing costs or investment appetite. The same applies to institutional investors, pension schemes and retail savers whose portfolios are increasingly influenced by the performance of a small group of global AI firms.

This backdrop also gives the UK’s financial regulators little room for complacency. The resilience shown in bank stress tests is reassuring, although the vulnerabilities identified in areas such as leveraged gilt trading and private credit activity underline how market tensions could surface outside the traditional banking system. The combination of elevated geopolitical risk, cyber threats and fragile sovereign debt conditions reinforces the picture of a more complex and interconnected risk environment.

The Bank’s assessment, therefore, seems to lean heavily towards caution without dismissing the long term potential of AI. It is basically signalling that stakeholders should not assume current valuations will hold indefinitely and that preparation for a rapid repricing is now a matter of prudence rather than pessimism. UK businesses, financial institutions and savers all have a direct interest in how well those preparations are made, particularly as the effects of any correction would extend far beyond the technology sector itself.

Amazon Tests 30 Minute Deliveries

Amazon is piloting a new ultra fast delivery service that brings household essentials and fresh groceries to customers in parts of Seattle and Philadelphia in about 30 minutes or less.

‘Amazon Now’ And What It Offers

‘Amazon Now’ is a new delivery option built directly into the main Amazon app and website. Customers in eligible neighbourhoods will see a “30 Minute Delivery” tab in the navigation bar, which opens a catalogue of items available for immediate dispatch. The pilot scheme covers thousands of products that customers often need urgently, such as milk, eggs, fresh produce, toothpaste, cosmetics, pet treats, nappies, paper products, over the counter medicines, electronics and seasonal goods. Everyday snacks like crisps and dips are included too, reflecting the impulse led nature of the service.

Ultra-Fast Delivery

Amazon describes it as “an ultra fast delivery offering of the items customers want and need most urgently”, and says its aim is to get essentials to the doorstep in about 30 minutes or less. Customers can place an order, track the driver in real time and add a tip within the app, mirroring the experience already familiar from food delivery platforms.

Where The Pilot Is Running

The rollout is currently limited to parts of Seattle, where Amazon is headquartered, and parts of Philadelphia in the US. Amazon has not confirmed how many neighbourhoods are covered or how long the test will run, and there is no stated timetable for expansion to other US cities. The company is referring to this phase as a trial, making it clear that the results will shape future decisions.

Even Faster in the United Arab Emirates in October

This US pilot follows an ultra fast launch in the United Arab Emirates in October, where Amazon introduced a 15 minute delivery service using micro facilities in local communities. Some customers in the UAE reportedly received their orders in as little as six minutes, showing the company’s willingness to push the limits of rapid fulfilment.

How The 30 Minute Model Works

As you may expect, it seems that hitting a 30 minute delivery window (delivering groceries as fast as a pizza) requires a tightly controlled operation. For example, Amazon says it is using “specialised smaller facilities designed for efficient order fulfilment”, located very close to where customers in both cities live and work. These sites stock a limited but high demand range of items and are built for fast picking, packing and dispatch.

Also, delivery is handled by partners and gig workers who use the Amazon Flex system. Reports from early usage suggest that drivers must leave within a few minutes of receiving an order notification to stay within the promised window. The entire model relies on short travel distances, real time routing, and a fulfilment process that is optimised for speed rather than breadth of inventory.
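
As a purely illustrative check of that time budget, the small sketch below adds up hypothetical stage timings against the 30 minute target. Only the overall target and the reported detail that drivers leave within a few minutes come from the coverage above; the individual timings are assumptions, not Amazon figures.

```python
# Hypothetical stage timings, in minutes; only the 30 minute target and the
# "driver leaves within a few minutes" detail come from the reporting above.
stages = {
    "order processing and routing": 1,
    "picking and packing": 5,
    "driver accepts and departs": 3,
    "drive time from micro facility": 12,
    "handover at the door": 2,
}

total = sum(stages.values())
print(f"Total: {total} minutes, buffer against 30 minute target: {30 - total} minutes")
```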

No Need For Additional Downloads

Since Amazon Now is part of the main shopping app, customers do not need to download anything new or switch services. Once they enter their postcode, the app confirms eligibility and displays the 30 minute catalogue. The experience is intentionally streamlined to minimise delay between ordering and dispatch.

How Much Does It Cost?

Amazon Now is not included in Prime’s standard free delivery benefits. Instead, Prime members in the pilot areas can access 30 minute delivery from $3.99 per order. Non Prime customers pay $13.99.

A small basket fee of $1.99 applies to orders under $15, which aims to discourage very low value purchases that may be expensive to deliver at ultra fast speeds. This aligns with pricing strategies already used by food and grocery delivery platforms.
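
Based purely on the published pilot pricing above, the fee logic works out as in the small sketch below (illustrative only; the actual checkout rules are Amazon’s own).

```python
def amazon_now_fee(basket_total_usd: float, prime_member: bool) -> float:
    delivery = 3.99 if prime_member else 13.99              # per-order delivery charge
    small_basket = 1.99 if basket_total_usd < 15 else 0.0   # fee on orders under $15
    return round(delivery + small_basket, 2)

print(amazon_now_fee(12.50, prime_member=True))    # 5.98: Prime delivery plus small basket fee
print(amazon_now_fee(28.00, prime_member=False))   # 13.99: non-Prime delivery, no small basket fee
```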

It’s An Optional Premium Service

Prime members continue to receive same day, overnight and next day delivery at no additional cost once order thresholds are met, so Amazon Now is essentially positioned as an optional premium service rather than a replacement for existing benefits.

Why Is Amazon Doing This Now?

Amazon Now is designed to fit into the company’s wider logistics expansion programme. In mid 2025, Amazon announced that it planned to invest more than $4 billion to triple the scale of its delivery network by 2026. This included growing its network of same day facilities and reorganising the entire US fulfilment system around regional hubs. The changes have already reduced average delivery times and increased the proportion of orders arriving the same or next day.

Ultra fast delivery, therefore, marks the next key stage of this strategy. Amazon’s key competitors such as DoorDash, Uber Eats and Instacart already fulfil convenience and grocery orders within an hour, often by picking from local supermarkets. Amazon’s model differs because the inventory is held in its own small facilities, giving the company much tighter control over stock levels, availability and timing.

The new pilot also builds on Amazon’s earlier experiments. For example, the company launched Prime Now in 2014, offering two hour deliveries, then closed the standalone app in 2021 when it folded the service into the main shopping app. Amazon Now is, in effect, a new iteration of that idea, but designed for a world where rapid delivery is becoming mainstream.

Impact On Competitors And The Market

The initial announcement had an immediate market impact. For example, shares in Instacart fell by more than 2 per cent and DoorDash also dipped after the news broke, reflecting investor concern that Amazon may apply the same scale and pricing power to rapid grocery delivery that it previously applied to next day fulfilment. Analysts noted that Amazon’s growing interest in this category could put pressure on existing quick commerce players whose business models often rely on high fees and narrow margins.

Walmart is also part of the competitive picture. The retailer already offers rapid grocery delivery to most US households and benefits from its extensive store network. Industry studies suggest that a large proportion of customers are prepared to pay for fast grocery deliveries, highlighting the strength of demand in this category. Amazon’s pilot will therefore be watched closely by rivals in grocery, convenience and last mile logistics.

Customers And Businesses

For customers in Seattle and Philadelphia, the immediate benefit is convenience. For example, items that once required a trip to a local shop can now be delivered in half an hour, which is faster than typical takeaway delivery times in many parts of the United States. Ultra fast delivery may appeal especially to busy households, parents, pet owners and customers dealing with last minute needs such as forgotten ingredients or essentials.

For businesses, the implications extend beyond retail. FMCG manufacturers and brand owners may now see opportunities to position products within the ultra fast catalogue or to experiment with smaller pack sizes designed specifically for rapid missions. Also, marketing strategies could evolve as Amazon gains new data on urgent purchases and browsing patterns inside the 30 minute section of the app.

Local supermarkets and smaller delivery start ups may face stronger competition if Amazon expands the model. Since Amazon controls both the inventory and the logistics, it may be able to keep prices lower than rivals that rely on third party shops and couriers.

Challenges And Criticisms

It should be noted here that ultra fast delivery is expensive to run, and analysts have warned that these models can suffer from high operating costs. Faster delivery windows require more staff, more micro facilities, more inventory and more vehicles on the road, which can make profitability difficult, especially when customers expect low delivery fees.

There are labour concerns too. Gig workers may face higher pressure when delivery windows are tight, and campaigners are likely to watch how Amazon balances speed with driver wellbeing and safety. Amazon emphasises that its specialised facilities improve safety for staff picking and packing orders, but questions remain around the wider impact on drivers and delivery partners.

Sustainability is another factor to consider. For example, Amazon argues that micro facilities positioned close to customers reduce the distance and emissions associated with deliveries. However, critics point out that ultra fast services may increase the total number of delivery trips and create more packaging waste, particularly for small orders.

There is also a wider cultural debate about the need for extreme immediacy in everyday shopping. Some commentators have questioned whether deliveries arriving within minutes encourage unnecessary consumption or reinforce habits built around convenience over planning.

What Does This Mean For Your Business?

The Amazon Now pilot highlights how far the rapid delivery market has evolved and why Amazon is investing heavily in this area. The company is using its scale and financial strength, which matter because ultra fast fulfilment is expensive to run, to test whether it can become a core part of mainstream retail rather than a niche convenience service. The approach brings clear advantages for customers who value immediacy and for Amazon, which gains more control over high demand categories and more insight into urgent purchase behaviour. It also places new pressure on competitors that rely on partnerships with local supermarkets rather than owning their fulfilment process from end to end.

There are still unanswered questions about sustainability, labour practices and long term profitability. Ultra fast delivery needs dense networks of sites, reliable staffing and strong demand at a price customers are willing to pay. These pressures are not limited to the United States and will be watched closely by UK retailers, logistics firms and brands that already operate in a market where fast delivery has become an expectation. UK businesses may find themselves adapting product ranges, marketing tactics or supply chain plans if similar models expand internationally, especially in urban areas where rapid fulfilment could reshape local competition and customer expectations.

The wider impact on city infrastructure, emissions and working conditions will also remain part of the discussion. Everyone from delivery partners to sustainability groups is likely to want assurances that speed does not undermine safety or environmental commitments. The success of the model, therefore, will ultimately depend on whether Amazon can balance convenience with operational, ethical and financial realities while proving that ultra fast fulfilment can scale without intensifying existing challenges.

Company Check : Another Cloudflare Outage Raises Fresh Concerns

Cloudflare has suffered its second major service outage in less than a month, briefly taking a substantial portion of the internet offline and prompting renewed questions about the resilience of the infrastructure many organisations now rely on.

Friday 5 December Outage

This latest incident occurred on Friday 5 December, when websites around the world began returning blank pages, stalled login screens and 500 error messages from around 08:47 GMT. Cloudflare confirmed that the problem affected part of its global network and that a significant number of high profile customers were impacted. Although services were largely restored by 09:12, the disruption was extensive enough to affect millions of users and thousands of online businesses during a busy weekday morning.

What Happened And Why Did It Spread So Quickly?

Cloudflare acknowledged shortly after the incident that the outage was caused by an internal change to how its Web Application Firewall processes incoming requests. The change had been deployed as part of an emergency response to a newly disclosed security vulnerability in React Server Components. The flaw, widely discussed across the software industry, could allow remote code execution in some applications built using React and Next.js. Cloudflare introduced new rules to help shield its customers from potential exploitation while they applied their own patches.

A Bug Was Triggered

During that process, a long standing bug in how the Web Application Firewall parses request bodies was triggered under the specific conditions created by the mitigation. This resulted in errors being generated within parts of Cloudflare’s network responsible for inspecting and forwarding traffic. In practice, it meant that requests processed through those systems began failing, which is why so many sites appeared blank or unresponsive.

Not A Cyber Attack

Cloudflare’s Chief Technology Officer commented publicly that this was not the result of an attack and was instead linked to logging changes implemented to help address the React vulnerability. The company has since published a technical summary of the issue, stating that it was working on a full review to prevent similar failures from recurring.

The speed of the disruption reflected Cloudflare’s central role in global web infrastructure. For example, the company provides security, performance optimisation and traffic routing services for a large proportion of internet services. This means that when a fault is introduced in a critical part of its platform, the effects can cascade quickly across many unrelated industries and geographies.

Which Services Were Impacted?

Reports from affected organisations and users indicated that large platforms such as LinkedIn, Zoom, Canva and Discord were among the most prominent names disrupted. E commerce providers including Shopify, Deliveroo and Vinted also experienced problems. Media outlets and entertainment platforms saw outages, as did financial services and stock trading apps in some regions. Ironically, even DownDetector, the independent website that tracks service outages, was temporarily unavailable because it also runs on Cloudflare’s network.

For many businesses the disruption manifested as failed page loads, broken checkout journeys or services timing out without explanation. It should be noted that, although the outage was brief, these symptoms can have very real impacts. For example, retailers risk abandoned purchases, subscription platforms face customer frustration and organisations offering time critical services can see immediate operational strain.

How This Compares With The November Outage

The December outage arrived only weeks after Cloudflare’s previous incident on 18 November, which was far longer and affected a wider range of services. That disruption began around midday UTC and took several hours to fully resolve.

Cloudflare later explained that the November issue stemmed from an automatically generated configuration file used by its Bot Management system. A change to database permissions caused the file to grow far beyond its intended size. When the oversized file was synchronised across the network, it caused a core traffic routing module to fail repeatedly. Major services including X, ChatGPT, Spotify and large gaming platforms all experienced significant downtime.

Both The Result of Internal Changes

It seems, therefore, that the two outages were technically unrelated. The November incident was caused by a configuration file that overwhelmed a key proxying process, while the December disruption was caused by a logic error triggered within the Web Application Firewall. However, what links them is that both were the result of internal changes aimed at improving security and performance, and both exposed fragilities within a highly automated global system.

Reactions From Cloudflare And The Wider Industry

Cloudflare has stated publicly that any outage of this scale is unacceptable and has acknowledged the frustration caused to customers. After the November incident, its chief executive promised a series of improvements to configuration handling, kill switches and automated safety checks. The fact that a second issue occurred so soon afterwards has prompted visible concern from customers and industry observers about the platform’s change control processes.

The Danger Of Relying On A Small Number Of Infrastructure Providers

Security experts have emphasised the broader lesson here, i.e., that many organisations now rely heavily on a small number of global infrastructure providers. Cloudflare’s size and technical capabilities offer benefits in terms of speed and protection from attacks, yet this scale also creates single points of failure. If a major provider experiences a fault, thousands of websites and applications can be disrupted almost instantly.

Industry groups have urged organisations to reassess their resilience strategies. Some policy specialists argue that businesses should identify where they rely on a single vendor for critical operations and explore ways to diversify. This might involve adopting multiple cloud providers, splitting content delivery across different networks or architecting applications so they degrade gracefully rather than fail outright when a dependency becomes unavailable.
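
As a very simple illustration of the “degrade gracefully” idea, the sketch below tries a primary endpoint, falls back to an independently hosted backup, and finally serves a cached copy rather than failing outright. The URLs are placeholders, not real services, and the pattern is a minimal sketch rather than a production design.

```python
import urllib.error
import urllib.request

ENDPOINTS = [
    "https://primary.example.com/catalogue",   # fronted by the main CDN/security provider
    "https://backup.example.net/catalogue",    # independently hosted fallback
]

CACHED_RESPONSE = b'{"items": [], "stale": true}'  # last known good copy

def fetch_with_fallback(timeout_seconds: float = 2.0) -> bytes:
    for url in ENDPOINTS:
        try:
            with urllib.request.urlopen(url, timeout=timeout_seconds) as response:
                return response.read()
        except (urllib.error.URLError, TimeoutError):
            continue  # this provider is unavailable, try the next one
    return CACHED_RESPONSE  # degrade gracefully instead of failing outright

print(fetch_with_fallback()[:40])
```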

Customers And Competitors

For Cloudflare’s customers, the December outage reinforces the need to balance performance gains with risk planning. Many organisations use Cloudflare for security filtering, caching, bot protection and traffic routing, meaning a failure in any of those layers can have immediate consequences for availability.

Also, competitors in the content delivery and cloud security sector may see renewed interest in multi provider approaches. This does not necessarily mean businesses will move away from Cloudflare, given its extensive footprint and capability, but it is likely to encourage more organisations to build redundancy around critical services.

Regulators are also likely to take note of what has happened at Cloudflare. For example, European and UK frameworks focusing on operational resilience, such as NIS2 and DORA, place increasing emphasis on understanding and mitigating third party risk. Repeated outages at a major provider may strengthen the argument for closer oversight of critical internet infrastructure and more transparent reporting requirements.

What Happens Next?

Cloudflare has said it will publish a full post incident analysis and will continue making changes to improve reliability across its platform. The company has already committed to reviewing how new security mitigations are validated before deployment, in addition to strengthening internal safeguards that determine how changes propagate across the network.

For customers and other stakeholders, the incident is another reminder that internet resilience depends not only on defending against attackers but also on managing the risks introduced by routine operational changes. The growing complexity of web infrastructure has made this increasingly challenging, and the recent outages have placed long term operational resilience firmly back on the agenda.

What Does This Mean For Your Business?

The pace of software change, the pressure to react quickly to new vulnerabilities and the scale at which providers now operate mean that even well intentioned updates can create unexpected instability. This latest incident from Cloudflare shows how a single adjustment deep inside a security layer can move rapidly through global systems and affect businesses with no direct connection to the underlying flaw. It also reinforces why resilience planning needs to be treated as a strategic priority rather than an operational afterthought.

UK businesses, in particular, face a growing need to understand how their digital supply chains actually function. Many organisations depend on Cloudflare without realising how many of their core services sit behind it. The outage demonstrated that customer experience, revenue and even internal operations can be affected within minutes if one vendor encounters a problem. These short disruptions may not make headlines for long, yet they expose gaps in continuity planning that boards and technology teams are being pushed to close, especially as regulators sharpen their expectations around third party risk.

Although Cloudflare’s competitors may now be keen to highlight the benefits of multi provider architectures and the reduced exposure this can offer, the practical reality is that Cloudflare’s scale, speed and security tooling remain difficult to replicate. Most organisations may not currently be planning to abandon the platform, but they may be looking for ways to introduce redundancy around it, whether by spreading workloads, adding backup routing options or designing services that fail more gracefully when a dependency falters. In other words, the market is now moving towards diversification rather than replacement.

Other stakeholders have lessons to learn from all this as well. For example, regulators will continue scrutinising outages that affect large sections of the internet, particularly where they touch financial services, transport or healthcare. Also, investors will look at whether Cloudflare can demonstrate consistent improvements after two incidents so close together. Developers and security teams across the industry may now reflect on the risks involved in rolling out urgent protections at speed, especially when the underlying software landscape is evolving as quickly as it is today.

Cloudflare remains a central pillar of global internet infrastructure, and that reality brings both advantages and pressures. Although pretty inconvenient and costly to many businesses and their users, the recent outages do not change the importance of Cloudflare, but they do highlight how essential it has become to strengthen resilience around the entire ecosystem. This means that organisations that choose to invest in understanding their dependencies and designing for failure may be better positioned to handle future shocks, whatever their source, and will place themselves on far stronger footing as digital systems continue to grow in complexity.

Security Stop-Press: Scam Ads Reported On YouTube As Fraudsters Exploit Ad Slots

Users in several countries say they are seeing a rise in misleading adverts on YouTube, including fake government schemes, miracle health claims, inappropriate content and AI-generated promotions that lead to suspicious websites.

Many of the ads redirect to imitation news pages or fake portals designed to collect personal information or small payments. Viewers say the scams often look polished, making them harder to spot at a glance.

Security researchers warn that criminals are using malvertising techniques to slip fraudulent ads into YouTube’s automated auction system. Cheap AI tools make it easy to generate endless scam variations that bypass basic checks, even as billions of harmful ads are removed each year.

Businesses can reduce exposure by training staff to recognise suspicious promotions, avoiding links in untrusted ads and using browser protections that block known malicious domains. Clear reporting routes and strong account security help limit the chances of employees being caught out.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in a techy-free style.
