Sustainability-In-Tech : Data Centre Power Demand May Triple By 2035

Global data centre electricity demand is now forecast to almost triple by 2035, forcing urgent questions about how to power the AI boom sustainably.

The Forecasts Point To A Steep Rise

New analysis from BloombergNEF suggests data centres could be drawing around 106 gigawatts of power by 2035, up from about 40 gigawatts today. This represents a near threefold increase and marks a sharp upward revision on projections made only months ago. The rise reflects not only the number of new facilities but also the dramatic scale of those now being planned.

Of around 150 new US data centre projects added to one leading industry tracker in the last year, nearly a quarter are expected to exceed 500 megawatts of capacity, and a small number will go past the one gigawatt mark. A 200 megawatt site is now considered a normal hyperscale facility, which highlights the size of the new generation of AI focused builds.

AI Also Driving Up Data Centre Utilisation

Average data centre utilisation is also expected to rise from about 59 per cent today to 69 per cent by 2035. This reflects the steep growth in AI training and inference workloads, which are projected to account for nearly 40 per cent of all data centre compute within the same timeframe.

Gartner’s global forecasts point in the same direction. Analysts expect electricity consumption across all data centres worldwide to increase from 448 terawatt hours in 2025 to 980 terawatt hours in 2030. That means demand is projected to grow 16 per cent in 2025 alone and more than double over the five year period.
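As a sanity check, the growth rate implied by those figures can be worked out in a few lines. This is a back-of-the-envelope sketch using only the numbers quoted above:

```python
# Sanity check of the Gartner forecast figures quoted in the article:
# 448 TWh in 2025 rising to 980 TWh in 2030.
start_twh, end_twh = 448, 980
years = 2030 - 2025

growth_factor = end_twh / start_twh       # overall growth over five years
cagr = growth_factor ** (1 / years) - 1   # implied compound annual growth rate

print(f"Growth factor 2025-2030: {growth_factor:.2f}x")   # 2.19x
print(f"Implied annual growth rate: {cagr:.1%}")          # 16.9%
```

The implied annual rate of roughly 17 per cent is consistent with the 16 per cent rise projected for 2025, and the overall factor of about 2.2 confirms that demand slightly more than doubles over the period.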

AI Infrastructure Is Driving Bigger And Busier Facilities

One major reason behind these increases appears to be the rapid expansion of AI infrastructure. For example, Gartner notes that while traditional servers and cooling contribute to overall electricity use, the fastest rise comes from AI optimised servers, whose energy consumption is expected to rise from 93 terawatt hours in 2025 to 432 terawatt hours in 2030. These servers will represent almost half of all data centre power use by the end of the decade.

The growth in AI workloads is also reshaping where data centres are built. Traditional clusters near major cities face land and grid constraints, so new facilities are being planned further out in regions where connections are more readily available. In the United States, for example, the PJM Interconnection region, which includes Virginia, Pennsylvania and Ohio, is seeing a large wave of new sites. Texas is experiencing a similar trend, with former crypto-mining facilities being repurposed into AI data centres.

These facilities take many years to deliver: industry analysts estimate the average timeline for a major data centre from early planning to full operation is about seven years. That means decisions being made now will lock in power demand well into the 2030s, with limited short term flexibility to adjust course.

Grid Operators Face A New Reliability Test

Electricity systems are now being tested by a scale and pace of growth that is difficult to absorb. For example, in the PJM region, data centre capacity could reach 31 gigawatts by 2030, which is almost equal to the 28.7 gigawatts of new electricity generation expected over the same period. This imbalance has already led to concerns from PJM’s independent market monitor, which has argued that new data centre loads should only be connected when the grid can support them reliably.

Texas is also reported to be facing its own pressures. For example, forecasts show that reserve margins within the ERCOT grid could fall into riskier territory after 2028 if demand from data centres outpaces the construction of new power plants and transmission capacity.

The US And China

Gartner’s regional analysis indicates that the United States and China will together account for more than two thirds of global data centre electricity consumption by 2030. Europe’s share is expected to rise from 2.7 per cent to around 5 per cent as new facilities are built to support cloud uptake and AI workloads.

More On-Site Power Needed

Given these pressures, analysts have highlighted how many large data centres are likely to secure their own power sources rather than relying entirely on the grid. Gartner’s research on data centre power provisioning warns that utilities are struggling to expand generation and transmission infrastructure quickly enough to support the rate of construction now under way.

In fact, by 2028, Gartner says only about 40 per cent of newly built data centres will rely solely on grid electricity. The remainder will most likely draw on some form of on site generation or long term, dedicated supply arrangements.

Clean Technologies?

Looking ahead to the mid-2030s, around 40 per cent of new data centres are expected to be powered by clean technologies that are not yet commercially mature. These include small modular nuclear reactors, green hydrogen systems and advanced geothermal technologies.

A Commercial Impact Too

Gartner also highlights a commercial impact. For example, early adopters of clean on site power options will face higher upfront costs, and these costs are likely to be passed on to cloud customers. This implies that the long term economics of cloud computing will be shaped not only by processor performance but also by the availability and price of electricity.

Scotland Exposes The Local Impact Of Global Demand

The UK is now facing its own version of this issue. Research by Foxglove shows how a cluster of eleven large data centres planned in Scotland would demand between 2,000 and 3,000 megawatts of electricity. Scotland’s current winter peak demand is just over 4 gigawatts, which means these projects alone could account for between 50 and 75 per cent of the country’s current peak electricity use.
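The 50 to 75 per cent range follows directly from the figures quoted. A minimal sketch of the arithmetic, using the stated numbers:

```python
# Rough check of the Foxglove figures: proposed Scottish data centre cluster
# demand (2,000-3,000 MW) against Scotland's winter peak (just over 4 GW).
cluster_low_mw, cluster_high_mw = 2_000, 3_000
winter_peak_mw = 4_000  # "just over 4 gigawatts", taken here as 4,000 MW

low_share = cluster_low_mw / winter_peak_mw    # 0.50
high_share = cluster_high_mw / winter_peak_mw  # 0.75

print(f"Cluster share of current peak demand: {low_share:.0%} to {high_share:.0%}")
```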

The list of proposed Scottish facilities includes a 550 megawatt campus at Ravenscraig in North Lanarkshire, several 200 to 300 megawatt sites across locations such as the Scottish Borders, East Ayrshire and West Lothian, and an Edinburgh site at South Gyle with a capacity of around 212 megawatts. The South Gyle plan includes projected annual emissions of more than 220,000 tonnes of CO2 equivalent, according to figures provided by the developer.

Foxglove notes that the combined demand of these projects is comparable to about two or three times the capacity of the Peterhead gas power station or roughly the combined output of the former Torness and Hunterston B nuclear power plants when both were operating. Scotland’s generation capacity is already close to 20 gigawatts and is expected to more than double by 2030 through growth in renewables, but major upgrades are needed to move electricity to where it is used.

The UK’s Wider Emissions And Planning Context

It’s not surprising, therefore, that environmental groups have raised concerns that such a large new demand from global tech companies could absorb renewable capacity that is needed to decarbonise existing industry and households. In England, research from Foxglove and Global Action Plan estimates that ten of the largest planned data centre projects could together account for around 2.75 million tonnes of CO2 equivalent a year based on developers’ own figures. The researchers compared this figure with the carbon savings expected from the electric vehicle transition in 2025.

National Grid’s chief executive has said demand from commercial data centres will increase sixfold over the next decade. The UK government has already designated new AI Growth Zones that must have access to at least 500 megawatts of power and has introduced an AI Energy Council to help plan for future demand. Data centre operators are also being encouraged to locate projects in Scotland and northern England where renewable output is higher, although the grid infrastructure linking these regions to demand centres still requires major investment.

Together, these forecasts show how quickly AI infrastructure is reshaping national and regional energy planning. Governments now face decisions about where large facilities can be built, how much new capacity is required, how on site generation should be regulated and how to ensure that the expansion of data centres aligns with emissions targets rather than undermining them.

What Does This Mean For Your Organisation?

The scale of projected demand now makes it clear that energy planning will become one of the defining constraints on AI growth, not just a technical backdrop. The forecasts point to an industry that will only remain viable if power availability, clean generation and long term cost structures are built into every stage of development. This matters because the growth trajectories do not leave much room for delays. Once the data centres currently in the pipeline begin to switch on, the impact on local and national grids will arrive quickly, which heightens the pressure on governments and operators to prove that the required generation and transmission capacity will be there in time.

For UK policymakers, the situation in Scotland shows how fast these pressures can concentrate. If even a portion of the proposed Scottish sites proceed at the scale outlined, energy planners and regulators will face decisions about how to balance industrial demand, household consumption and renewable deployment. That puts transparency, accurate modelling and realistic emissions assessments at the centre of the conversation. It also places a responsibility on developers to demonstrate how their projects will integrate into wider decarbonisation plans rather than simply relying on headline renewable capacity figures.

There are also direct implications for UK businesses. For example, cloud costs are likely to be shaped increasingly by electricity pricing and by the power procurement strategies of the operators behind the services they use. If data centre owners face higher costs for on site generation or grid upgrades, there is a strong chance that these costs will feed through to SaaS platforms, hosting services and AI tools. Businesses that rely heavily on cloud based analytics or emerging AI workloads may, therefore, face more volatile operating expenses unless the industry secures stable long term energy arrangements. Energy reliability also becomes a resilience issue, as organisations will want confidence that the infrastructure behind their digital tools is not exposed to local grid constraints.

For environmental groups and local communities, the findings highlight the need for early scrutiny of project impacts and firm commitments on emissions reduction pathways. The period between now and the mid-2030s is likely to involve a mix of transitional fuels, large new loads and evolving clean technologies, so there is a real question about how to minimise emissions during that window. The faster that credible alternatives such as battery storage, green hydrogen and advanced clean generation mature, the more manageable that interim period becomes.

What emerges across all of this is a picture of an industry that can expand sustainably only if energy availability and environmental impact are treated as core design requirements rather than afterthoughts. The forecasts make the stakes clear. Data centre growth is not slowing, AI demand is rising and the power systems that support them need rapid structural change if reliability, affordability and sustainability are to keep pace.

Tech Tip – Remember To Add Important Folders To Favorites In Outlook

It sounds like a simple idea, but taking a minute to do it could save you many more minutes each day by keeping the folders you use most right at the top of the navigation pane.

How to do it:

– In Microsoft Outlook, in the folder pane, right‑click the folder you want quick access to.
– Choose ‘Show in Favorites’.
– To remove it later, right‑click the same folder in the Favorites section and pick ‘Remove from Favorites’.

Why it helps – One click takes you straight to the folder you need, saving seconds that add up over the day. It’s a tiny change that can make a big difference in your workflow. Give it a try!

UK Plans Major Expansion Of Facial Recognition

The government has set out plans to expand the use of facial recognition and other biometrics across UK policing, describing it as the biggest breakthrough for catching criminals since DNA matching.

A National Strategy For Biometrics

The Home Office has launched a ten week consultation to establish a new legal framework covering all police use of facial recognition and biometric technologies. This would replace the current mix of case law and guidance with a single, structured system that applies consistently across forces.

The plan includes creating a dedicated regulator overseeing facial recognition, fingerprints and emerging biometric tools. The Home Office says a single body would provide clarity and help forces apply safeguards more confidently. It also proposes a national facial matching service, allowing officers to run searches against millions of custody images through one central system.

Breakthrough

Launching the consultation, Crime and Policing Minister Sarah Jones said, “Facial recognition is the biggest breakthrough for catching criminals since DNA matching,” adding, “We will expand its use so that forces can put more criminals behind bars and tackle crime in their communities.” Her view reflects the government’s belief that existing deployments have already demonstrated clear operational value, particularly in identifying violent offenders.

Why Now?

The push for expansion comes as police forces face increasing pressure to track offenders across regions and to manage high volumes of video supplied by retailers, businesses and members of the public. Also, recent cases of prisoners being released in error, or disappearing before arrest, have highlighted the difficulty of locating suspects quickly without technological support.

Public Tolerance For Certain Uses

Government research published alongside the consultation appears to suggest high public tolerance for certain uses. For example, according to the government’s figures, 97 per cent of respondents said retrospective facial recognition is at least sometimes acceptable, while 88 per cent said the same about live facial recognition for locating suspects. Ministers may see this as support for building a clearer framework, although rights groups argue that acceptability is dependent on strict safeguards and transparency.

The Need For Oversight

That said, independent accuracy testing has reinforced the need for stronger oversight. For example, the National Physical Laboratory found that earlier systems used in UK policing produced significantly higher false alert rates for Black and Asian people. The Home Office now acknowledges these disparities, noting that updated systems and reviews have since been introduced. Even so, the findings have shaped calls for clearer legal boundaries before expansion proceeds.

When These Changes Might Take Effect

The consultation runs through early 2026, after which ministers will draft legislation for parliamentary scrutiny. The Home Office estimates that introducing a new legal regime, establishing the regulator and deploying the national facial matching service will take around two years. During that period, existing deployments will continue under current guidance.

Police forces already using live facial recognition, including the Metropolitan Police and South Wales Police, will continue targeted deployments. Trials using mobile facial recognition vans across multiple forces are also expected to continue, and the national facial matching service is scheduled for testing in 2026.

How The Technology Works Across UK Forces Today

Police currently rely on three distinct facial recognition tools, each supporting different operational needs:

1. Retrospective facial recognition. Used during investigations, this compares still images from CCTV, doorbell cameras, mobile footage or social media against custody images. It is the most widely used form, and police say it speeds up identification in cases where investigators have a clear image but no confirmed identity.

2. Live facial recognition. These systems scan faces in real time as people pass a camera. The software compares each face to a watchlist of individuals wanted for specific offences or subject to court conditions. When a possible match arises, officers decide whether to stop the person. Deployments are usually short, targeted and focused on high footfall areas.

3. Operator initiated facial recognition. This mobile app allows officers to check identity during encounters by comparing a photo to custody images, avoiding unnecessary trips to a station solely for identification.

Police leaders say these tools allow forces to locate wanted individuals more efficiently. Lindsey Chiswick, the National Police Chiefs’ Council lead for facial recognition, says the technology “makes officers more effective and delivers more arrests than would otherwise be possible”, adding that “public trust is vital, and we want to build on that by listening to people’s views”.

Legal And Ethical Issues

Legal concerns have followed facial recognition since its earliest deployments, and several landmark rulings continue to shape how police use the technology. A 2020 Court of Appeal ruling in the Ed Bridges case remains the most significant legal challenge to date. In that case, the court found that South Wales Police’s early use of live facial recognition breached privacy rights because of inadequate safeguards, incomplete assessments and insufficient checks on whether the system discriminated against particular groups.

Also, the Equality and Human Rights Commission has criticised aspects of earlier Metropolitan Police deployments, saying forces must demonstrate necessity and proportionality each time. The Information Commissioner’s Office has also warned forces to ensure accuracy and justify the retention of custody images belonging to people never convicted of an offence.

Accuracy Problems

Accuracy remains central to the ethical debate. For example, the National Physical Laboratory found that in one system previously used operationally, Asian faces were wrongly flagged around four per cent of the time and Black faces around five and a half per cent, compared with around 0.04 per cent for white faces. For Black women, false alerts rose to nearly ten per cent. These figures show how demographic disparities can emerge in real deployments and highlight the importance of system configuration.

Rights groups warn that these issues could lead to wrongful stops or reinforce existing inequalities. They also argue that routine scanning in public spaces risks creating a sense of constant surveillance that may influence how people move or gather. Liberty has said it is “disappointed” that expansion is being planned before the risks are fully resolved, while Big Brother Watch has urged a pause during the consultation.

Support Strong From Police

It’s worth noting here that, perhaps not surprisingly, support within policing remains strong. For example, former counter terror policing lead Neil Basu says live facial recognition is “a massive step forward for law enforcement, a digital 21st century step change in the tradition of fingerprint and DNA technology”, while noting that it “will still require proper legal safeguards and oversight by the surveillance commissioner”. Police forces repeatedly stress that every alert is reviewed by an officer rather than acted on automatically.

Industry Supports Structured Rollout

Industry organisations also appear to support a structured rollout. For example, Sue Daley, Director of Tech and Innovation at techUK, says “regulation clarity, certainty and consistency on how this technology will be used will be paramount to establish trust and long term public support”. The technology sector argues that clear rules will help build confidence both inside and outside policing.

Charities

Charities focused on vulnerable people have also highlighted some potential benefits. For example, Susannah Drury of Missing People says facial recognition “could help to ensure more missing people are found, protecting people from serious harm”, though she also stresses the need to examine ethical implications before expanding use.

That said, civil liberties groups continue to call for stronger limits, arguing that wider deployment risks normalising biometric scanning in everyday spaces unless strict rules are imposed regarding watchlists, retention and operational necessity.

Areas For Further Debate

The proposals raise questions that will remain live throughout the consultation period. For example, these include how forces will define and maintain watchlists, how the new regulator will enforce safeguards, what thresholds will apply before live facial recognition can be deployed, and how demographic accuracy will be monitored over time. Businesses that operate high footfall environments, such as shopping centres and transport hubs, are also likely to face questions about how their video systems might interact with police requests as adoption increases.

What Does This Mean For Your Business?

It seems that, following this announcement from the government, policymakers now face a moment where practical policing needs, public confidence and legal safeguards must be aligned in a way that has not been achieved before. The consultation sets out an ambition for national consistency and clearer rules, although the evidence presented across this debate shows that accuracy, oversight and transparency will determine whether expansion strengthens trust or undermines it. The range of views from policing, civil liberties groups, industry and charities illustrates how differently this technology is experienced, and why the government will need to resolve issues that sit well beyond technical capability alone.

The implications extend into policing culture, investigative practice and public space management, which will all look different if facial recognition becomes a mainstream tool. Forces anticipate faster identifications, clearer procedures and more reliable ways to locate individuals who pose a genuine risk. Civil society groups, by contrast, point to the potential for overreach unless firm limits are embedded in law. These competing priorities will shape how the regulator operates and how the Home Office interprets proportionality in real deployments.

Businesses also sit at the centre of this discussion because they capture and provide a significant volume of the video footage used in retrospective searches. Retailers, transport hubs and major venues may face new expectations about how they store, secure and share images, and these responsibilities may grow as facial matching becomes more accurate and more widely used. Clearer rules could help organisations understand how to cooperate with investigations without exposing themselves to unnecessary compliance risks, particularly around data protection and equality duties.

The wider public interest lies in how these decisions affect everyday life. Public attitudes will depend on whether safeguards are visible, whether wrongful identifications are prevented, and whether live deployments remain tightly focused rather than becoming a routine feature of public spaces. A national framework could provide that reassurance if it genuinely addresses the concerns raised during testing and legal review. The coming months will show how far the government is prepared to go in defining those boundaries and whether the final model satisfies the mix of operational urgency and ethical caution that has defined this debate so far.

OpenAI Trains AI Models To Confess When They Break The Rules

OpenAI has developed a new research technique that trains advanced AI models to admit when they ignored instructions, took unintended shortcuts, or quietly breached the rules they were given.

A New Approach To Detecting Hidden Misbehaviour

OpenAI’s latest research introduces what it calls a “confession”, which is a second output that sits alongside the model’s main answer. The main answer is trained in the usual way, scoring well when it is helpful, correct, safe, compliant, and aligned with user expectations. The confession is different: it is judged only on honesty, and nothing the model says in this second output can negatively affect the reward for the first.

The idea appears to be fairly simple. For example, when an AI system generates an answer, it is often balancing many objectives at once, which can push it towards unintended behaviour. An answer might look polished from the outside, but internally the model may have relied on a shortcut or rewarded itself for the wrong thing. The new technique tries to surface these hidden steps. As the blog puts it, the confession is meant to provide “a full account of the model’s compliance with the letter and spirit of its policies and instructions”.

The Model Must Explain Things

To produce a confession report, the model is asked to explain three things. It must list the instructions and constraints it believes it should have followed, assess whether its answer actually met each of them, and note any points where the instructions felt ambiguous or unclear. The aim is not to penalise the model for mistakes, but to encourage it to describe what really happened.
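The three-part report described above can be pictured as a simple structured record. The sketch below is an illustrative assumption about shape only; the class and field names are ours, not OpenAI’s actual format:

```python
# Hypothetical sketch of the three-part confession report described in the
# article. Names and structure are illustrative assumptions, not OpenAI's.
from dataclasses import dataclass, field

@dataclass
class InstructionCheck:
    instruction: str         # a constraint the model believes applied
    complied: bool           # whether the model judges its answer met it
    ambiguous: bool = False  # whether the instruction felt unclear
    note: str = ""           # optional account of what really happened

@dataclass
class Confession:
    checks: list[InstructionCheck] = field(default_factory=list)

    def fully_compliant(self) -> bool:
        return all(c.complied for c in self.checks)

# Example loosely based on the word-count task from OpenAI's blog post.
report = Confession(checks=[
    InstructionCheck("Write as a chef taking up a new sport", complied=True),
    InstructionCheck("Alternate sentence lengths between 12 and 18 words",
                     complied=False,
                     note="Added (12)/(18) markers but counts were not exact"),
])
print(report.fully_compliant())  # False
```

The key design point the research describes is that this report is scored only on whether it is honest, independently of how the main answer is rewarded.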

Why OpenAI Believes Confessions Work

According to the research team, many types of unwanted behaviour appear because the combined reward signal used in reinforcement learning mixes too many objectives. For example, earlier studies found that some datasets rewarded confident but incorrect answers more than honest uncertainty. Other research showed that models could become overly agreeable when trained too heavily on preference data, or exploit loopholes in reward systems, known as reward hacking.

OpenAI argues that by treating honesty as a completely separate objective, confessions give the model a reason to admit behaviour that would normally go unnoticed. The blog states that honesty emerges because providing a truthful, evidence supported explanation is easier for the model than maintaining a consistent fabricated account that could pass scrutiny from a judging model.

Testing The Method Under Stress

To examine the approach, OpenAI says it trained a version of GPT-5 Thinking to produce confessions, then evaluated it on a set of adversarial stress tests. The tests were specifically designed to provoke misbehaviour such as hallucinations, instruction violations, reward hacking, or scheming.

Across these evaluations, OpenAI reports that the probability of a false negative, where the model broke instructions and then failed to confess, averaged 4.4 per cent. OpenAI says that, in many scenarios, the confession channel performed even better. When the model engaged in hacking behaviour, for example, it confessed to breaking the rules in nearly 90 per cent of cases. In tests involving hallucinations and instruction following, most confessions correctly identified non compliance and explained why it happened.

Performance Variable

OpenAI reports that performance was variable across task types, with an average confession rate of about 74 per cent when the model had misbehaved. In four of the twelve tests, the confession rate exceeded 90 per cent, while in two others it fell to around 50 per cent. OpenAI says that these differences reflect the complexity of certain tasks and the difficulty of judging compliance when instructions are ambiguous.

Example

One example published on OpenAI’s blog illustrates how the system behaves in practice. A model was asked (somewhat weirdly) to write a letter from the perspective of a chef taking up a new sport, with the added instruction to alternate sentence lengths between twelve and eighteen words. The main answer attempted to follow this rule by adding indicators such as “(12)” and “(18)” at the end of each sentence, although the actual counts were not always correct. The confession then reviewed the output and acknowledged that, while it had tried to meet the requirement, it had not achieved the precise word lengths requested.
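A constraint like this is mechanically checkable, which is part of what makes it a useful stress test. The sketch below shows one way such a check might look; the function name, sentence-splitting rule and example text are our own illustrative assumptions:

```python
import re

def check_alternating_lengths(text: str, lengths=(12, 18)) -> list[bool]:
    """Return one pass/fail per sentence for the alternating word-count rule."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    results = []
    for i, sentence in enumerate(sentences):
        target = lengths[i % len(lengths)]  # 12, 18, 12, 18, ...
        results.append(len(sentence.split()) == target)
    return results

# Two sentences of exactly 12 and 18 words pass the check.
text = ("I have cooked fine food for years but climbing is my passion. "
        "My hands once shaped delicate pastry but now they grip the chalk "
        "covered holds of a towering wall.")
print(check_alternating_lengths(text))  # [True, True]
```

A checker like this plays the role of the ground truth against which the model’s “(12)” and “(18)” annotations, and its confession, can be compared.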

Experiments With Deliberately Weak Reward Signals

OpenAI also examined what happens when both the main reward signal and the confession reward signal are judged by a weaker model that can be easily exploited. In this setup, the main answer gradually learned how to satisfy the weak judge, producing responses that appeared to score highly while moving further away from genuine accuracy.

The confession channel behaved very differently. Despite being assessed by the same weak judge, its honesty increased as training progressed. When the model broke instructions, the confession eventually acknowledged the failure in almost every case. OpenAI sees this as evidence that, even in a flawed training environment, the most reliable way for a model to maximise its confession reward is simply to tell the truth about what it did.

How The Research Community Is Reacting

The technique has received interest from researchers who focus on AI safety and oversight. It has also prompted some scepticism. For example, some researchers have questioned whether any form of self reporting can be relied on in systems that already show signs of deceptive behaviour in controlled tests. They argue that a model trained to optimise for favourable outcomes may still learn ways to present itself as compliant, which raises doubts about how far confession mechanisms can be trusted on their own.

Doesn’t Prevent Bad Behaviour – It Just Surfaces It

OpenAI acknowledges these concerns. In fact, the company stresses that confessions “do not prevent bad behaviour; they surface it”. They are meant to act as a diagnostic tool, similar in purpose to chain of thought monitoring, which attempts to reveal hidden reasoning. Both techniques aim to make internal behaviour more legible, but neither can directly stop harmful outputs on their own.

Proof of Concept

OpenAI’s work on this could be described as a proof of concept. Training was conducted at a relatively small scale compared with full commercial systems, and confession accuracy remains imperfect. False positives occur, where the model confesses despite following instructions, and false negatives occur, usually because the model misunderstood the instructions or did not realise it had made a mistake.

Possible Implications For Organisations Using AI

While this research is not yet part of any customer facing product, it hints at a possible direction for oversight mechanisms in future AI deployments. In theory, confession style reporting could provide an additional signal for risk teams, for example by highlighting answers where the model believes it might have violated an instruction or where it encountered uncertainty.

Industries with strong regulatory oversight may find structured self analysis useful as one component of an audit trail, provided it is combined with independent evaluation. Confessions could also help technical teams identify where models tend to cut corners during development, allowing them to refine safeguards or add human review for sensitive tasks.

Fits Within A Broader Safety Strategy

OpenAI places confessions within a broader safety strategy that includes deliberative alignment, instruction hierarchies, and improved monitoring tools. The company argues that as AI systems become more capable and more autonomous, there will be greater need for techniques that reveal hidden reasoning or expose early signs of misalignment. Confessions, even in their early form, are presented as one way to improve visibility of behaviour that would otherwise remain obscured.

What Does This Mean For Your Business?

The findings appear to suggest that confession based reporting could become a useful transparency tool rather than a guarantee of safe behaviour. The method exposes what a model believes it did, which offers a way for developers and auditors to understand errors that would otherwise remain hidden. This makes it easier to trace how an output was produced and to identify the points where training signals pulled the model in an unintended direction.

There are also some practical implications for organisations that rely on AI systems, particularly those in regulated sectors. UK businesses that must demonstrate accountability for automated decisions may benefit from structured explanations that help build an audit trail. Confessions could support internal governance processes by flagging moments where a model was uncertain or believed it had not met an instruction, which may help risk and compliance teams decide when human intervention is needed. This will matter as firms increase their use of AI in areas such as customer service, data analysis and operational support.

Developers and safety researchers are also likely to see value in the technique. For example, confessions provide an additional signal when testing models for unwanted behaviour and may help teams identify where shortcuts are likely to appear during training. This also offers a clearer picture of how reward hacking emerges and how different training setups influence the model’s internal incentives.

OpenAI’s framing makes it clear that confessions are not a standalone solution but sit within a larger body of work aimed at improving transparency and oversight as models become more capable. The early results show that the method can surface behaviour that might otherwise go undetected, although it remains reliant on careful interpretation and still produces mistakes. The wider relevance is that it gives researchers, businesses and policymakers another mechanism for assessing whether a system is behaving as intended, which becomes increasingly important as AI tools are deployed in higher stakes environments.

Bank of England Warns AI Valuations Could Trigger a Sharp Market Correction

The Bank of England has warned that the rapid rise in artificial intelligence focused technology stocks has created clear financial stability risks and could lead to a sharp correction in global markets.

AI Valuations Reach Their Most Stretched Levels In Years

The Bank’s latest Financial Stability Report says equity valuations linked to AI are now “particularly stretched”, with US technology firms approaching levels last seen before the dotcom bubble and UK valuations close to their most elevated point since the 2008 financial crisis. The Financial Policy Committee points out that a relatively small number of AI oriented firms have driven much of this year’s market gains, which means any reversal could have outsize effects.

Shares in companies such as Nvidia illustrate the scale of the enthusiasm. For example, it has become one of the world’s most valuable firms (a $5 trillion valuation!) as demand for its AI chips has surged, lifting its share price by more than 30 per cent this year alone following a period of even steeper growth through 2023 and 2024. The Bank notes that this rapid rise reflects real earnings strength, although it also concentrates a significant amount of market value in a handful of firms.

Not Quite Like The 90s

Andrew Bailey, the Bank’s governor, has stressed that today’s large AI firms aren’t comparable to the loss making companies of the late 1990s because they produce strong cash flows and clear commercial demand exists for their products. Bailey added, however, that this does not guarantee stability, especially as competition intensifies. His view is that AI could well become a general purpose technology capable of raising productivity, although valuations can still run far ahead of fundamentals.

A Five Trillion Dollar Infrastructure Spend

One of the most significant risks highlighted in the report is the scale and structure of investment required to support AI development. Industry estimates shared in the document suggest AI infrastructure spending over the next five years could exceed an eye-watering $5 trillion!

The Bank says that while the largest technology firms will fund much of this through their operating cash flows, around half of the total is expected to come from external financing. Debt markets, rather than equity markets, are likely to play the largest role. This includes corporate bond issuance, loans from global banks and lending from the rapidly expanding private credit sector, which exists largely outside traditional regulatory frameworks.

Growing Reliance on Borrowing

The growing reliance on borrowing matters because it creates deeper links between AI firms and the wider financial system. As the Financial Policy Committee warns, this means that if a sharp drop in valuations were to occur, losses on lending could quickly spread beyond the AI sector and place pressure on banks, credit funds and institutional investors. It also notes the increasingly interconnected nature of AI supply chains, which involve multibillion dollar partnerships across cloud providers, chip manufacturers and data centre operators.

Similar International Warnings

It should be noted here that it’s not just the Bank of England that is concerned. International organisations including the IMF and OECD have issued similar assessments this year. Both have pointed to high asset prices driven by optimism about AI related earnings and have warned of the risk of abrupt downward adjustments if expectations weaken. Senior industry figures such as JP Morgan chief executive Jamie Dimon have also expressed concern about market complacency and the possibility of a significant correction.

Why This Is Not A Simple Repeat Of The Dotcom Era

In its report, the Bank goes to some lengths to distinguish current conditions from the late 1990s bubble. Crucially, many AI firms today have established revenue streams and profitable operations and their valuations are based on substantial real world demand for cloud computing, data processing and AI model development.

Scale and Leverage Are The Real Risks Today

The risk instead comes from concentration, scale and leverage. For example, market value is increasingly concentrated in a small group of companies whose performance influences global stock indices, pension funds and retail investment products. At the same time, large amounts of borrowing are now tied to long term AI infrastructure projects that depend on continued investor confidence. These dynamics are different from the dotcom era yet present their own vulnerabilities.

Exposure For UK Savers And Pension Funds

The Bank has also made it clear that the UK is not insulated from an AI related correction. Many UK pension funds hold global equity portfolios where AI leaders now account for a significant share of total value. A fall in these stocks would flow through to savers’ pension pots and stocks and shares ISAs.

This has become more relevant following policy moves encouraging savers to invest more heavily in equities. The Bank’s report notes that a broad market decline could reduce household wealth, lower consumption and place additional pressure on the economy at a time when higher mortgage costs are still filtering through. Approximately 3.9 million UK mortgage holders are expected to refinance at higher rates by 2028, although a third may see payments fall as rates stabilise.

UK Banks Pass Stress Tests As Other Risks Grow

The Bank’s stress tests indicate that, thankfully, major UK lenders are resilient enough to withstand a severe downturn that includes higher unemployment, falling house prices and significant market turbulence. This resilience has led the Financial Policy Committee to propose lowering Tier 1 capital requirements from 14 per cent to 13 per cent from 2027, while still leaving banks with an estimated £60 billion buffer above minimum levels.

However, it seems that other parts of the financial system pose greater concerns. For example, the report highlights growing leverage in the UK gilt market, where international hedge funds have been borrowing heavily against their government bond holdings. The Bank warns that forced deleveraging in a downturn could amplify movements in gilt yields and push up government borrowing costs.

It also points to wider global pressures, including geopolitical tensions, cyber threats and rising sovereign debt burdens, which have created a more fragile international financial environment. These risks add further uncertainty to an already stretched market landscape shaped by the rapid growth of AI.

The Message

The key message from the Bank is really not that AI should be viewed with scepticism as a technology. For example, the report recognises that AI could deliver meaningful productivity gains and long term economic benefits. Its warning instead focuses on how the financial side of the AI boom has evolved and the vulnerabilities that could emerge if valuations adjust sharply.

UK businesses that rely on bank lending or capital markets may face more volatile financing conditions if a correction ripples across global markets. Credit channels linked to technology investment could tighten and firms with higher borrowing needs may encounter more expensive or more selective lending.

The Bank is, therefore, encouraging investors, lenders and corporate leaders to prepare for a period where AI continues to expand as a technology while financial markets remain sensitive to any signs that expectations have become overextended.

What Does This Mean For Your Business?

The central point in the Bank’s warning is really the need to separate enthusiasm for AI as a technology from the financial risks created by how the sector is currently being funded and valued. The report makes it clear that AI can still deliver major economic benefits while markets face periods of sharp adjustment, and those two realities can sit side by side. This places investors, policymakers and companies in a position where they must be ready for genuine technological progress and heightened financial volatility at the same time.

For UK businesses, the implications are already taking shape. For example, firms that depend on access to credit may find that lending conditions react quickly to any downturn in global tech markets, especially as a sizeable share of AI expansion is being financed through debt that links the sector more tightly to banks and private credit funds. Companies planning large technology upgrades or long term capital programmes may also need to consider how external shocks could affect borrowing costs or investment appetite. The same applies to institutional investors, pension schemes and retail savers whose portfolios are increasingly influenced by the performance of a small group of global AI firms.

This backdrop also gives the UK’s financial regulators little room for complacency. The resilience shown in bank stress tests is reassuring, although the vulnerabilities identified in areas such as leveraged gilt trading and private credit activity underline how market tensions could surface outside the traditional banking system. The combination of elevated geopolitical risk, cyber threats and fragile sovereign debt conditions reinforces the picture of a more complex and interconnected risk environment.

The Bank’s assessment, therefore, seems to lean heavily towards caution without dismissing the long term potential of AI. It is basically signalling that stakeholders should not assume current valuations will hold indefinitely and that preparation for a rapid repricing is now a matter of prudence rather than pessimism. UK businesses, financial institutions and savers all have a direct interest in how well those preparations are made, particularly as the effects of any correction would extend far beyond the technology sector itself.

Amazon Tests 30 Minute Deliveries

Amazon is piloting a new ultra fast delivery service that brings household essentials and fresh groceries to customers in parts of Seattle and Philadelphia in about 30 minutes or less.

‘Amazon Now’ And What It Offers

‘Amazon Now’ is a new delivery option built directly into the main Amazon app and website. Customers in eligible neighbourhoods will see a “30 Minute Delivery” tab in the navigation bar, which opens a catalogue of items available for immediate dispatch. The pilot scheme covers thousands of products that customers often need urgently, such as milk, eggs, fresh produce, toothpaste, cosmetics, pet treats, nappies, paper products, over the counter medicines, electronics and seasonal goods. Everyday snacks like crisps and dips are included too, reflecting the impulse led nature of the service.

Ultra-Fast Delivery

Amazon describes it as “an ultra fast delivery offering of the items customers want and need most urgently”, and says its aim is to get essentials to the doorstep in about 30 minutes or less. Customers can place an order, track the driver in real time and add a tip within the app, mirroring the experience already familiar from food delivery platforms.

Where The Pilot Is Running

The rollout is currently limited to parts of Seattle, where Amazon is headquartered, and parts of Philadelphia in the US. Amazon has not confirmed how many neighbourhoods are covered or how long the test will run, and there is no stated timetable for expansion to other US cities. The company is referring to this phase as a trial, making it clear that the results will shape future decisions.

Was Even Faster in the United Arab Emirates in October

This US pilot follows an ultra fast launch in the United Arab Emirates in October, where Amazon introduced a 15 minute delivery service using micro facilities in local communities. Some customers in the UAE reportedly received their orders in as little as six minutes, showing the company’s willingness to push the limits of rapid fulfilment.

How The 30 Minute Model Works

As you may expect, it seems that hitting a 30 minute delivery window (delivering groceries as fast as a pizza) requires a tightly controlled operation. For example, Amazon says it is using “specialised smaller facilities designed for efficient order fulfilment”, located very close to where customers in both cities live and work. These sites stock a limited but high demand range of items and are built for fast picking, packing and dispatch.

Also, delivery is handled by partners and gig workers who use the Amazon Flex system. Reports from early usage suggest that drivers must leave within a few minutes of receiving an order notification to stay within the promised window. The entire model relies on short travel distances, real time routing, and a fulfilment process that is optimised for speed rather than breadth of inventory.

No Need For Additional Downloads

Since Amazon Now is part of the main shopping app, customers do not need to download anything new or switch services. Once they enter their postcode, the app confirms eligibility and displays the 30 minute catalogue. The experience is intentionally streamlined to minimise delay between ordering and dispatch.

How Much Does It Cost?

Amazon Now is not included in Prime’s standard free delivery benefits. Instead, Prime members in the pilot areas can access 30 minute delivery from $3.99 per order. Non Prime customers pay $13.99.

A small basket fee of $1.99 applies to orders under $15, which aims to discourage very low value purchases that may be expensive to deliver at ultra fast speeds. This aligns with pricing strategies already used by food and grocery delivery platforms.
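Putting the reported pilot pricing together, the charge a customer faces can be sketched as a simple rule (an illustrative calculation only; the function name and rounding are assumptions, not Amazon's actual billing logic):

```python
def amazon_now_fee(order_total: float, is_prime: bool) -> float:
    """Estimate the Amazon Now delivery charge under the reported
    pilot pricing: $3.99 per order for Prime members, $13.99 for
    non-Prime customers, plus a $1.99 small basket fee under $15."""
    delivery_fee = 3.99 if is_prime else 13.99
    small_basket_fee = 1.99 if order_total < 15.00 else 0.0
    return round(delivery_fee + small_basket_fee, 2)

# A Prime member ordering $12 of groceries pays both fees:
print(amazon_now_fee(12.00, is_prime=True))   # → 5.98
# A non-Prime customer above the $15 threshold pays delivery only:
print(amazon_now_fee(20.00, is_prime=False))  # → 13.99
```

The small basket fee effectively nudges customers toward consolidating purchases, the same mechanism food delivery platforms use to avoid unprofitable micro-orders.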

It’s An Optional Premium Service

Prime members continue to receive same day, overnight and next day delivery at no additional cost once order thresholds are met, so Amazon Now is essentially positioned as an optional premium service rather than a replacement for existing benefits.

Why Is Amazon Doing This Now?

Amazon Now is designed to fit into the company’s wider logistics expansion programme. In mid 2025, Amazon announced that it planned to invest more than $4 billion to triple the scale of its delivery network by 2026. This included growing its network of same day facilities and reorganising the entire US fulfilment system around regional hubs. The changes have already reduced average delivery times and increased the proportion of orders arriving the same or next day.

Ultra fast delivery, therefore, marks the next key stage of this strategy. Amazon’s key competitors such as DoorDash, Uber Eats and Instacart already fulfil convenience and grocery orders within an hour, often by picking from local supermarkets. Amazon’s model differs because the inventory is held in its own small facilities, giving the company much tighter control over stock levels, availability and timing.

The new pilot also builds on Amazon’s earlier experiments. For example, the company launched Prime Now in 2014, offering two hour deliveries, then closed the standalone app in 2021 when it folded the service into the main shopping app. Amazon Now is, in effect, a new iteration of that idea, but designed for a world where rapid delivery is becoming mainstream.

Impact On Competitors And The Market

The initial announcement had an immediate market impact. For example, shares in Instacart fell by more than 2 per cent and DoorDash also dipped after the news broke, reflecting investor concern that Amazon may apply the same scale and pricing power to rapid grocery delivery that it previously applied to next day fulfilment. Analysts noted that Amazon’s growing interest in this category could put pressure on existing quick commerce players whose business models often rely on high fees and narrow margins.

Walmart is also part of the competitive picture. The retailer already offers rapid grocery delivery to most US households and benefits from its extensive store network. Industry studies suggest that a large proportion of customers are prepared to pay for fast grocery deliveries, highlighting the strength of demand in this category. Amazon’s pilot will therefore be watched closely by rivals in grocery, convenience and last mile logistics.

Customers And Businesses

For customers in Seattle and Philadelphia, the immediate benefit is convenience. For example, items that once required a trip to a local shop can now be delivered in half an hour, which is faster than typical takeaway delivery times in many parts of the United States. Ultra fast delivery may appeal especially to busy households, parents, pet owners and customers dealing with last minute needs such as forgotten ingredients or essentials.

For businesses, the implications extend beyond retail. FMCG manufacturers and brand owners may now see opportunities to position products within the ultra fast catalogue or to experiment with smaller pack sizes designed specifically for rapid missions. Also, marketing strategies could evolve as Amazon gains new data on urgent purchases and browsing patterns inside the 30 minute section of the app.

Local supermarkets and smaller delivery start ups may face stronger competition if Amazon expands the model. Since Amazon controls both the inventory and the logistics, it may be able to keep prices lower than rivals that rely on third party shops and couriers.

Challenges And Criticisms

It should be noted here that this ultra fast delivery is expensive to run, and analysts have warned that these models can suffer from high operating costs. For example, faster delivery windows require more staff, more micro facilities, more inventory and more vehicles on the road. This can make profitability difficult, especially when customers expect low delivery fees.

There are labour concerns too. Gig workers may face higher pressure when delivery windows are tight, and campaigners are likely to watch how Amazon balances speed with driver wellbeing and safety. Amazon emphasises that its specialised facilities improve safety for staff picking and packing orders, but questions remain around the wider impact on drivers and delivery partners.

Sustainability is another factor to consider. For example, Amazon argues that micro facilities positioned close to customers reduce the distance and emissions associated with deliveries. However, critics point out that ultra fast services may increase the total number of delivery trips and create more packaging waste, particularly for small orders.

There is also a wider cultural debate about the need for extreme immediacy in everyday shopping. Some commentators have questioned whether deliveries arriving in minutes encourage unnecessary consumption or reinforce habits built around convenience over planning.

What Does This Mean For Your Business?

The Amazon Now pilot highlights how far the rapid delivery market has evolved and why Amazon is investing heavily in this area. The company is using its scale and financial strength, which matters because ultra fast delivery is expensive to run, to test whether it can become a core part of mainstream retail rather than a niche convenience service. The approach brings clear advantages for customers who value immediacy and for Amazon, which gains more control over high demand categories and more insight into urgent purchase behaviour. It also places new pressure on competitors that rely on partnerships with local supermarkets rather than owning their fulfilment process from end to end.

There are still unanswered questions about sustainability, labour practices and long term profitability. Ultra fast delivery needs dense networks of sites, reliable staffing and strong demand at a price customers are willing to pay. These pressures are not limited to the United States and will be watched closely by UK retailers, logistics firms and brands that already operate in a market where fast delivery has become an expectation. UK businesses may find themselves adapting product ranges, marketing tactics or supply chain plans if similar models expand internationally, especially in urban areas where rapid fulfilment could reshape local competition and customer expectations.

The wider impact on city infrastructure, emissions and working conditions will also remain part of the discussion. Everyone from delivery partners to sustainability groups is likely to want assurances that speed does not undermine safety or environmental commitments. The success of the model, therefore, will ultimately depend on whether Amazon can balance convenience with operational, ethical and financial realities while proving that ultra fast fulfilment can scale without intensifying existing challenges.

Each week we bring you the latest tech news and tips that may relate to your business, rewritten in a jargon-free style. 
