World’s First $5 Trillion Company
Nvidia has become the first company in history to reach a $5 trillion market valuation, as investors bet that demand for its artificial intelligence chips will continue at record levels across global industries.
Who Is Nvidia? Why Does It Matter?
Nvidia began in the 1990s as a designer of graphics processing units (GPUs) for gaming computers. Those chips were originally built to render visuals quickly, but their architecture also made them ideal for performing many calculations at once, a capability that later proved critical for artificial intelligence (AI).
Over the past decade, Nvidia has moved far beyond gaming. For example, its GPUs now underpin most of the world’s AI infrastructure, powering data centres that train and run large language models such as ChatGPT. In fact, analysts estimate that Nvidia now controls more than 80 per cent of the global AI chip market.
The Advantage of CUDA
The company also supplies networking technology, entire server systems, and software platforms like CUDA that help developers build AI applications specifically for Nvidia hardware. CUDA, introduced in 2006, is one of Nvidia’s biggest competitive advantages. For example, once a business or research organisation builds its AI workloads on CUDA, it becomes difficult to switch to a rival chipmaker without rewriting large amounts of code (a high barrier to exit). This has created an ecosystem that ties thousands of AI start-ups, cloud providers, and universities to Nvidia’s hardware roadmap.
What Just Happened To Nvidia’s Valuation?
On 29 October, Nvidia’s share price rose more than 5 per cent in a single day to more than $212, lifting its total market capitalisation above $5 trillion. The company had reached $1 trillion only in June 2023 and $4 trillion just three months ago, marking an extraordinary rate of growth – even by technology sector standards!
The immediate catalyst was a string of announcements that reinforced investor confidence in Nvidia’s long-term dominance. For example, chief executive Jensen Huang told analysts that the company expects about $500 billion in AI chip orders over the next year. He also confirmed that Nvidia is building seven new AI supercomputers for the US government, covering areas such as national security, energy research, and scientific computing. Each of those projects will require thousands of Nvidia GPUs, underscoring the company’s position at the centre of the global AI race.
Investor optimism was also fuelled by geopolitics. For example, US President Donald Trump said he plans to discuss Nvidia’s new Blackwell chips with Chinese President Xi Jinping, raising expectations that Nvidia’s access to the Chinese market will continue. China is the company’s single largest overseas market, despite earlier US restrictions on the export of its most advanced AI chips. Nvidia has since agreed to pay 15 per cent of certain China-related revenues to the US government under a licensing arrangement introduced to manage those controls.
How Nvidia’s Chips Became Essential Infrastructure
Nvidia’s growth has been driven by the simple reality that modern AI systems consume huge amounts of computing power. Every new generation of models requires exponentially more data and processing capacity than the last. As a result, global cloud providers such as Microsoft, Amazon, and Google are spending tens of billions of dollars each quarter building new AI data centres, and almost all of them rely on Nvidia’s GPUs.
Jensen Huang’s long-term strategy has been to sell complete systems rather than individual chips. Nvidia now provides full server racks and networking systems optimised for AI workloads, creating an all-in-one platform that large customers can install and scale rapidly. The company’s H100 and newer Blackwell GPUs are currently considered the industry standard for training and running advanced AI models, including those used in robotics, autonomous vehicles, and scientific research.
Nvidia has also been expanding into telecommunications. Earlier this week, it announced a $1 billion investment in Nokia to help develop AI-native 5G Advanced and 6G networks using Nvidia’s computing platforms. The two companies said their goal is to bring artificial intelligence “to every base station” so that future mobile networks can process data and run AI models directly at the edge, reducing latency and improving security.
The Wider Tech Market
Nvidia’s $5 trillion valuation means it now sits ahead of both Apple and Microsoft, which have each passed $4 trillion. Its rise has also helped lift the broader US stock market to record highs, with AI-related firms accounting for around 80 per cent of gains in major indices this year.
The scale of spending linked to Nvidia is immense. For example, Microsoft reported capital expenditure of more than $35 billion in its last quarter, largely on AI infrastructure, and OpenAI recently confirmed that Nvidia will provide 10 gigawatts of computing power to support its future models. Oracle, Amazon, and Meta have all signed multibillion-dollar supply deals.
Self-Reinforcing Loop
This cycle creates what analysts call a self-reinforcing loop. Tech firms buy Nvidia systems to build AI products; those products demonstrate rapid adoption and revenue potential, which pushes investor optimism even higher and allows Nvidia to invest more heavily in its next generation of hardware, which in turn becomes the new industry standard.
It seems that competitors are now struggling to catch up with Nvidia. AMD, Intel, and several start-ups are developing rival chips, while Google and Amazon are designing in-house AI accelerators to reduce their reliance on Nvidia. Governments have also entered the race: China is backing domestic chipmakers to reduce dependence on US technology, and the European Union is investing heavily in semiconductor manufacturing capacity to avoid supply chain vulnerability.
The Impact On Governments, Businesses, And The Global Economy
Nvidia’s technology is now viewed as part of national infrastructure. For example, the seven US supercomputers being built with Nvidia hardware are intended to strengthen capabilities in defence, climate modelling, and scientific innovation. Access to leading-edge compute power has become a matter of strategic importance, with governments treating it as they once treated oil or rare earth metals.
For telecommunications providers, Nvidia’s partnership with Nokia signals a broader shift toward AI-driven networks that can manage themselves, predict faults, and run advanced analytics at the edge. Industry analysts at Omdia estimate that the market for AI-assisted radio access networks could exceed $200 billion by 2030.
For enterprise customers, the issue is essentially access and cost. For example, Nvidia’s GPUs remain scarce and expensive, and the waiting time for high-end systems can stretch into months. It is that scarcity that gives Nvidia immense pricing power and influence over who can deploy large-scale AI models. Businesses looking to integrate AI into their operations often find themselves competing with global tech giants for limited GPU supply, which can delay projects and inflate costs.
The company’s scale also affects financial markets. At $5 trillion, Nvidia’s value now exceeds the total value of every country’s stock market except those of the United States, China, and Japan. Its shares are held across pension funds and index trackers, meaning even small fluctuations in its price can move major global indices.
Growing Concerns About An AI Bubble
However, such rapid growth has led to mounting warnings about a potential AI-driven market bubble. The Bank of England, the International Monetary Fund, and several investment banks have all cautioned that valuations could fall sharply if the expected returns from AI adoption do not arrive quickly enough.
Analysts also point to what some describe as “financial engineering” in the AI sector, where companies invest in one another to sustain rising valuations. Nvidia has said it plans to invest up to $100 billion in OpenAI over the coming years, with both companies committing to deploy vast amounts of Nvidia hardware to power OpenAI’s future systems. Critics say such arrangements blur the line between commercial demand and strategic co-investment.
Tech Revolution Rather Than Speculative Excess?
It is also worth noting here that some market analysts argue that Nvidia’s growth reflects a genuine technological revolution rather than just speculative excess. Firms such as Ark Invest have suggested that AI remains at an early stage of development and that valuations could still have room to grow, even if a short-term correction occurs. That said, others, including analysts at AJ Bell, have pointed out that Nvidia’s valuation is almost beyond comprehension and likely to intensify debate over an AI bubble, although investors so far appear undeterred.
Trade Policy
Trade policy remains another risk worth mentioning here. For example, Nvidia’s share price briefly dipped in April when markets were shaken by renewed US-China tensions. Although President Trump has since reversed previous restrictions on advanced chip exports, the company’s reliance on Chinese demand makes it vulnerable to future policy changes. Beijing has been promoting local chipmakers and has already ordered state-linked companies to limit purchases of certain Nvidia models designed for the Chinese market.
For now, Nvidia’s impressive momentum looks set to continue. Its share price has risen more than 50 per cent since January, and its influence now extends across sectors from telecommunications to healthcare. Whether that momentum proves sustainable depends not just on how fast AI technology evolves, but on how far global investors are willing to believe in the story of an endless AI boom.
What Does This Mean For Your Business?
Nvidia’s extraordinary valuation is essentially a testament to the transformative potential of AI and a reminder of how concentrated that power has become. The company’s GPUs have effectively become the global standard for AI development, thereby embedding Nvidia deeply into the digital and economic infrastructure of almost every major industry. Yet its dominance also raises questions about long-term sustainability, supply resilience, and how much value creation is tied to speculation rather than fundamentals.
For investors and policymakers, the immediate concern is whether Nvidia’s growth reflects a permanent technological shift or a cycle of exuberance similar to previous tech booms. Central banks have already warned that such concentrated value could expose markets to sudden corrections if AI returns fall short of expectations. Nvidia’s role as both a supplier and investor across the AI ecosystem reinforces its strategic position, but also magnifies systemic risk if demand slows or policy barriers tighten.
For UK businesses, the implications are pretty significant. For example, access to Nvidia’s computing power underpins much of the AI capability now being adopted in sectors such as finance, logistics, healthcare, and manufacturing. As competition for GPUs remains intense, smaller firms may find themselves priced out or forced into cloud-based AI services that depend heavily on US infrastructure. This could deepen reliance on overseas providers unless the UK accelerates investment in domestic compute resources and training. The opportunity for innovation is vast, but so too is the risk of falling behind if access remains limited to global players.
At the same time, the broader technology ecosystem continues to adapt around Nvidia’s dominance. Competitors, governments, and research institutions are all pushing to develop alternative chips and software frameworks to reduce dependency. Whether those efforts succeed will determine how balanced the AI hardware market becomes over the next decade. For now, Nvidia’s scale gives it enormous influence over pricing, research priorities, and even the pace at which AI advances reach commercial use.
What happens next is likely to depend on whether real-world applications catch up with investor enthusiasm. If AI continues to deliver tangible productivity gains across sectors, Nvidia’s valuation may yet prove justified. If expectations cool, the company’s rise could be remembered as the peak of an era when optimism about artificial intelligence reshaped not only technology, but the structure of the global economy itself.
Brands Pay To Be Recommended By AI, Not Google
The Prompting Company has raised $6.5 million to help businesses get mentioned in AI-generated answers from tools like ChatGPT, Gemini, and Claude, signalling a major shift in how people now discover products online.
Who Is The Prompting Company?
The Prompting Company is a young, Y Combinator-backed startup (Y Combinator is a Silicon Valley startup accelerator) that wants to redefine online marketing for the age of artificial intelligence. Founded just four months ago by Kevin Chandra, Michelle Marcelline, and Albert Purnama, the company specialises in what it calls Generative Engine Optimisation, or GEO. The idea is that as people increasingly ask AI tools for advice instead of searching Google, brands must learn how to make their products visible to these systems.
The three founders, all originally from Indonesia, previously built Typedream, an AI-assisted website builder later acquired by Beehiiv, and Cotter, a passwordless authentication service bought by Stytch. Their latest venture reflects what many see as a fundamental turning point in digital discovery: AI assistants are becoming the new gateway to information, and by extension, to products and services.
Client List
The company’s early client list already includes Rippling (an HR and payroll software platform), Rho (a corporate banking and spend management platform), Motion (an AI-powered productivity and scheduling tool), Fondo (a tax automation platform for startups), Kernel (a data and machine learning infrastructure company), Traceloop (a developer observability platform), and Vapi (an AI voice agent platform), along with one unnamed Fortune 10 business.
What The Prompting Company Does
The startup’s service is built around a relatively simple process. First, it identifies the kinds of questions AI systems are being asked in a particular market. Rather than focusing on short search keywords like “best business bank account”, GEO looks for longer, more contextual prompts such as “I’ve just set up a small company, what’s the best business account with no monthly fees?”
Once those queries are identified, The Prompting Company creates structured, machine-readable content that directly answers them. These AI-optimised pages strip away human-facing clutter like pop-ups, menus, and marketing slogans. Instead, they present clean, factual information written in a format that large language models can easily interpret and reference. The company then automatically routes AI crawlers to these pages instead of the brand’s normal website.
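To make that routing idea concrete, here is a minimal, illustrative sketch of how a website might detect known AI crawlers by their user-agent strings and serve them a clean, structured answer instead of the normal marketing page. It is not The Prompting Company's actual system: it assumes a Node.js server running Express, the crawler list is only a sample, and the route and "Example Bank" details are placeholders.

```typescript
// Illustrative sketch only: user-agent based routing for AI crawlers.
// Assumes a Node.js project with Express installed.
import express, { Request, Response } from "express";

const app = express();

// Sample substrings for known AI crawler user agents. A real deployment would
// maintain and verify a fuller, regularly updated list.
const AI_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot"];

function isAiCrawler(userAgent: string): boolean {
  return AI_CRAWLERS.some((bot) => userAgent.includes(bot));
}

// AI crawlers get a clean, structured, factual answer; humans get the normal site.
app.get("/business-banking", (req: Request, res: Response) => {
  if (isAiCrawler(req.get("user-agent") ?? "")) {
    // Plain, machine-readable content answering a long conversational prompt,
    // e.g. "best business account with no monthly fees for a new company".
    res.json({
      question: "Best business account with no monthly fees for a new company?",
      answer:
        "Example Bank's starter account has no monthly fee and includes invoicing tools.", // placeholder brand
      lastUpdated: "2025-10-01",
    });
  } else {
    res.send("<html><body>Normal marketing page with menus, pop-ups and banners.</body></html>");
  }
});

app.listen(3000);
```

In practice, a GEO provider would generate such structured pages at scale, one for each conversational prompt it has identified, rather than hand-writing a single route as above.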
In short, it is search engine optimisation for AI rather than for humans. Its goal is to make brands “the product cited by ChatGPT”, as its own website puts it. The service operates on a subscription model, starting from $99 per month for basic tracking of 25 prompts and rising to enterprise plans with custom integrations and support.
The Trend
The Prompting Company’s entire business seems to rest on a simple observation that people are no longer starting their product searches on Google. They are asking AI assistants instead.
For example, Adobe’s 2025 Digital Economy Report found that US traffic from generative AI tools surged 4,700 per cent in a single year, with 38 per cent of consumers saying they had already used AI for shopping. Of those, 73 per cent said AI had now become their main tool for product research. The same report showed that visitors coming via AI assistants stayed 32 per cent longer on sites, viewed more pages, and were 27 per cent less likely to leave immediately. In other words, more shoppers are turning to generative AI tools to find deals, research products, and make buying decisions.
These changes suggest that AI assistants are now beginning to perform the filtering role that search engines once did. Instead of scrolling through links, users receive an instant shortlist of relevant products, often only two or three names. Being one of those names, therefore, has obvious commercial value.
Why Investors Are Paying Attention
That value explains why investors have been so quick to back The Prompting Company. The $6.5 million seed round, led by Peak XV Partners and Base10 with participation from Y Combinator and others, reflects growing belief that the next phase of digital advertising will take place inside AI assistants.
For investors, the logic is pretty straightforward. Whoever shapes how AI tools make product recommendations will control the top of the sales funnel for entire industries. Traditional search and pay-per-click advertising rely on visible results and bids for keywords. In AI-driven discovery, there may be no visible results page at all. An assistant could simply say, “You should try Rho for business banking,” and the conversation ends there.
That urgency among brands is reflected in the company’s own analysis, which suggests that much of the recent growth in website traffic is now coming from AI bots rather than human visitors. The founders say that developers are already using AI tools to ask for product recommendations inside their workflows, and they believe that ordinary consumers are beginning to do the same.
What Will The Funding Be Used For?
The startup says it will use the $6.5 million to scale its platform, develop AI-facing website templates for customers, and expand partnerships with major AI providers. It is also collaborating with Nvidia on “next-generation AI search”, though the details of that project have not been disclosed.
The company currently claims to host around half a million AI-optimised pages and to be driving double-digit millions of monthly visits for clients. Its customers span fintech, developer tools, and enterprise software, but the founders say the model applies to any sector where customers ask detailed, conversational questions.
The Lead In A New Sector
The funding gives The Prompting Company a clear lead in what could soon be a major new marketing sector. By positioning itself as the first dedicated GEO platform, it is creating a new type of infrastructure for online visibility. The company argues that the fastest-growing “users” of the internet today are AI agents, not humans, and that brands need to design websites for those agents first.
It also aims to make GEO repeatable and data-driven, similar to how SEO matured into an industry over the past two decades. The difference is that in AI discovery, results are generated dynamically rather than ranked on a static page, meaning brands will need constant updates to stay visible.
Competitors
The rise of GEO is highly likely to unsettle traditional SEO agencies and digital advertisers. The overlap between Google search results and AI recommendations is shrinking, with some analyses suggesting it has dropped from around 70 per cent to below 20 per cent. That means a brand ranking first on Google might not even appear in an AI assistant’s answer.
Agencies built around keyword bidding and link optimisation now face the challenge of learning how to influence AI-generated answers, which rely on context and relevance rather than metadata and backlinks. This transition could change how marketing budgets are allocated, with more money flowing towards GEO-style services.
AI Companies
For companies like OpenAI, Google, Anthropic, and Meta, this trend could be an opportunity as well as a risk. On the one hand, AI-driven shopping and product discovery could open new sources of revenue, especially as assistants move beyond recommending items to actually completing purchases. OpenAI’s recent integration with Stripe, for example, already allows ChatGPT to handle some transactions directly.
On the other hand, questions around bias and commercial influence are inevitable. For example, if AI assistants begin recommending brands that have paid for optimisation or have supplied AI-friendly data, users may expect clear disclosure of those relationships. Transparency will become crucial as assistants start to resemble personal shoppers or product curators.
There are also technical implications to consider here. GEO depends on AI models being able to browse the open web and ingest structured content. ChatGPT, Gemini, and Perplexity can already do this, but others, such as Anthropic’s Claude, have been more limited. This could lead to a divided ecosystem with some assistants open to optimisation, while others keep recommendations strictly in-house.
Businesses And Advertisers
For businesses, the important message is that appearing in AI-generated answers may soon matter as much as appearing on page one of Google once did. The Prompting Company claims its system allows even small or new brands to compete by creating high-quality, context-aware content that AI tools are more likely to cite.
Early signs suggest that AI-driven traffic, while smaller in volume than search, may be higher in quality. For example, Adobe’s data shows that visitors arriving from AI recommendations tend to stay longer and are more focused on buying decisions. They also use AI most often for complex or big-ticket purchases, where research matters more than impulse.
For advertisers, however, it also poses new questions. How do you measure success when a chatbot’s conversation, not a click, triggers a purchase? How do you influence visibility in an algorithm that changes with every prompt? And how do you maintain brand trust when recommendations are made by machines rather than people?
Challenges And Criticisms
As with any fast-moving technology, the rise of generative engine optimisation (GEO) raises a number of ethical and practical questions for both businesses and consumers.
The first challenge is transparency. For example, if brands start paying to be mentioned by AI, users must be able to tell whether a recommendation is organic or commercially influenced. Regulators could extend existing advertising disclosure rules to AI assistants, just as they have done with influencer marketing.
Bias is also a key issue to consider. AI systems are only as balanced as the data they are trained on, and introducing commercial optimisation risks amplifying existing inequalities. Studies of AI in retail have already raised concerns about how these systems collect and use customer data, and whether they treat all consumers fairly. Experts have warned that businesses must prioritise transparency, bias testing, and responsible data use if AI-driven commerce is to gain public trust.
Another challenge is attribution. For example, AI traffic still converts at lower rates than traditional search or social referrals, though the gap is narrowing. Marketers can’t yet prove, with precision, that being mentioned in an AI answer directly leads to a sale. Until that attribution problem is solved, investment in GEO may remain experimental for many firms.
Finally, there is the issue of dependence. If AI assistants become the main interface for product discovery, the brands that are mentioned will dominate attention, and those that are not may struggle to be seen at all. For now, The Prompting Company is positioning itself as the bridge between those two realities, betting that businesses will soon have to market to AI agents as actively as they do to people.
What Does This Mean For Your Business?
If GEO takes hold in the way its backers expect, the structure of online discovery could change faster than many realise. For UK businesses in particular, this means rethinking how visibility is achieved and measured. Instead of fighting for Google rankings or paying for search ads, companies may soon need to consider whether their products can be understood, cited, and recommended by AI systems that are shaping what customers see first. That shift could favour agile firms that adopt AI-ready content strategies early, while leaving slower competitors struggling to appear in the new recommendation landscape.
For advertising and marketing industries, GEO could become both a challenge and an opportunity. For example, traditional SEO agencies may need to retrain their focus on machine-readable design, structured data, and conversational context, while media buyers could face a future where there are no clear ad slots to purchase. Instead, visibility might depend on maintaining technical partnerships, feeding accurate data to AI systems, and monitoring how generative models respond to brand information in real time.
AI companies also face growing scrutiny as these practices expand. If assistants begin to behave like digital sales representatives, they will need to explain how and why specific products are recommended. Regulators and consumer watchdogs will expect transparency around paid optimisation, and users will demand the ability to distinguish between genuine relevance and commercial influence. Maintaining public trust will require clear standards, and the companies that set them will likely shape the rules for everyone else.
For investors and innovators, the appeal is pretty obvious. GEO creates a new layer of infrastructure in the digital economy, one that could define how brands reach audiences as AI assistants replace search boxes. However, the broader outcome will depend on how responsibly the model is used. If transparency and fairness are built into the system from the start, AI-powered product discovery could simplify choices for consumers and open new routes to market for smaller firms. If not, it risks becoming another opaque advertising channel that benefits only those able to pay for visibility.
For now, The Prompting Company has positioned itself at the centre of that debate. Its technology reflects a future in which algorithms act as gatekeepers to consumer attention, and its early funding shows how much confidence investors have in that vision. Whether this transforms online marketing or simply adds another layer to it will depend on how quickly businesses, regulators, and AI developers adapt to a world where products must be marketed not only to people but to the machines that speak to them.
Microsoft Accused Of Misleading Over Copilot Prices
Australian regulators have taken Microsoft to court, alleging the company misled around 2.7 million Microsoft 365 users by implying they had to accept a higher-priced AI-powered plan or cancel altogether, while failing to reveal a cheaper alternative that was still available.
What Happened and Why?
The case focuses on Microsoft’s handling of its consumer subscription services, i.e., Microsoft 365 Personal and Family plans, used by millions of households for applications such as Word, Excel, PowerPoint, Outlook and OneDrive. These plans are sold on monthly or annual auto-renewing subscriptions, making them a cornerstone of many users’ digital routines.
Back in October 2024, Microsoft decided to integrate Copilot (its generative AI assistant) into Microsoft 365 Personal and Family subscriptions in Australia. The rollout later expanded worldwide in January 2025. Microsoft described Copilot as a major innovation, offering “AI-powered features” that would “help users unlock their potential”.
However, this integration also triggered quite a sharp price rise. For example, according to the Australian Competition and Consumer Commission (ACCC), the annual Microsoft 365 Personal plan increased from AUD 109 to AUD 159, a rise of 45 per cent, while the Family plan rose from AUD 139 to AUD 179, a 29 per cent increase. Monthly fees also went up.
Only Two Choices Implied
The ACCC says Microsoft notified subscribers through two emails and a blog post, telling them their next renewal would include Copilot and the higher price. The messages reportedly told users that unless they cancelled before renewal, the higher charge would apply automatically.
For example, one such email stated: “Unless you cancel two days before your renewal date, we’ll charge AUD 159.00 including taxes every year. Cancel any time to stop future charges or change how you pay by managing your subscription.”
The regulator now alleges these communications implied users had only two choices, i.e., pay more for Copilot, or cancel their subscription entirely. What the company failed to mention, according to the ACCC, was that there was a third option available, which was switching to what Microsoft called the “Classic” plan.
Classic Plan Is Third Option
Although the Classic plan allowed customers to retain all the features of their existing Microsoft 365 subscription, without Copilot, at the old price, it seems that it was not mentioned in Microsoft’s emails or blog post. Instead, the ACCC says the Classic option only appeared if a customer began the cancellation process, navigating through several screens before the option was revealed.
Given how integral Microsoft 365 has become to home users, e.g., providing essential software and cloud storage, the ACCC argues this created unfair pressure. ACCC chair Gina Cass-Gottlieb said: “The Microsoft Office apps included in 365 subscriptions are essential in many people’s lives, and given there are limited substitutes to the bundled package, cancelling the subscription is a decision many would not make lightly.”
Proceedings Filed Against Microsoft
On 27 October 2025, the ACCC filed proceedings in Australia’s Federal Court against both Microsoft Corporation in the United States and its Australian subsidiary, Microsoft Pty Ltd. The regulator alleges that Microsoft engaged in misleading or deceptive conduct, and made false or misleading representations, in breach of sections 18 and 29 of the Australian Consumer Law.
Specifically, the ACCC says Microsoft falsely represented that users had to accept Copilot to maintain access to their subscription (“Copilot Necessity Representation”), that they had to pay higher prices to continue using Microsoft 365 (“Price Necessity Representation”), and that they only had the two options of accepting the higher price or cancelling (“Options Representation”).
The ACCC claims these representations were false and misleading because the Classic plan was available at the old price, without Copilot, and that by omitting any mention of that plan, Microsoft denied customers the chance to make an informed decision.
Cass-Gottlieb stated: “We will allege in court that Microsoft deliberately omitted reference to the Classic plans in its communications and concealed their existence until after subscribers initiated the cancellation process to increase the number of consumers on more expensive Copilot-integrated plans.” She added: “We believe many Microsoft 365 customers would have opted for the Classic plan had they been aware of all the available options.”
The regulator argues that this omission caused consumers financial harm. Many subscribers, believing they had no alternative, allowed their subscriptions to renew automatically at the higher Copilot rate, and those users, the ACCC says, effectively paid more for something they might not have chosen.
Why It Matters
The case is significant because it highlights how software subscription models are evolving with the introduction of AI features. For example, Microsoft’s integration of Copilot, and the resulting price increases, demonstrates how companies are bundling AI capabilities into established services, but the ACCC argues that this bundling must be transparent and optional.
For consumers, the issue is both financial and procedural. A 45 per cent increase represents a notable cost rise for households relying on Microsoft 365. More importantly, the regulator argues that burying the cheaper Classic plan behind the cancellation flow deprived users of informed consent, which is a key principle in consumer law.
For Microsoft, the allegations really go beyond pricing. For example, the case also touches on interface design and user experience. Regulators are increasingly focused on so-called “dark patterns”, i.e., design choices that nudge users into particular decisions. The ACCC says Microsoft’s renewal flow was structured to steer users towards the more expensive plan by hiding the cheaper one.
For competitors, the case could shape how AI features are rolled out across subscription products. Companies like Google, Apple and Adobe are all integrating AI into consumer and productivity tools. If the court rules that Microsoft’s conduct was misleading, others may need to rethink how they communicate AI upgrades and pricing options.
Cass-Gottlieb said the regulator’s goal is broader than this single case: “All businesses need to provide accurate information about their services and prices. Failure to do so risks breaching the Australian Consumer Law.”
What Happens Next?
The (Australian) Federal Court will now review the ACCC’s evidence, including Microsoft’s October 2024 blog post and the two key emails sent to subscribers. The regulator is seeking penalties, injunctions, declarations, consumer redress and costs.
If the court finds against Microsoft, penalties could be substantial. For example, under Australian law, the maximum fine for each breach is the greater of AUD 50 million, three times the value of any benefits obtained, or 30 per cent of the company’s adjusted turnover during the breach period. The ACCC has signalled that it will seek a significant penalty, citing the number of affected consumers and the scale of the alleged conduct.
Microsoft – Reviewing The Claims
Microsoft has said it is reviewing the claims, adding that “consumer trust and transparency are top priorities” and that it intends to work constructively with the ACCC. The company has not yet filed a detailed defence.
Wider Context
The case comes at a time when regulators worldwide are scrutinising how big technology companies integrate AI into their products. Microsoft has made Copilot central to its software strategy, embedding it into Windows, Office and Bing. The integration has been marketed as a major advance, but it has also raised questions about whether AI is being used to justify higher subscription fees.
It’s worth noting here that, earlier in 2025, Microsoft faced separate antitrust scrutiny in Europe, where it agreed to unbundle Teams from Microsoft 365 after competition regulators raised concerns about unfair bundling. The Australian case is different in that it focuses on consumer fairness rather than competition, but both point to a growing willingness among regulators to challenge how Microsoft structures its product offerings.
The proceedings also coincide with a wider policy debate about consumer protection in digital markets. Regulators in the UK, EU and Australia have been warning companies against design choices that obscure cheaper or less data-intensive options. The ACCC’s case against Microsoft is one of the first major tests of these principles in the context of AI subscription pricing.
Challenges and Criticisms
Microsoft’s defence is expected to centre on whether its communications were genuinely misleading. For example, the company may argue that the price rise and integration were communicated transparently and that the Classic plan was a courtesy option, not an advertised tier.
Critics, however, say the case exposes how complex modern subscription models have become. Consumer advocates argue that when essential software like Microsoft 365 becomes tied to expensive AI add-ons, users may have little real choice, particularly if they are steered away from cheaper options through interface design.
The ACCC alleges Microsoft deliberately hid the Classic option to increase uptake of Copilot, describing the concealment as an intentional strategy. It says consumers’ dependence on Microsoft’s software made them more vulnerable to such tactics.
Meanwhile, business users and consumers have voiced frustration online. For example, some told Australian media they were surprised to find higher charges on renewal and were unaware that an alternative existed. Others have reportedly raised concerns that global technology companies may be using AI upgrades as a pretext for universal price increases.
The case is now being closely watched by regulators and consumer organisations worldwide, who see it as an early test of how AI-linked price changes will be governed under consumer law. For Microsoft, the outcome could determine how it promotes future Copilot features, and how transparent it will need to be with millions of subscribers when the next upgrade arrives.
What Does This Mean For Your Business?
If the ACCC’s case succeeds, it could redefine how companies communicate subscription changes and AI integrations worldwide. The issues at stake extend far beyond Microsoft’s customer base in Australia. For example, transparency in pricing, honest representation of product features, and fair presentation of choices are all central to maintaining consumer trust in a digital economy that increasingly runs on subscriptions rather than ownership. The Court’s decision will, therefore, be closely analysed by consumer regulators, legal teams and software firms around the world.
For Microsoft, the financial penalties may be less significant than the reputational and operational consequences. The company’s strategy of embedding Copilot into every tier of its software ecosystem depends on users accepting AI features as a normal, even necessary, part of productivity software. If regulators conclude that the rollout was handled in a way that misled users, Microsoft may need to re-evaluate how it introduces future AI upgrades and how clearly it differentiates between optional and bundled products. Other technology firms will also be watching closely, given that most are following a similar path of building premium AI layers into existing subscriptions.
For UK businesses, the case highlights how global developments in consumer law can have local implications. The Competition and Markets Authority has already warned UK companies about interface design that conceals key information or discourages users from exercising choice. If Microsoft is found to have breached consumer law in Australia, it may prompt British regulators to take a closer look at how AI-driven services are marketed and priced in the UK. It could also encourage businesses that depend on Microsoft 365 to examine their own contracts and renewal processes more carefully, particularly where subscription changes are linked to new technologies or price adjustments.
The broader lesson is that as AI becomes more integrated into software, the boundary between innovation and obligation must remain clear. Consumers need to know when they are paying extra for AI functionality and when they can reasonably decline it. For regulators, the challenge will be to ensure that product evolution does not erode transparency or consumer control. For the tech industry, the message is that trust will be built not only through advanced technology, but through openness about what that technology costs, how it is delivered, and the real choices available to those who use it.
WhatsApp Introduces Passkey-Encrypted Backups
WhatsApp is rolling out passkey-encrypted backups, thereby letting users protect and recover their chat history using their face, fingerprint, or device screen lock instead of remembering a long password or storing a 64-digit recovery key.
A Major Step in WhatsApp’s Encryption Journey
WhatsApp has announced a new feature that allows users to encrypt their chat backups with passkeys rather than relying on passwords or lengthy encryption codes. Passkeys are a form of passwordless authentication that combine something a user has (their phone) with something they are or know (such as biometrics or a screen lock code). According to WhatsApp, this will make end-to-end encrypted backups simpler and safer to use across iOS and Android devices.
Previously
For years, the app’s end-to-end encryption actually only covered live chats and calls. Messages were secure in transit but often less so once stored in cloud backups. Until 2021, backups to iCloud and Google Drive were not encrypted, which meant anyone who gained access to those cloud accounts could potentially read the stored chat history. That year, Meta introduced end-to-end encrypted backups, giving users the option to protect those files using a password or a randomly generated 64-character key. It was a major privacy milestone, but a cumbersome one: if a user lost the password or key, their backup became permanently inaccessible.
No Need to Memorise a Key
WhatsApp’s new passkey approach doesn’t change how backups are encrypted, but it does change how users unlock them. Instead of memorising a key, people can now rely on the same biometric or lock screen verification they already use to access their phone.
Why Passkeys, and Why Now?
In a blog post titled Encrypting Your WhatsApp Chat Backup Just Got Easier, the company explained the rationale behind the move. “Passkeys will allow you to use your fingerprint, face, or screen lock code to encrypt your chat backups instead of having to memorise a password or a cumbersome 64-digit encryption key,” WhatsApp said. “Now, with just a tap or a glance, the same security that protects your personal chats and calls on WhatsApp is applied to your chat backups so they are always safe, accessible and private.”
The move reflects a broader trend in cybersecurity and user experience: while passwords remain the default for most online services, they are increasingly seen as both inconvenient and insecure. Passkeys, built on the FIDO and WebAuthn standards, have been adopted by Apple, Google, and Microsoft as part of the industry-wide transition towards passwordless authentication. WhatsApp’s latest feature extends this approach to backup protection, bringing it in line with these major ecosystems.
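For readers curious about the standard itself, the sketch below shows a generic, browser-based WebAuthn passkey registration, purely to illustrate how a passkey ties a credential to the device's own biometric or screen-lock check. It is not WhatsApp's implementation (the app uses native iOS and Android passkey APIs rather than this web call), and the domain, user details, and function name are placeholders.

```typescript
// Generic WebAuthn passkey registration sketch (browser context, illustrative only).
async function registerPasskey(challengeFromServer: Uint8Array, userId: Uint8Array) {
  const credential = await navigator.credentials.create({
    publicKey: {
      challenge: challengeFromServer, // random bytes issued by the server
      rp: { id: "example.com", name: "Example App" }, // placeholder relying party
      user: { id: userId, name: "user@example.com", displayName: "Example User" },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        authenticatorAttachment: "platform", // use the device's built-in authenticator
        userVerification: "required",        // biometric or screen lock check
        residentKey: "required",             // store a discoverable passkey on the device
      },
    },
  });

  if (!credential) {
    throw new Error("Passkey creation was cancelled or is unsupported on this device");
  }

  // The public key is sent to the server; the private key never leaves the device
  // (or its platform credential manager), which is why a face, fingerprint, or
  // screen lock can stand in for a memorised password or 64-digit key.
  return credential as PublicKeyCredential;
}
```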
Usability is also a central motivation. For example, many users either forgot their encrypted backup password or never enabled the feature at all because of fears they might lose the key. With passkeys, the backup process is far more seamless. The device itself becomes the trusted gatekeeper, using local authentication that the user already understands.
This could also help WhatsApp’s reputation among privacy advocates. The service now has over three billion monthly active users worldwide, and any improvement in accessibility could drive wider adoption of its encryption features.
When?
The company said the rollout will take place “over the coming weeks and months”, meaning not all users will see the new option immediately.
How It Works in Practice
Once available, users can enable passkey-encrypted backups through the app’s settings: Settings → Chats → Chat backup → End-to-end encrypted backup. From there, they can choose to secure their backup using a passkey rather than a password or encryption key.
The difference becomes most apparent when restoring chats to a new device. For example, under the old system, the user needed to type their password or locate their encryption key before WhatsApp could decrypt and restore messages. With passkeys, they simply authenticate using biometrics or a screen lock from their old device, which confirms their identity and decrypts the backup automatically.
This means that a small business owner switching to a new phone can now restore years of client messages and attachments simply by scanning their fingerprint, instead of searching for a forgotten password. It is a small change in process but a significant improvement in ease of use and data recovery.
Why This Matters to UK Businesses
In the UK, WhatsApp is used by millions of professionals as an informal business communication tool. From contractors and consultants to property managers and customer service teams, many rely on WhatsApp to share documents, voice notes, and updates. This has often created a compliance and data protection challenge. Backups stored on cloud platforms without encryption could expose client data if an employee’s personal account were hacked.
By making encrypted backups easier to use, therefore, WhatsApp is now closing one of the remaining security gaps. Businesses that use WhatsApp informally can now encourage staff to enable backup encryption without worrying that forgotten passwords will lock them out of their data. For industries handling sensitive information, e.g., healthcare, construction, and legal services, this makes it simpler to protect communications while maintaining accessibility.
WhatsApp’s focus on usability could also help retain users in the face of competition. For example, rivals such as Signal have long made privacy their main selling point, while enterprise platforms like Microsoft Teams and Slack promote compliance features and centralised data management. Making encrypted backups effortless helps WhatsApp defend its position as both a consumer and small-business communication tool.
Context
The introduction of passkeys for backups also appears to align with Meta’s wider strategy to make encryption a default standard across its messaging platforms. In late 2023, Meta completed the rollout of end-to-end encryption for Messenger and Facebook chats, drawing both praise and criticism from privacy campaigners and regulators. WhatsApp’s latest enhancement, therefore, reinforces that commitment to strong encryption, while also signalling that Meta is aware of usability barriers that have historically held users back.
At the same time, this move may raise new questions for regulators. Governments in the UK, EU, and elsewhere continue to debate how encrypted services fit with lawful access and online safety legislation. If backups are locked behind device-specific passkeys that even Meta cannot access, traditional data requests will yield little beyond metadata such as contact timestamps. That strengthens user privacy but complicates investigations where access to message history has previously depended on unencrypted backups in the cloud.
Potential Challenges and Criticisms
While the update marks another step forward in security and privacy, it is not without its caveats. For example, the security of passkey-encrypted backups depends on the strength of the device lock itself. A weak PIN or an easily accessible biometric can undermine the system. If someone can unlock a user’s phone, they may also be able to restore the encrypted backup. Users are therefore advised to maintain strong device security to benefit fully from the new system.
Recovery is another concern. Unlike a password, a biometric cannot be written down or stored safely elsewhere. That means if a user loses their device and has no other registered one to authorise the restore, they may permanently lose access to their encrypted backup. WhatsApp has confirmed that it will not store recovery copies of encryption keys, maintaining its position that “only you” can access your backup. This reinforces privacy but leaves no route for account recovery if the passkey cannot be used.
The staggered rollout also means adoption will be uneven. Not all users will have access immediately, and device compatibility could differ by region. For organisations using WhatsApp across multiple teams or countries, this might temporarily complicate backup policies or support processes.
There are also some technical limits to consider. For example, the new passkey feature does not address certain underlying encryption vulnerabilities identified by researchers earlier this year, such as weaknesses in WhatsApp’s “prekey” handshake mechanism that could theoretically expose some message metadata under specific conditions. Those findings relate to message exchange rather than backups, but they underline that security in complex systems is never static.
Finally, while this change enhances privacy for individuals, it introduces new complications for organisations that must retain communication records for legal or contractual reasons. Encrypted backups that only employees can decrypt may hinder internal auditing or eDiscovery processes unless alternative data management policies are in place.
WhatsApp’s decision to make passkey-encrypted backups available, therefore, reflects both a technological evolution and a strategic balancing act, i.e., strengthening privacy while trying to keep security practical for billions of users and acceptable to regulators. It reinforces Meta’s message that personal data should remain under user control, but it also leaves open questions about recovery, compliance, and how far convenience can coexist with absolute privacy.
What Does This Mean for Your Business?
WhatsApp’s passkey-encrypted backups close a long-standing gap in its privacy model by uniting strong security with genuine ease of use. The change ensures that users can now protect years of chat history without worrying about lost passwords or unmanageable encryption keys. It also signals Meta’s intent to keep WhatsApp at the forefront of privacy technology while aligning with the global shift toward passwordless authentication across major platforms.
For UK businesses, the update is both an advantage and a challenge. For example, it strengthens protection for sensitive conversations, reducing the risk of data exposure from insecure cloud backups. However, it also places more control in the hands of individual employees, limiting an organisation’s ability to monitor or recover business communications when needed. Firms that use WhatsApp informally for client contact or internal coordination will need to update their data management policies to account for encrypted, user-controlled backups.
Regulators and policymakers are likely to see this as another reminder that end-to-end encryption is now the default expectation rather than a specialist option. While it may complicate lawful access to stored message data, it reflects the direction most major tech companies are taking to meet user privacy demands. For everyday users, the result should be a simpler, more trustworthy backup system that makes security part of the normal experience rather than an optional extra.
The broader lesson here is that encryption can only achieve mass adoption when it becomes invisible to the user. WhatsApp’s move may bring that goal closer, reshaping how individuals, businesses, and governments think about control over digital information in a world where privacy and usability must now coexist.
Company Check: OpenAI Completes Shift Into For-Profit Company
OpenAI has now finished converting itself into a for-profit public benefit corporation, while keeping a mission-led foundation on top, in what may be the most important restructuring so far in the commercial AI race.
Started As Non-Profit
OpenAI was originally founded (back in 2015) as a non-profit research lab with a stated mission to ensure that artificial general intelligence (AGI), i.e., AI that is smarter than humans across a wide range of tasks, benefits all of humanity. The company says that mission still applies and that what has changed is the legal and financial structure used to pursue it.
To give some background, from 2019 onwards, OpenAI began operating a hybrid model, where a for-profit subsidiary sat under the original non-profit parent. That 2019 model capped investor returns and was designed to let OpenAI raise money for large-scale computing without abandoning its public interest mission. The company has now gone further and completed a full recapitalisation.
The new for-profit entity is called OpenAI Group PBC, and it sits under a renamed parent called the OpenAI Foundation, which is still formally a non-profit. A public benefit corporation in US law is a for-profit company that has an explicit social purpose written into its charter and is legally required to consider wider stakeholders, not only shareholders.
Control Through The Foundation
OpenAI says this structure gives it the best of both worlds. For example, the Foundation is meant to act as a mission guardian and still controls the board of the for-profit, while the for-profit can raise capital, issue equity in the normal way, and operate much more like a conventional tech business. The OpenAI Foundation appoints all members of the OpenAI Group board and can remove directors at any time, which is intended to stop the commercial arm drifting away from the stated mission.
How The Ownership Now Looks
The OpenAI Foundation now owns about 26 per cent of OpenAI Group, a stake the company values at around 130 billion dollars, based on a 500 billion dollar valuation for OpenAI. The Foundation has also been given a warrant that could increase its ownership if OpenAI’s valuation climbs dramatically over the next 15 years, which OpenAI says is designed to ensure that the Foundation remains the single largest long-term beneficiary of OpenAI’s success.
Microsoft’s Input
Microsoft, which first partnered with OpenAI in 2019 and has provided tens of billions of dollars’ worth of cash and cloud infrastructure, will now hold roughly 27 per cent of OpenAI Group. That stake is understood to be worth in the region of 135 billion dollars. Microsoft’s total investment to date is believed to be about 13.8 billion dollars and the new deal effectively locks in a near tenfold return on paper.
Employees Have A Stake
The remaining 47 per cent or so will be held by current and former employees and other investors, including large external backers such as SoftBank. OpenAI employees themselves will collectively hold a significant equity position. The company has said publicly that Sam Altman, its co-founder and chief executive, will not personally take an equity stake in the newly restructured business.
A Move Away From The “Capped Profit” Model
Under this new arrangement, all shareholders in OpenAI Group now hold ordinary stock that rises in value if the company grows. That is an important break from the older “capped profit” model, which had limited investor upside to 100 times their investment, sometimes less. Investors had warned for months that those limits made it harder for OpenAI to raise money at the scale needed to compete with rivals such as Google, Meta, and Anthropic.
Why OpenAI Says The Change Was Necessary
OpenAI’s leadership has argued that the economics of cutting-edge AI made the previous structure unsustainable. For example, training and running increasingly capable AI models depends on enormous quantities of specialised chips, electricity, cooling, data centre space, and engineering talent.
In a livestream outlining the change, Sam Altman said OpenAI had already committed to roughly 1.4 trillion dollars of infrastructure spending, including plans for about 30 gigawatts of dedicated computing capacity, and described that as part of a “gigantic infrastructure buildout” needed to support its research and products.
Altman also said the new for-profit public benefit corporation would “be able to attract the resources we need” to achieve those goals. He framed the move not as a retreat from the original mission but as a way to make it financially viable at global scale.
The Scale Of OpenAI’s Expansion
The restructuring comes as OpenAI expands well beyond its original chatbot. The company is now developing the AI-enabled browser ChatGPT Atlas and a video generation tool called Sora. It is also turning ChatGPT into a full platform where third-party apps can run inside the chatbot.
OpenAI says ChatGPT now has more than 800 million weekly active users, up from 100 million in early 2023, and processes billions of messages a day. At its DevDay event in October 2025, the company said this user base gives developers access to “hundreds of millions” of potential customers inside ChatGPT itself.
Internally, OpenAI sees this scale as justification for moving towards a model closer to a cloud provider than a research lab. Its long-term plans include multi-hundred-billion-dollar data centre projects and major chip supply deals.
Microsoft’s Role In The New Structure
The restructuring also resets the relationship between OpenAI and Microsoft, which had become complicated and politically sensitive. For example, under the previous agreement, Microsoft had broad rights to license and deploy OpenAI’s technologies inside its own products and Azure cloud, in return for providing OpenAI with the compute capacity it needed.
At the same time, Microsoft’s access to OpenAI’s research had conditions tied to artificial general intelligence, or AGI, which created uncertainty about what would happen if OpenAI declared it had reached that milestone.
Under the updated terms, Microsoft keeps commercial rights to OpenAI’s models and products through 2032, except for consumer hardware. The two companies will also set up an independent expert panel to verify any claim that AGI has been reached, rather than leaving it to OpenAI’s own board.
Microsoft also now gains the freedom to develop AGI-level systems on its own or with other partners. OpenAI, meanwhile, can now work with other cloud and hardware providers, although a reported $250 billion Azure commitment means Microsoft remains central to its infrastructure.
Businesses
For customers, especially UK and global businesses using ChatGPT and related tools, the restructuring signals that OpenAI is no longer just a research organisation. Instead, it is presenting itself as a stable, long-term commercial partner with clear funding and governance.
OpenAI’s chief financial officer has been reported as saying that the Microsoft deal improves its ability to raise capital efficiently, which should be an important reassurance for enterprise buyers who depend on OpenAI’s ongoing investment in model upgrades and infrastructure.
The company has already said it is on track for around $13 billion in revenue this year and is heavily promoting GPT-powered copilots and ChatGPT Enterprise as secure, controllable assistants for regulated industries.
The Power Of Platform Reach
The scale of ChatGPT’s user base is becoming a real strategic asset. If developers can publish applications inside ChatGPT that reach those users directly, OpenAI is effectively creating its own software ecosystem. Sam Altman told developers that “your apps can reach hundreds of millions of chat users” through the interface, a clear signal of where the business is heading.
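The article does not describe how such in-chat apps are actually built, but OpenAI’s Apps SDK is understood to be based on the open Model Context Protocol (MCP). Purely as an illustration, the sketch below uses the open-source `mcp` Python package’s FastMCP interface to expose a single tool of the kind a developer might surface inside a chat assistant; the server name and tool logic are invented for the example and are not OpenAI’s own code.

```python
# A minimal, illustrative tool server using the open-source MCP Python SDK
# (`pip install mcp`). OpenAI's Apps SDK for in-ChatGPT apps is reported to
# build on the Model Context Protocol; the server name and tool below are
# hypothetical and only sketch the general shape of such an integration.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("example-store-locator")  # hypothetical app name

@mcp.tool()
def find_nearest_store(postcode: str) -> str:
    """Return the nearest store for a UK postcode (stubbed for illustration)."""
    return f"Nearest store to {postcode}: 12 High Street (stub data)"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```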
OpenAI has also promised that its Foundation will continue to fund safety and ethics work. For example, it has committed $25 billion to early focus areas including technical methods to minimise AI harms and research on health and disease. The company says this proves that “mission and commercial success advance together.”
Concerns Over Oversight
Critics, however, have questioned whether a company of OpenAI’s scale can truly balance those goals. For example, the consumer advocacy group Public Citizen argues that the new model effectively turns the non-profit into “a corporate foundation” created to advance the interests of OpenAI’s for-profit arm.
Legal scholars have also raised some doubts about how enforceable a public benefit corporation’s duties really are. For example, Luís Calderón Gómez of Cardozo School of Law has been quoted as saying the law gives companies wide leeway on when to prioritise profit or purpose, calling it “a bit of an empty, unenforceable promise.”
Regulatory Approval And Scrutiny
Attorneys General in California and Delaware have examined the recapitalisation closely because OpenAI’s non-profit assets were “irrevocably dedicated to its charitable purpose.” Both regulators have now approved the change, but only after assurances that the Foundation would retain meaningful oversight.
Some commentators have noted that OpenAI could not simply abandon its non-profit obligations without paying fair market value for its assets, a near-impossible task given the company’s $500 billion valuation.
More generally, critics worry that this hybrid model leaves accountability in corporate hands, with AI safety, transparency, and ethics continuing to be handled by internal panels and committees rather than by independent public regulators.
Implications For The AI Market
The restructuring has implications far beyond OpenAI itself. For example, competitors such as Anthropic, Google, Meta, and xAI are now competing on infrastructure scale, compute access, and data availability as much as on model performance. OpenAI’s plans for vast long-term chip and energy supply agreements underline how industrialised AI development has become.
Meanwhile, Microsoft’s market value briefly passed $4 trillion after the new deal was announced, reflecting investor confidence in AI’s commercial potential. The two companies are now bound through at least 2032 on model access and cloud contracts, yet both are free to pursue AGI-level work independently.
For governments, the question is who will verify claims that AI systems are approaching AGI. For business users, the focus will be on the stability and transparency of the providers they now depend on. For regulators, the issue is whether a structure that combines charitable oversight with profit-driven control can genuinely deliver on OpenAI’s original promise to ensure that AI benefits everyone.
What Does This Mean For Your Business?
The completed restructuring makes OpenAI one of the most commercially powerful companies in the world while still claiming a public mission at its core. It marks a decisive point where the organisation founded to serve the public good has become an essential pillar of the private AI economy. The OpenAI Foundation may retain formal control, but the market incentives now surrounding the Group mean the company’s behaviour will inevitably be judged by how well it balances its ethical commitments with investor expectations.
For regulators and policymakers, the challenge will be ensuring that OpenAI’s growing influence does not outpace public oversight. As its models shape productivity, education, and media, the concentration of technical capability and data in a single firm will raise questions about accountability and competition. The presence of Microsoft, with its 27 per cent stake, further embeds this partnership at the centre of global AI infrastructure, giving it unprecedented control over how AI reaches both consumers and enterprises.
For UK businesses, the move is likely to have practical consequences. For example, it may bring greater stability, clearer licensing, and a more predictable product roadmap for the ChatGPT tools already being deployed across finance, retail, marketing, and professional services. It also suggests that OpenAI will become a more commercially driven supplier, with pricing and support models that align with corporate software markets rather than experimental research. In this sense, the restructuring could make AI adoption easier for British firms, but also tighten dependence on a single transatlantic provider.
For investors, the shift opens the door to an eventual public offering that could rival the largest listings in history. For OpenAI’s competitors, it raises the bar for capital and infrastructure required to stay relevant. And for everyday users, it may signal a future where AI tools evolve faster but with fewer avenues for independent scrutiny.
OpenAI’s new structure may ultimately prove to be a balancing act between purpose and profit. Whether it succeeds will depend less on how the arrangement is worded in corporate charters and more on how the company behaves when commercial pressures collide with its original promise to ensure that advanced AI benefits all of humanity.
Security Stop-Press: New AI Security Researcher ‘Aardvark’
OpenAI has introduced Aardvark, an autonomous security agent powered by GPT-5 that scans codebases to detect and fix software vulnerabilities before attackers can exploit them.
Described as “an agentic security researcher,” Aardvark continuously analyses repositories, monitors commits, and tests code in sandboxed environments to validate real-world exploitability. It then proposes human-reviewable patches using OpenAI’s Codex system.
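OpenAI has not published Aardvark’s internals beyond that description, but the loop it outlines (analyse a change, confirm exploitability in a sandbox, then propose a reviewable fix) can be sketched in outline. Every function in the Python sketch below is a hypothetical placeholder standing in for an LLM-driven step; it illustrates the described workflow, not OpenAI’s implementation.

```python
# Purely illustrative sketch of the scan -> validate -> patch loop described
# above. Aardvark's real pipeline is not public; every function here is a
# hypothetical placeholder for an LLM-driven analysis step.
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    description: str
    exploit_confirmed: bool = False
    proposed_patch: str | None = None

def analyse_commit(diff: str) -> list[Finding]:
    """Placeholder: a model pass over the diff flags suspicious changes."""
    return [Finding(file="app/auth.py", description="possible missing input check")]

def validate_in_sandbox(finding: Finding) -> Finding:
    """Placeholder: attempt the exploit in an isolated environment."""
    finding.exploit_confirmed = True  # stubbed outcome for illustration
    return finding

def propose_patch(finding: Finding) -> Finding:
    """Placeholder: generate a human-reviewable fix (Aardvark uses Codex)."""
    finding.proposed_patch = "add explicit validation before use"
    return finding

def review_commit(diff: str) -> list[Finding]:
    confirmed = [validate_in_sandbox(f) for f in analyse_commit(diff)]
    return [propose_patch(f) for f in confirmed if f.exploit_confirmed]

if __name__ == "__main__":
    for f in review_commit("example diff"):
        print(f.file, "->", f.proposed_patch)
```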
OpenAI said Aardvark has already uncovered meaningful flaws in its own software and external partner projects, identifying 92 per cent of known vulnerabilities in benchmark tests and ten new issues worthy of CVE identifiers.
The system is currently in private beta, with OpenAI inviting select organisations to apply for early access through its website to help refine accuracy and reporting workflows. Wider availability is expected once testing concludes, with OpenAI also planning free scans for selected non-commercial open-source projects.
Businesses interested in trying Aardvark can apply to join the beta via OpenAI’s official site and begin integrating it with their GitHub environments to test how autonomous code analysis could strengthen their own security posture.