Sustainability-in-Tech: Why Space Is Being Tested as a Home for Data Centres

As the environmental and energy costs of Earth-based data centres rise sharply, companies are beginning to test whether space could offer a more sustainable and resilient place to store critical data.

Rising Pressure On Earth-Based Data Centres

The global data centre industry has faced increasing pressure in recent years as demand has accelerated, driven by the expansion of cloud computing, streaming services and artificial intelligence. Management consultancy McKinsey estimates that global data centre demand will grow by between 19 percent and 22 percent each year through to 2030, a pace that is already placing strain on electricity grids, water resources and planning systems in many countries.

Data centres now underpin a wide range of essential digital services, from online banking and government platforms to AI model training. As facilities have grown larger and more concentrated, their physical and environmental impact has become more visible. New developments are often located close to urban areas with strong network connectivity and access to power, increasing pressure on local infrastructure and communities.

Energy, Water And Local Resistance

Traditional data centres place heavy, sustained demands on electricity networks because servers and cooling systems must operate continuously to prevent overheating. Many facilities also rely heavily on water-based cooling, which has become increasingly problematic in regions experiencing drought or long-term water stress.

As a result, new data centre developments in parts of Europe and North America have faced growing opposition or delays, with local authorities and communities raising concerns over water consumption, grid capacity and land use. These pressures are now colliding with national climate targets, as governments attempt to reduce emissions at the same time as demand for digital infrastructure continues to grow faster than efficiency improvements.

Why Some Firms Are Looking Beyond Earth

Against this backdrop, a small but growing group of companies operating at the intersection of space and digital infrastructure are exploring alternatives beyond Earth. The concept is not to replace terrestrial data centres, but to relocate certain types of data storage and processing to space, where resilience and long-term security are prioritised over ultra-low latency.

Advances in launch technology, miniaturised electronics and solid-state storage have made off-planet infrastructure more technically feasible. Lower launch costs and more reliable space systems have enabled companies to begin testing whether space can support limited but valuable digital workloads.

Early Real World Experiments In Space

One of the most advanced efforts is being led by Lonestar Data Holdings, a Florida-based company that has already tested a functioning data centre payload in cislunar space and is preparing for further missions around the Moon.

For example, back in February 2025, Lonestar launched its Freedom data centre payload aboard the Athena lunar lander operated by Intuitive Machines, with launch services provided by SpaceX. The payload travelled more than 300,000 kilometres and completed a series of commercial and technical tests designed to demonstrate that secure data storage and limited edge processing can operate reliably beyond Earth.

In a March 2025 press release, Lonestar confirmed that its payload successfully performed file uploads and downloads, encryption and decryption, authentication, and in-space data manipulation for government and enterprise customers. The company also reported that power, temperature, CPU memory and telemetry readings remained stable throughout the mission, indicating that the system could operate within expected limits in the space environment.

Testing Sustainability Claims In Practice

Proponents of space-based data centres argue that space offers physical characteristics that could reduce environmental impact compared with Earth-based facilities. For example, in suitable orbits a data centre can receive near-continuous, unobstructed sunlight, avoiding the intermittency associated with renewable generation on Earth. Also, heat can be dissipated through radiative cooling into the vacuum of space, thereby reducing the need for water-intensive cooling systems.
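To give a rough sense of the radiative cooling argument, the sketch below applies the Stefan-Boltzmann law to an idealised radiator panel. This is a back-of-the-envelope physics illustration, not a figure from Lonestar or any design study; real radiator sizing depends on view factors, solar loading and coating emissivity.

```python
# Idealised estimate of heat rejection into vacuum via the
# Stefan-Boltzmann law. The 1 m^2 panel, 330 K temperature and 0.9
# emissivity are assumed example values, not real spacecraft figures.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiated_power(area_m2, temp_k, emissivity=0.9):
    """Power radiated by a surface at temp_k into deep space, in watts."""
    return emissivity * SIGMA * area_m2 * temp_k ** 4

# A 1 m^2 radiator at 330 K (a warm electronics panel):
print(round(radiated_power(1.0, 330.0)))  # ≈ 605 W
```

The quartic dependence on temperature is why radiators run as hot as the electronics allow: a modest temperature rise buys a large increase in rejected heat, with no water consumed at all.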

Lonestar has highlighted these properties as central to its long-term plans. The company says it intends to operate around the Earth-Moon L1 Lagrange point, a region of gravitational stability approximately 300,000 kilometres from Earth that allows continuous solar exposure and a relatively stable thermal environment.

In its public materials, Lonestar states that space provides “twenty four hour access to clean free solar energy and natural radiative cooling”, while also noting that physical distance can enhance resilience and security for specific categories of data.

Data Sovereignty Beyond Earth

Data sovereignty has emerged as another key factor driving interest in off-planet storage. For example, governments and regulated sectors often require sensitive data to remain under defined legal jurisdictions, a requirement that can be complex in globally distributed cloud environments.

Lonestar argues that existing space law provides a framework for meeting these obligations. Under international treaties, space objects fall under the jurisdiction of the state that licenses or launches them, effectively extending national legal authority beyond Earth.

In its March 2025 announcement, the company said that “leveraging Earth’s largest satellite, the Moon, and the space around it to ensure secure data storage with data sovereignty, security, resiliency and redundancy will become increasingly vital”.

Chris Stott, Lonestar’s executive chair, described the successful in-space tests as a foundational moment for the sector, stating, “This is our Kitty Hawk moment. This is where the future begins for this new resilient layer of critical global infrastructure serving us all down here on Earth.”

Independent Studies And Wider Industry Interest

Lonestar’s work reflects broader interest across the space and data infrastructure sectors. For example, a European Commission-funded feasibility study known as ASCEND, led by Thales Alenia Space, concluded in 2024 that orbiting data centres could offer environmental advantages over ground-based facilities under specific conditions.

The study suggested that a constellation delivering around 10 megawatts of computing power could be comparable to a medium-sized terrestrial data centre, while avoiding land use and local water consumption. It also noted that the environmental case depends heavily on reducing emissions from launch systems across their full lifecycle.

Technical And Environmental Constraints

Despite growing interest, significant technical and environmental challenges remain. For example, launching hardware into space is still expensive and carbon-intensive, even with reusable rockets. Also, once deployed, hardware is difficult or impossible to repair, and radiation exposure poses long-term reliability risks.

Cooling systems must be designed specifically for microgravity, limiting flexibility and upgrade options. Expanding space-based data centres beyond niche, high-value use cases would, therefore, require large numbers of launches and extensive orbital infrastructure, raising further questions around sustainability and space debris.

An Additional Layer Of Infrastructure

In reality, most proponents position space-based data centres as a complementary layer rather than a replacement for terrestrial facilities. The strongest use cases involve disaster recovery, secure backups and long-term preservation of mission-critical data, rather than latency-sensitive workloads such as real-time AI processing.

Lonestar has confirmed customers including the State of Florida and the Isle of Man government, both of which have highlighted resilience and independence from Earth-based risks as key factors. The company has also stated that capacity on its upcoming missions is already fully sold.

What has changed most significantly is that the concept has moved beyond theory. With functioning data storage already demonstrated in cislunar space, attention is now focused on scale, cost, environmental trade-offs and how space-based infrastructure may fit into wider sustainability strategies for a rapidly expanding digital economy.

What Does This Mean For Your Organisation?

It seems that space-based data centres are now moving from conceptual discussion into early operational reality, but they remain a targeted response to specific pressures rather than a universal solution. The sustainability case rests on clear trade-offs. For example, space offers constant solar power, reduced water use and physical separation from climate and geopolitical risks, while also introducing new environmental costs through launches, manufacturing and long-term orbital operations. Whether the balance proves positive at scale will depend on continued reductions in launch emissions, careful limitation of use cases and a realistic assessment of where off-planet infrastructure genuinely adds value.

For UK businesses, space-based data storage is unlikely to replace domestic or regional data centres, but it may become relevant for organisations with strict resilience, disaster recovery or sovereignty requirements, particularly in regulated sectors such as finance, government and critical national infrastructure. For these users, space offers a potential additional layer of protection rather than a new primary platform, complementing existing cloud and on-premises systems rather than displacing them.

For policymakers, regulators and infrastructure planners, the emergence of space-based data centres highlights the growing tension between digital growth and environmental limits on Earth. It underlines the need to treat data infrastructure as critical national capacity, subject to the same long-term planning as energy, transport and water. Space is not a shortcut around sustainability challenges, but its growing role reflects how seriously those challenges are now being taken across the global digital economy.

Tech Tip: See Your Google or Apple Calendar in Outlook for One Clear Schedule

Viewing external calendars in Outlook lets you keep everything in one place, helping you avoid clashes, missed meetings, and constant switching between apps. Here’s how.

How To Add A Google Calendar To Outlook (Unified View)

This method subscribes Outlook to your Google Calendar using an iCal link. It is ideal for visibility rather than editing.

– Open Google Calendar in a web browser.
– In the left-hand calendar list, hover over the calendar you want and select Settings.
– Open Settings and sharing for that calendar.
– Scroll down to Integrate calendar.
– Copy the Secret address in iCal format.
– Open Outlook Calendar (Outlook on the web or the new Outlook app).
– Select Add calendar.
– Select Subscribe from web.
– Paste the iCal link.
– Select Import.
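Under the hood, the secret address points at a plain-text iCalendar (ICS) feed that Outlook periodically re-fetches. As a minimal sketch of what that feed contains, the snippet below parses a hypothetical ICS document; the sample events are invented, and a real feed would be retrieved over HTTPS from the secret address.

```python
# Minimal sketch: parse an iCalendar (ICS) feed of the kind Outlook
# subscribes to. The sample below is invented for illustration; real
# feeds can also include folded lines, time zones and recurrence rules.
ICS_SAMPLE = """BEGIN:VCALENDAR
BEGIN:VEVENT
SUMMARY:Team stand-up
DTSTART:20250101T090000Z
END:VEVENT
BEGIN:VEVENT
SUMMARY:Quarterly review
DTSTART:20250102T140000Z
END:VEVENT
END:VCALENDAR"""

def parse_events(ics_text):
    """Return (summary, start) pairs from a raw ICS string."""
    events, current = [], {}
    for line in ics_text.splitlines():
        key, _, value = line.partition(":")
        if line == "BEGIN:VEVENT":
            current = {}
        elif line == "END:VEVENT":
            events.append((current.get("SUMMARY"), current.get("DTSTART")))
        elif key in ("SUMMARY", "DTSTART"):
            current[key] = value
    return events

for summary, start in parse_events(ICS_SAMPLE):
    print(summary, start)
```

Because the feed is read-only from the subscriber's side, this also explains why the Outlook view of a Google Calendar cannot be edited: changes have to be made at the source, and Outlook simply re-reads the feed.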

Your Google Calendar will now appear alongside your Outlook calendar and update automatically, although changes must still be made in Google Calendar.

How To Add An Apple (iCloud) Calendar To Outlook

For Apple calendars on Windows, the most reliable option is syncing via iCloud for Windows.

– Download and install iCloud for Windows.
– Sign in using your Apple ID.
– Enable Calendars and Contacts.
– Confirm the option to sync with Microsoft Outlook.
– Open Outlook and check your calendar list.

Your Apple Calendar should now appear inside Outlook and stay in sync.

Things Worth Knowing

– Google Calendar subscriptions in Outlook are usually view-only.
– Updates can take a short time to appear after changes.
– Most other calendar services will also work if they provide an iCal or ICS subscription link.

Lords Back Under-16 Social Media Ban

The House of Lords has voted to add a legal requirement to block under-16s from social media platforms, intensifying pressure on the government as it runs a parallel consultation on children’s online safety.

Amendment Backed

By 261 votes to 150, peers backed a cross-party amendment to the Children’s Wellbeing and Schools Bill that would require platforms to deploy “highly effective” age checks within a year, marking a significant legislative defeat for ministers in the Lords and setting up a politically sensitive return to the Commons.

Who Is Pushing for a Ban and Why?

Support for an under-16 social media ban cuts across party lines at Westminster and is being driven by concern that existing rules are not doing enough to limit children’s exposure to online harms. The amendment in the Lords was sponsored by Conservative former schools minister Lord Nash and backed by Conservative, Liberal Democrat and crossbench peers, along with a small number from Labour. Those in favour argue that a clear national age limit would give parents and schools stronger backing when setting boundaries, while placing the responsibility for enforcement squarely on social media companies rather than families.

In The Commons Too

Momentum has also grown in the Commons. For example, more than 60 Labour MPs have publicly urged ministers to act, while the issue has been raised repeatedly at Prime Minister’s Questions. Outside Westminster, bereaved families and online safety advocates have called for decisive action, citing concerns around mental health, exposure to harmful content and compulsive use. At the same time, children’s charities and civil liberties groups have warned that a blanket ban could create unintended consequences, including displacement to less regulated services and wider use of intrusive age verification.

Australia’s Move and Why It Changed the UK Debate

It seems that UK political interest in this subject intensified after Australia introduced a minimum-age framework in late 2025. Rather than criminalising children’s use, Australia placed the onus on platforms to take “reasonable steps” to prevent under-16s from holding accounts on age-restricted social media services, with enforcement beginning in December 2025.

The Australian model matters because it focuses on accounts rather than total access. For example, under guidance from the Australian Department of Infrastructure and the eSafety Commissioner, under-16s are not penalised for attempting to use services; platforms face compliance action if they fail to implement safeguards. The framework also includes privacy protections around age assurance data and allows some logged-out access, limiting the scope of checks to user accounts.

Australia’s model has become a key reference in the UK debate, cited by ministers and peers as evidence that age-based restrictions could be enforced without universal identity checks. For example, supporters highlight its focus on blocking account creation rather than access itself, while critics argue the policy is too recent to show whether it delivers lasting reductions in harm.

Why the Lords Backed the Amendment

It seems the Lords’ vote reflected frustration with the pace of change and a belief that existing powers are not delivering fast enough. Supporters argued that the Children’s Wellbeing and Schools Bill provided a practical vehicle to force action within a defined timeframe, rather than leaving the issue to future legislation.

During the debate, Lord Nash (Conservative) described teenage social media use as a “societal catastrophe”, arguing that delaying access would give adolescents “a few more years to mature”. Other peers pointed to rising demand for child and adolescent mental health services and disruption in classrooms, while accepting that social media also offers benefits.

However, opponents in the chamber urged caution. For example, Labour peer Lord Knight warned that a blanket ban could push young people towards “less regulated platforms” and deprive them of positive connections, calling instead for young people’s voices to be heard through consultation.

What the Amendment Actually Requires

The amendment does not list specific apps. Instead, it uses the Online Safety Act’s category of “regulated user-to-user services” and sets out a process whereby, within 12 months of the Act passing, ministers would be required to:

Direct the UK Chief Medical Officers to publish advice for parents on children’s social media use at different ages and stages of development.

Introduce regulations mandating “highly effective age assurance” to prevent under-16s from becoming or being users of in-scope platforms.

Crucially, those regulations would be enforceable under the Online Safety Act, bringing them within Ofcom’s existing compliance framework, and would require affirmative approval by both Houses. In practice, that means Parliament would still vote on the detailed rules, including which services fall in scope and what counts as “highly effective”.

How a Ban Could Be Implemented and Enforced

Enforcement would likely focus on preventing account creation by under-16s rather than blocking all content. For example, platforms could be required to use a mix of age-estimation tools, document checks, device signals and repeat prompts, alongside anti-spoofing measures to deter workarounds.

Supporters of the ban argue that reducing exposure, rather than eliminating it entirely, would still lower harm by making social media use less universal among teenagers and easing peer pressure to participate. However, critics say that determined users will continue to find ways around controls, while warning that large-scale age assurance could extend far beyond children, pulling adults into verification systems and normalising online surveillance.

Restricting mainstream platforms also carries a displacement risk: some teenagers would likely migrate to smaller or overseas services that operate with weaker moderation and fewer safeguards, potentially complicating child protection rather than improving it.

Why the Government Is Resisting for Now

The government has resisted writing an under-16 social media ban into law for now, opting instead to launch a three-month consultation on children’s online safety that includes the option of a ban alongside measures such as overnight curfews, limits on “doom-scrolling”, tougher enforcement of existing age checks and raising the digital age of consent from 13 to 16.

In a statement to the Commons, Technology Secretary Liz Kendall said the government would “look closely at the experience in Australia” and stressed the need for evidence-led policy. She acknowledged strong views in favour of a ban but warned of risks in different approaches, arguing consultation was the responsible route.

Kendall also emphasised that action is coming regardless, stating: “The question is not whether the government will take further action. We will act robustly.” The resistance, ministers argue, is about timing and design rather than principle.

What It Would Mean for Platforms, Parents and Teenagers

For platforms operating in the UK, a ban would mean heavier compliance costs, tighter onboarding processes and closer scrutiny from regulators. Advertising, influencer marketing and youth-focused features would also face new constraints, while demand for privacy-preserving age assurance services would rise.

For parents, a clear legal line could reduce the burden of negotiating platform rules alone and provide stronger backing for limits at home and in schools. For teenagers, the picture is more mixed. For example, Ofcom research shows most young people report positive experiences online, with many saying social platforms help them feel closer to friends. Critics argue that removing access could disproportionately affect isolated or minority groups who rely on online communities.

Business and Policy Implications

Beyond families and platforms, the amendment highlights a broader policy shift. For example, treating social media access more like other age-restricted products would move the UK closer to a regulated-by-default model, with implications for digital identity, privacy and compliance across sectors.

Businesses that rely on youth audiences would need to adjust strategies, while regulators would face pressure to ensure age assurance does not expand unnecessarily. Internationally, the UK’s approach would, no doubt, be watched closely, adding to a growing global debate about how far states should go in reshaping children’s digital lives.

Criticisms Shaping the Commons Fight

As the Bill returns to MPs, the arguments are most likely to focus on scope and consequences rather than intent. For example, critics warn of surveillance creep, imperfect enforcement and the risk of pushing harms elsewhere, whereas supporters say that waiting for perfect solutions still leaves children exposed and that clear age limits would reset expectations.

It’s worth noting that, with the government’s majority, ministers are likely to overturn the amendment. That said, the Lords’ vote has already achieved part of its aim by forcing the issue to the centre of the legislative agenda, ensuring that the consultation’s outcome, and the next steps that follow, will be closely scrutinised.

What Does This Mean For Your Business?

The outcome now hinges on how far ministers are willing to go beyond consultation and whether political pressure in the Commons forces a clearer timetable for change. Even if the Lords amendment is removed, the debate has narrowed the government’s room for manoeuvre by placing an under-16 ban firmly within the range of realistic policy options rather than the margins of discussion. The question has, therefore, now shifted from whether intervention is justified to how prescriptive the state should be, and how quickly any new rules should take effect.

For UK businesses, particularly digital platforms, advertisers and firms operating in regulated online spaces, the policy implications are becoming harder to ignore. Stronger age assurance requirements would bring higher compliance costs and technical complexity, while also creating opportunities for providers of privacy-preserving verification tools and child safety services. More broadly, a move towards age-based restrictions on mainstream platforms would reinforce the UK’s position as a jurisdiction willing to regulate digital products in the same way as other age-sensitive services, with knock-on effects for investment decisions and product design.

For parents, schools and young people, this whole debate reflects a wider tension between protection and participation in digital life. A clear legal threshold could simplify boundary-setting and expectations, yet risks limiting access to the positive aspects of online connection that many teenagers value. How the government balances these competing interests, and whether it opts for a targeted regulatory approach or a clearer statutory ban, will shape not just children’s online experiences but the future direction of UK digital policy more broadly.

OpenAI Brings Age Prediction To ChatGPT Consumer Accounts

OpenAI has started rolling out an age prediction system on ChatGPT consumer plans as it tries to better identify under-18 users and automatically apply stronger safety protections amid rising regulatory pressure and concern about AI’s impact on young people.

Why OpenAI Is Introducing Age Prediction Now

On 20 January 2026, OpenAI confirmed it had begun deploying age prediction across ChatGPT consumer accounts, marking a significant change in how the platform determines whether users are likely to be minors. The move builds on work the company first outlined in September 2025, when it publicly acknowledged that existing age-declaration systems were insufficient on their own.

Several factors have converged to make this rollout unavoidable. For example, regulators in the UK, EU, and US have been tightening expectations around child safety online, with a growing emphasis on proactive risk mitigation rather than self-reported age alone. In the UK, the Online Safety Act places explicit duties on platforms to prevent children from encountering harmful content, while in the EU, the Digital Services Act and related guidance are pushing platforms towards more robust age assurance mechanisms. OpenAI has also confirmed that age prediction will roll out in the EU “in the coming weeks” to reflect regional legal requirements.

Reputational pressure has been another driver. Over the past two years, OpenAI and other AI providers have faced criticism for how conversational AI interacts with teenagers, including high-profile reporting on inappropriate content exposure and edge-case safety failures. OpenAI itself has acknowledged these concerns, stating that “young people deserve technology that both expands opportunity and protects their well-being.”

At the same time, OpenAI argues that improving age detection allows it to loosen unnecessary restrictions on adults. As the company puts it, more reliable age signals “enable us to treat adults like adults and use our tools in the way that they want, within the bounds of safety,” rather than applying broad safety constraints to everyone by default.

How Age Prediction Works in Practice

Rather than relying on a single data point, OpenAI’s system uses an age prediction model designed to estimate whether an account likely belongs to someone under 18. According to the company, the model analyses a combination of behavioural and account-level signals over time.

These signals include how long an account has existed, typical times of day when it is active, usage patterns across sessions, and the age a user has stated in their account settings. None of these factors alone is treated as definitive. Instead, the model weighs them together to make a probabilistic judgement about whether an account is more likely to belong to a minor.
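OpenAI has not published how these signals are actually combined, so the sketch below is an invented toy, not the company's method: it shows only the general shape of weighing weak behavioural signals into a probabilistic judgement, with the ambiguous middle band defaulting to the safer under-18 experience, as described above. All signal names, weights and thresholds are illustrative assumptions.

```python
# Illustrative toy only: a logistic combination of weak account-level
# signals into P(under 18), with uncertainty resolved toward the safer
# tier. Every name and number here is invented for illustration.
import math

def minor_probability(signals, weights):
    """Logistic combination of weighted signals into P(under 18)."""
    score = sum(weights[name] * value for name, value in signals.items())
    return 1 / (1 + math.exp(-score))

def classify(signals, weights, lower=0.35, upper=0.65):
    """Map probability to an experience tier; the uncertain middle
    band between lower and upper gets the under-18 default."""
    p = minor_probability(signals, weights)
    if p >= upper:
        return "under_18"
    if p <= lower:
        return "adult"
    return "under_18"  # not confident either way -> safer default

WEIGHTS = {"account_age_low": 1.2, "active_in_school_hours": 0.8,
           "stated_age_minor": 2.0}

# New account, active during school hours, stated age adult:
print(classify({"account_age_low": 1.0, "active_in_school_hours": 1.0,
                "stated_age_minor": 0.0}, WEIGHTS))  # under_18

# Long-standing account (negative signal), otherwise neutral:
print(classify({"account_age_low": -1.0, "active_in_school_hours": 0.0,
                "stated_age_minor": 0.0}, WEIGHTS))  # adult
```

The key structural point matches OpenAI's description: no single signal is decisive, and when the combined evidence is inconclusive the system does not guess "adult".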

What Happens If The System Can’t Really Tell?

OpenAI has been clear that when its model is uncertain about a person’s age, the system errs on the side of caution. If it is not confident about a user’s age, or available information is incomplete, it defaults to a safer under-18 experience. The company says this approach reflects established research into adolescent development, including differences in impulse control, risk perception, and susceptibility to peer influence.

The rollout is also being used as a live learning exercise. For example, OpenAI has said that deploying age prediction at scale helps it understand which signals are most reliable, allowing the model to be refined over time as patterns become clearer.

What If It Makes A Mistake?

Recognising that automated systems can make mistakes, OpenAI says it has built in a reversal mechanism for adults who are incorrectly classified as under 18. Users can confirm their age through a selfie-based check using Persona, a third-party identity verification service already used by many online platforms.

The process is designed to be quick and optional. Users can check whether additional safeguards have been applied to their account and initiate age confirmation at any time via Settings > Account. If verification is successful, full adult access is restored.

OpenAI describes Persona as a secure service and positions this step as a safeguard against long-term misclassification, rather than a requirement for general ChatGPT use.

What Protections Are Automatically Applied?

When an account is identified as likely belonging to someone under 18, ChatGPT applies a stricter set of content rules that go beyond the baseline safety filters already in place for all users.

For example, according to OpenAI, the under-18 experience is designed to reduce exposure to specific categories of sensitive content, including graphic violence or gory material, sexual, romantic, or violent role play, depictions of self-harm, and viral challenges that could encourage risky behaviour. Content promoting extreme beauty standards, unhealthy dieting, or body shaming is also restricted.

These measures build on existing teen protections applied to users who self-declare as under 18 at sign-up. The key difference is that age prediction allows these safeguards to be applied even when a user has not disclosed their age accurately.
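The tiered structure described above can be sketched as a simple policy table. This is not OpenAI's implementation: the category identifiers, the `allowed` function and the baseline set are all invented for illustration, though the restricted categories mirror those named in the text.

```python
# Illustrative sketch of a tiered content-policy lookup. The category
# names echo the article's list; the data structures and function are
# assumptions, not OpenAI's actual policy representation.
RESTRICTED_FOR_MINORS = {
    "graphic_violence",
    "sexual_romantic_or_violent_roleplay",
    "self_harm_depictions",
    "risky_viral_challenges",
    "extreme_beauty_standards",
}
BASELINE_BLOCKED = {"illegal_content"}  # blocked for every tier

def allowed(category, tier):
    """Return whether a content category is allowed for a given tier."""
    if category in BASELINE_BLOCKED:
        return False
    if tier == "under_18" and category in RESTRICTED_FOR_MINORS:
        return False
    return True

print(allowed("graphic_violence", "under_18"))  # False: minor tier
print(allowed("graphic_violence", "adult"))     # True: adult tier
```

The point of the table layout is the one made in the text: the under-18 restrictions sit on top of the baseline rules, so age prediction changes which tier applies rather than rewriting the whole policy.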

Guided By Expert Input

OpenAI has been keen to stress that these restrictions are guided by expert input and academic literature on child development, rather than its own ad-hoc policy decisions. The company has also highlighted parental controls as a complementary layer, allowing parents to set quiet hours, disable features such as memory or model training, and receive notifications if signs of acute distress are detected.

Limitations and Trade-Offs

Despite its ambitions, OpenAI has been quite candid about the limits of age prediction. Accurately inferring age from behavioural signals is inherently difficult, particularly when adult and teenage usage patterns can overlap, and false positives remain a risk, especially for adults with irregular usage habits or newer accounts.

Privacy concerns are another potential flashpoint here. For example, while OpenAI says it relies on account-level and behavioural data already generated through normal use, critics argue that increased behavioural inference raises questions about transparency and proportionality. Even when data is not new, the way it is interpreted can feel intrusive to users.

The requirement to submit a selfie for age correction also introduces friction. Although optional, it effectively asks some adults to undergo identity verification to regain full access, a trade-off that may not sit comfortably with all users.

OpenAI has framed these compromises as necessary. For example, in a blog post back in September 2025, the company stated that “when some of our principles are in conflict, we prioritise teen safety ahead of privacy and freedom,” while committing to explain its reasoning publicly.

The Wider Debate on Age Assurance and Platform Responsibility

OpenAI’s move is happening in the middle of (and in response to) an ongoing debate about age assurance across the internet. Governments increasingly expect platforms to move beyond self-declared ages, yet there is no consensus on a perfect technical solution that balances accuracy, privacy, and usability.

In the UK, regulators have signalled that probabilistic age estimation may be acceptable when deployed responsibly and proportionately. In the EU, scrutiny is even sharper, with data protection authorities closely watching how behavioural inference models align with GDPR principles.

Somewhere In The Middle

It seems that OpenAI’s approach sits somewhere between hard identity checks and minimal self-reporting. It avoids mandatory ID verification for all users, while still asserting that platforms have a duty to intervene when there is a reasonable likelihood that a user is a child.

Critics argue that this shifts too much responsibility onto automated systems that remain opaque to users. Supporters counter that doing nothing is no longer viable given the scale and influence of generative AI tools.

What is clear is that age prediction on ChatGPT is unlikely to be the final word. For example, OpenAI has said it will “closely track rollout and use those signals to guide ongoing improvements,” while continuing dialogue with organisations such as the American Psychological Association, ConnectSafely, and the Global Physicians Network. The company has positioned this release as an important milestone rather than a finished solution, signalling that age assurance will remain an evolving part of how AI platforms are expected to operate.

Are Other AI Platforms Taking a Similar Approach?

OpenAI’s move towards age prediction appears to be part of a wider industry trend rather than an isolated decision. In fact, several major AI and consumer technology platforms are now experimenting with ways to identify younger users more reliably and adapt product experiences accordingly, although the technical and policy approaches differ.

Meta has taken one of the closest parallel paths. In January, the company confirmed it had paused teenagers’ access to its AI-powered characters across Instagram and other platforms while it redesigns its under-18 experience. Meta has said it uses a mix of declared age and its own age estimation technology to identify teen users, applying stricter safeguards and parental controls where appropriate. While Meta’s AI features differ from ChatGPT in purpose and scope, the underlying logic is similar: if a system believes a user may be under 18, additional protections are applied by default rather than relying solely on self-reported age.
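The "default to protection" logic described here can be illustrated with a small sketch. Everything below is hypothetical: the function name, the 0.5 threshold, and the tier labels are illustrative assumptions, not any platform's actual implementation.

```python
# Hypothetical sketch of "default to safer mode" age gating.
# Thresholds, names and tiers are illustrative only, not a real platform API.

def select_experience(declared_adult: bool, minor_likelihood: float,
                      age_verified: bool = False) -> str:
    """Return the experience tier an account should receive.

    minor_likelihood: a probabilistic score (0.0 to 1.0) from an age
    estimation model; 1.0 means the model is certain the user is a minor.
    """
    if age_verified:                 # explicit ID or selfie check has passed
        return "adult"
    if not declared_adult:           # user states they are under 18
        return "teen_safeguards"
    if minor_likelihood >= 0.5:      # uncertain cases default to protection
        return "teen_safeguards"
    return "adult"

# A self-declared adult the model suspects is a minor gets safeguards anyway.
print(select_experience(declared_adult=True, minor_likelihood=0.8))
# A verified adult keeps full access regardless of the model's score.
print(select_experience(declared_adult=True, minor_likelihood=0.8,
                        age_verified=True))
```

The key design choice, shared across the approaches described in this section, is that disagreement between the declared age and the model's estimate resolves toward the safer tier rather than the declared one.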

Anthropic has adopted a more restrictive position. Its Claude AI assistant is marketed as an 18-plus product, with users required to confirm they meet the minimum age during account creation. Anthropic has stated that accounts identified as belonging to minors may be disabled, including where app store data suggests a user is under 18. This approach avoids probabilistic age prediction across behavioural signals, instead enforcing a clear age threshold with limited flexibility.

Microsoft’s Copilot appears to be following a more traditional tiered-access model. For example, Microsoft allows use by people aged 13 to 18 in many regions, subject to parental controls and account supervision, while reserving full functionality for adult accounts. Age is primarily determined through Microsoft account information rather than inferred behaviour, reflecting a model already familiar from Xbox and other Microsoft services.

Google’s Gemini apps seem to rely heavily on supervised accounts for younger users. Access for children under 13 must be enabled by a parent through Google’s Family Link system, which allows ongoing control over features and usage. While this does not involve behavioural age prediction, it still treats age as a core safety signal that shapes how the AI can be used.

Among more open-ended chatbot platforms, Character.AI has moved quickly towards an age-aware model. In late 2025, the company announced restrictions on under-18 users’ access to open-ended chat, alongside the development of a separate teen experience. Character.AI has also introduced an age assurance process that allows users to verify their age via a selfie check when the system believes an account may belong to a minor, closely mirroring OpenAI’s use of Persona for age confirmation.

Taken together, these approaches suggest a broad industry acceptance that self-declared age alone is no longer seen as sufficient. Platforms are experimenting with a spectrum of solutions, ranging from hard age limits through to probabilistic inference and supervised accounts, as they respond to mounting regulatory expectations and public scrutiny around child safety.

What Does This Mean For Your Business?

OpenAI’s rollout of age prediction shows an acknowledgement that general purpose AI tools are now expected to take a more active role in protecting younger users, rather than relying on self-declared age and broad safety rules. The company has positioned this as a pragmatic response to regulatory pressure, public concern, and its own experience of where existing safeguards fall short. Also, it could be seen as an explicit acceptance that there is no clean or perfect solution, only trade-offs between safety, privacy, and usability that platforms now have to make openly.

For UK businesses, this change is not just a consumer safety issue. For example, many organisations already rely on ChatGPT for research, drafting, customer support, and internal productivity, and age-based restrictions could affect how accounts behave in practice, particularly where shared logins, training environments, or younger staff are involved. More broadly, age assurance, behavioural inference, and defaulting to safer modes are becoming standard expectations for digital services, not edge cases. That has implications for compliance planning, data governance, and how businesses assess the risk profile of the tools they embed into day-to-day operations.

For regulators, parents, educators, and AI providers alike, OpenAI’s approach highlights a general move toward platform responsibility. Age prediction is being treated less as a single technical feature and more as an ongoing governance challenge that will need constant adjustment, oversight, and explanation. The outcome of this rollout will likely influence how future online safety rules are enforced in practice, and how far probabilistic systems are trusted to make judgements about users at scale. What happens next will matter well beyond ChatGPT.

Blue Origin Unveils 6 Tbps Enterprise Satellite Network

Blue Origin has announced TeraWave, a space-based communications network designed to deliver symmetrical data speeds of up to 6 terabits per second worldwide, positioning the company as a serious new contender in high-capacity global connectivity for businesses and governments.

Who Blue Origin Is and What It Does

Blue Origin is the privately owned aerospace and space technology company founded in 2000 by Jeff Bezos, the Amazon founder who remains its sole owner. Headquartered in Kent, Washington, the company develops and operates rocket engines, reusable launch vehicles, lunar landers and satellite systems, with a long-term goal of supporting sustained human activity in space.

Blue Origin is perhaps best known for its widely publicised commercial human spaceflight missions using the reusable New Shepard suborbital rocket. Since 2021, these short space tourism flights have carried a mix of company figures, paying passengers and high-profile public figures. For example, well-known passengers have included Blue Origin founder Jeff Bezos himself, pop star Katy Perry, film producer Kerianne Flynn and journalist and pilot Lauren Sánchez, who helped organise the company’s all-female NS-31 mission in 2025. These flights have given Blue Origin significant public visibility, even though its real longer-term focus is on launch vehicles, lunar systems and, now, satellite infrastructure.

What Is Blue Origin Introducing?

On 21 January 2026, Blue Origin announced TeraWave, describing it as “a satellite communications network designed to deliver symmetrical data speeds of up to 6 Tbps anywhere on Earth”. The company said the system is purpose-built for enterprise, data centre and government customers that require high-capacity, resilient connectivity for critical operations rather than consumer broadband.

Deployment of the TeraWave constellation is scheduled to begin in the fourth quarter of 2027. Once operational, it is intended to serve tens of thousands of customers globally, particularly in locations where traditional fibre connectivity is expensive, slow to deploy or technically impractical.

How TeraWave Works

TeraWave uses a large, multi-orbit satellite architecture that combines low Earth orbit and medium Earth orbit spacecraft. In total, the planned constellation will consist of 5,408 satellites, including 5,280 in LEO and 128 in MEO, all optically interconnected using laser links.

This design allows data to be routed through space at very high speeds rather than relying solely on ground-based networks. According to Blue Origin, globally distributed customers will be able to access speeds of up to 144 Gbps via Q and V-band radio frequency links from the LEO constellation, while aggregate throughput of up to 6 Tbps will be available through optical links from the MEO layer.
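For a rough sense of scale, the figures quoted above can be sanity-checked with a few lines of arithmetic. The per-link values come straight from the article; the comparison between the two layers is our own derived illustration.

```python
# Sanity-check the TeraWave constellation and throughput figures quoted above.
leo_sats, meo_sats = 5_280, 128
total_sats = leo_sats + meo_sats          # 5,408 satellites in total

leo_link_gbps = 144                       # Q/V-band RF link from the LEO layer
meo_aggregate_gbps = 6_000                # 6 Tbps optical aggregate, MEO layer

# How many full-rate LEO links the MEO optical aggregate is equivalent to:
equivalent_links = meo_aggregate_gbps / leo_link_gbps
print(f"{total_sats} satellites; 6 Tbps is roughly "
      f"{equivalent_links:.0f} x 144 Gbps LEO links")
# prints: 5408 satellites; 6 Tbps is roughly 42 x 144 Gbps LEO links
```

In other words, the MEO optical layer's aggregate capacity is about forty times the headline per-customer RF link rate, which is why Blue Origin pitches the two layers at different jobs.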

Another Layer of Connectivity to Add to Existing Networks

Blue Origin says TeraWave “adds a space-based layer to your existing network infrastructure”, allowing enterprises to integrate satellite connectivity with existing fibre and cloud networks. The company says its enterprise-grade user and gateway terminals are designed to be rapidly deployable worldwide and to interface directly with high-capacity infrastructure such as data centres and cloud hubs.

Who Is TeraWave Actually For?

Unlike many high-profile satellite internet projects, TeraWave is not aimed at individual consumers. Blue Origin has been explicit that the network is optimised for enterprise, data centre and government users.

For example, typical use cases include connecting distributed data centres, providing resilient backhaul for cloud services, supporting critical infrastructure operators, and offering secure connectivity for defence and public sector organisations. Blue Origin highlights the ability to deliver symmetrical upload and download speeds as a key differentiator, noting that enterprises often need to move large volumes of data in both directions rather than simply consuming content.

Also A Resilience Tool

The company also positions TeraWave as a resilience tool. For example, it says the network can help “keep critical services running during fibre outages, natural disasters, cyber incidents, or maintenance events”, offering an alternative path when terrestrial networks fail.
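The failover role described above amounts to treating the satellite layer as a standby path behind terrestrial routes. The sketch below is a minimal, hypothetical illustration of that routing decision; the names, health flags and latency figures are assumptions for the example, not a TeraWave API.

```python
# Hypothetical primary/backup path selection, illustrating the
# "alternative path when terrestrial networks fail" role described above.
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    healthy: bool
    latency_ms: float

def pick_route(fibre: Path, satellite: Path) -> Path:
    """Prefer fibre while it is up; fail over to the satellite layer."""
    if fibre.healthy:
        return fibre
    if satellite.healthy:
        return satellite
    raise RuntimeError("no healthy path available")

fibre = Path("primary-fibre", healthy=False, latency_ms=12.0)
sat = Path("satellite-gateway", healthy=True, latency_ms=55.0)
print(pick_route(fibre, sat).name)   # fibre outage, so the satellite path wins
```

The trade-off the sketch makes explicit is the one the article returns to later: the backup path carries a latency penalty, so it suits resilience and bulk transfer better than latency-sensitive traffic.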

Why Blue Origin Is Building It

Blue Origin argues that existing connectivity options leave a gap for customers that need extreme throughput, rapid scalability and geographic flexibility. Fibre remains the gold standard for capacity and latency, but deploying diverse fibre routes can be prohibitively expensive or slow, particularly outside major urban centres.

Designed To Complement Rather Than Replace Fibre

TeraWave, therefore, is intended to complement, rather than replace, fibre by providing additional route diversity and on-demand capacity. Blue Origin says the network addresses “the unmet needs of customers who are seeking higher throughput, symmetrical upload and download speeds, more redundancy, and rapid scalability”.

Comparison With Starlink and Others

The most obvious comparison is with Starlink, operated by SpaceX. For example, Elon Musk’s Starlink currently dominates the satellite internet market with thousands of satellites in low Earth orbit and millions of users worldwide. However, Starlink is primarily focused on consumer and small business broadband rather than the high-capacity enterprise connectivity needed to link data centres and critical systems.

Also, Starlink’s typical user speeds are measured in hundreds of megabits per second rather than tens or hundreds of gigabits, and its service is not designed to offer terabit-scale point-to-point connectivity. TeraWave’s emphasis on symmetrical throughput, optical inter-satellite links and enterprise gateways places it in a different category.

Amazon’s Project Kuiper is another relevant competitor. For example, while Jeff Bezos remains Amazon’s executive chairman, Kuiper is a separate venture from Blue Origin. Kuiper is also focused on global broadband access, with plans for more than 3,000 satellites, but like Starlink it targets consumers and small organisations rather than large enterprises and governments.

Traditional satellite operators and terrestrial network providers may also see TeraWave as a disruptive entrant. For example, by offering space-based links capable of moving massive volumes of data between hubs, TeraWave could compete with some long-haul fibre routes for specific use cases, particularly where latency requirements are less stringent than cost and resilience concerns.

Benefits for Businesses and Other Stakeholders

For large organisations, the potential benefits are clear. TeraWave could provide rapid deployment of high-capacity connectivity in new locations, reduce dependence on single fibre routes, and support disaster recovery planning. Data-intensive industries such as cloud services, media distribution, scientific research and defence may find the ability to scale capacity on demand particularly attractive.

Governments may also value the sovereign and security implications of a network designed for critical operations, especially if it offers alternatives to existing commercial satellite providers.

Drawbacks

Despite its promise, TeraWave faces several challenges. For example, building and launching more than 5,400 satellites is capital-intensive, and Blue Origin has not disclosed the total cost of the project or its pricing model for customers. Enterprises will want clarity on latency, reliability under heavy load, and how seamlessly the service integrates with existing network management tools.

There are also regulatory and environmental considerations here. Large constellations raise concerns about orbital congestion, space debris and astronomical interference. Blue Origin will need to demonstrate responsible satellite operations and coordination with other operators.

Critics may also question whether demand for multi-terabit satellite connectivity will actually materialise at the scale Blue Origin anticipates, particularly as terrestrial fibre continues to expand in many regions.

Criticisms and Industry Scepticism

Some analysts have suggested that satellite networks, regardless of throughput, can’t fully match fibre for latency-sensitive applications. Others point to the risk of overcapacity if multiple mega-constellations target overlapping markets.

There is also competitive pressure from established players. For example, SpaceX continues to expand and improve Starlink’s capabilities at pace, while traditional telecom providers are investing heavily in terrestrial and subsea infrastructure.

That said, TeraWave represents quite a significant strategic move for Blue Origin. By targeting enterprise and government users with extreme throughput and resilience, the company is trying to carve out a distinct position in the evolving global connectivity landscape, one that could reshape how large organisations think about network architecture in the years ahead.

What Does This Mean For Your Business?

TeraWave sits somewhere between ambition and execution, with its real impact depending on whether Blue Origin can translate a technically impressive design into a reliable, commercially viable service. If it does, it would give large organisations a new way to think about global connectivity, one that treats space not as a last resort but as an integrated part of core network architecture. That change matters because it challenges long-held assumptions about where capacity, resilience and scale must come from.

For UK businesses in particular, organisations with distributed operations, international data flows or growing reliance on cloud and data centre infrastructure may see value in an additional high-capacity route that is not tied to physical cables or single geographic corridors. TeraWave could appeal to sectors such as finance, research, media, logistics and critical infrastructure, where downtime and congestion carry real operational and financial risk. At the same time, cost, regulatory alignment and performance guarantees will determine whether it becomes a practical option rather than a theoretical one.

Governments may also weigh the resilience and security benefits of TeraWave against regulatory and environmental concerns. Telecom providers are also likely to be looking at whether space-based capacity of this scale alters the economics of long-distance connectivity. Competing satellite operators will now face some pressure to clarify their own enterprise strategies as expectations around throughput and symmetry continue to rise.

What is clear is that Blue Origin is signalling a broader intent to play a long-term role in global infrastructure, not just launch services or spaceflight milestones. TeraWave does not replace fibre, nor does it make existing networks obsolete, but it does introduce a credible alternative layer that could reshape how capacity is planned and protected. Whether that promise holds will only become clear once satellites are in orbit and customers begin to test its limits in real-world conditions.

Each week we bring you the latest tech news and tips that may relate to your business, rewritten in a jargon-free style.