Lords Back Under-16 Social Media Ban

The House of Lords has voted to add a legal requirement to block under-16s from social media platforms, intensifying pressure on the government as it runs a parallel consultation on children’s online safety.

Amendment Backed

By 261 votes to 150, peers backed a cross-party amendment to the Children’s Wellbeing and Schools Bill that would require platforms to deploy “highly effective” age checks within a year, marking a significant legislative defeat for ministers in the Lords and setting up a politically sensitive return to the Commons.

Who Is Pushing for a Ban and Why?

Support for an under-16 social media ban cuts across party lines at Westminster and is being driven by concern that existing rules are not doing enough to limit children’s exposure to online harms. The amendment in the Lords was sponsored by Conservative former schools minister Lord Nash and backed by Conservative, Liberal Democrat and crossbench peers, along with a small number from Labour. Those in favour argue that a clear national age limit would give parents and schools stronger backing when setting boundaries, while placing the responsibility for enforcement squarely on social media companies rather than families.

In The Commons Too

Momentum has also grown in the Commons. For example, more than 60 Labour MPs have publicly urged ministers to act, while the issue has been raised repeatedly at Prime Minister’s Questions. Outside Westminster, bereaved families and online safety advocates have called for decisive action, citing concerns around mental health, exposure to harmful content and compulsive use. At the same time, children’s charities and civil liberties groups have warned that a blanket ban could create unintended consequences, including displacement to less regulated services and wider use of intrusive age verification.

Australia’s Move and Why It Changed the UK Debate

It seems that UK political interest in this subject intensified after Australia introduced a minimum-age framework in late 2025. Rather than criminalising children’s use, Australia placed the onus on platforms to take “reasonable steps” to prevent under-16s from holding accounts on age-restricted social media services, with enforcement beginning in December 2025.

The Australian model matters because it focuses on accounts rather than total access. For example, under guidance from the Australian Department of Infrastructure and the eSafety Commissioner, under-16s are not penalised for attempting to use services; platforms face compliance action if they fail to implement safeguards. The framework also includes privacy protections around age assurance data and allows some logged-out access, limiting the scope of checks to user accounts.

Australia’s model has become a key reference in the UK debate, cited by ministers and peers as evidence that age-based restrictions could be enforced without universal identity checks. For example, supporters highlight its focus on blocking account creation rather than access itself, while critics argue the policy is too recent to show whether it delivers lasting reductions in harm.

Why the Lords Backed the Amendment

It seems the Lords’ vote reflected frustration with the pace of change and a belief that existing powers are not delivering fast enough. Supporters argued that the Children’s Wellbeing and Schools Bill provided a practical vehicle to force action within a defined timeframe, rather than leaving the issue to future legislation.

During the debate, Lord Nash (Conservative) described teenage social media use as a “societal catastrophe”, arguing that delaying access would give adolescents “a few more years to mature”. Other peers pointed to rising demand for child and adolescent mental health services and disruption in classrooms, while accepting that social media also offers benefits.

However, opponents in the chamber urged caution. For example, Labour peer Lord Knight warned that a blanket ban could push young people towards “less regulated platforms” and deprive them of positive connections, calling instead for young people’s voices to be heard through consultation.

What the Amendment Actually Requires

The amendment does not list specific apps. Instead, it uses the Online Safety Act’s category of “regulated user-to-user services” and sets out a process whereby, within 12 months of the Act passing, ministers would be required to:

Direct the UK Chief Medical Officers to publish advice for parents on children’s social media use at different ages and stages of development.

Introduce regulations mandating “highly effective age assurance” to prevent under-16s from becoming or being users of in-scope platforms.

Crucially, those regulations would be enforceable under the Online Safety Act, bringing them within Ofcom’s existing compliance framework, and would require affirmative approval by both Houses. In practice, that means Parliament would still vote on the detailed rules, including which services fall in scope and what counts as “highly effective”.

How a Ban Could Be Implemented and Enforced

Enforcement would likely focus on preventing account creation by under-16s rather than blocking all content. For example, platforms could be required to use a mix of age-estimation tools, document checks, device signals and repeat prompts, alongside anti-spoofing measures to deter workarounds.
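
To illustrate the general shape of such a gate (the amendment does not prescribe any particular implementation, and every name, label and threshold below is hypothetical), a platform might combine several checks and only allow account creation when at least one passes confidently:

```python
from dataclasses import dataclass

@dataclass
class AgeCheckResult:
    method: str        # e.g. "facial_age_estimation" or "document_check" (hypothetical labels)
    passed: bool       # did this check indicate the user is 16 or over?
    confidence: float  # 0.0-1.0: how reliable the method judged its own result to be

def may_create_account(results: list[AgeCheckResult],
                       min_confidence: float = 0.9) -> bool:
    """Allow sign-up only if at least one check confidently passed and
    no check confidently failed (a crude anti-spoofing safeguard)."""
    confident_pass = any(r.passed and r.confidence >= min_confidence
                         for r in results)
    confident_fail = any(not r.passed and r.confidence >= min_confidence
                         for r in results)
    return confident_pass and not confident_fail

# Example: the document check passes confidently, so the account may be created.
checks = [AgeCheckResult("facial_age_estimation", passed=False, confidence=0.4),
          AgeCheckResult("document_check", passed=True, confidence=0.97)]
print(may_create_account(checks))  # True
```

The practical debate is largely about where that confidence bar should sit, since a high threshold blocks more under-16s but also wrongly rejects more adults.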

Supporters of the ban argue that reducing exposure, rather than eliminating it entirely, would still lower harm by making social media use less universal among teenagers and easing peer pressure to participate. However, critics say that determined users will continue to find ways around controls, while warning that large-scale age assurance could extend far beyond children, pulling adults into verification systems and normalising online surveillance.

Restricting mainstream platforms also carries a displacement risk, with some teenagers likely to migrate to smaller or overseas services that operate with weaker moderation and fewer safeguards, potentially complicating child protection rather than improving it.

Why the Government Is Resisting for Now

The government has resisted writing an under-16 social media ban into law for now, opting instead to launch a three-month consultation on children’s online safety that includes the option of a ban alongside measures such as overnight curfews, limits on “doom-scrolling”, tougher enforcement of existing age checks and raising the digital age of consent from 13 to 16.

In a statement to the Commons, Technology Secretary Liz Kendall said the government would “look closely at the experience in Australia” and stressed the need for evidence-led policy. She acknowledged strong views in favour of a ban but warned of risks in different approaches, arguing consultation was the responsible route.

Kendall also emphasised that action is coming regardless, stating: “The question is not whether the government will take further action. We will act robustly.” The resistance, ministers argue, is about timing and design rather than principle.

What It Would Mean for Platforms, Parents and Teenagers

For platforms operating in the UK, a ban would mean heavier compliance costs, tighter onboarding processes and closer scrutiny from regulators. Advertising, influencer marketing and youth-focused features would also face new constraints, while demand for privacy-preserving age assurance services would rise.

For parents, a clear legal line could reduce the burden of negotiating platform rules alone and provide stronger backing for limits at home and in schools. For teenagers, the picture is a bit more mixed. For example, Ofcom research shows most young people report positive experiences online, with many saying social platforms actually help them feel closer to friends. Critics argue that removing access could disproportionately affect isolated or minority groups who rely on online communities.

Business and Policy Implications

Beyond families and platforms, the amendment highlights a broader policy shift. For example, treating social media access more like other age-restricted products would move the UK closer to a regulated-by-default model, with implications for digital identity, privacy and compliance across sectors.

Businesses that rely on youth audiences would need to adjust strategies, while regulators would face pressure to ensure age assurance does not expand unnecessarily. Internationally, the UK’s approach would, no doubt, be watched closely, adding to a growing global debate about how far states should go in reshaping children’s digital lives.

Criticisms Shaping the Commons Fight

As the Bill returns to MPs, the arguments are most likely to focus on scope and consequences rather than intent. For example, critics warn of surveillance creep, imperfect enforcement and the risk of pushing harms elsewhere, whereas supporters say that waiting for perfect solutions still leaves children exposed and that clear age limits would reset expectations.

It’s worth noting here that, with the government’s majority, ministers are pretty likely to overturn the amendment. That said, the Lords’ vote has at least already achieved part of its aim by forcing the issue to the centre of the legislative agenda, ensuring that the consultation’s outcome, and the next steps that follow, will be closely scrutinised.

What Does This Mean For Your Business?

The outcome now hinges on how far ministers are willing to go beyond consultation and whether political pressure in the Commons forces a clearer timetable for change. Even if the Lords amendment is removed, the debate has narrowed the government’s room for manoeuvre by placing an under-16 ban firmly within the range of realistic policy options rather than the margins of discussion. The question has, therefore, now shifted from whether intervention is justified to how prescriptive the state should be, and how quickly any new rules should take effect.

For UK businesses, particularly digital platforms, advertisers and firms operating in regulated online spaces, the policy implications are becoming harder to ignore. Stronger age assurance requirements would bring higher compliance costs and technical complexity, while also creating opportunities for providers of privacy-preserving verification tools and child safety services. More broadly, a move towards age-based restrictions on mainstream platforms would reinforce the UK’s position as a jurisdiction willing to regulate digital products in the same way as other age-sensitive services, with knock-on effects for investment decisions and product design.

For parents, schools and young people, this whole debate reflects a wider tension between protection and participation in digital life. A clear legal threshold could simplify boundary-setting and expectations, yet risks limiting access to the positive aspects of online connection that many teenagers value. How the government balances these competing interests, and whether it opts for a targeted regulatory approach or a clearer statutory ban, will shape not just children’s online experiences but the future direction of UK digital policy more broadly.

OpenAI Brings Age Prediction To ChatGPT Consumer Accounts

OpenAI has started rolling out an age prediction system on ChatGPT consumer plans as it tries to better identify under-18 users and automatically apply stronger safety protections amid rising regulatory pressure and concern about AI’s impact on young people.

Why OpenAI Is Introducing Age Prediction Now

On 20 January 2026, OpenAI confirmed it had begun deploying age prediction across ChatGPT consumer accounts, marking a significant change in how the platform determines whether users are likely to be minors. The move builds on work the company first outlined in September 2025, when it publicly acknowledged that existing age-declaration systems were insufficient on their own.

Several factors have converged to make this rollout unavoidable. For example, regulators in the UK, EU, and US have been tightening expectations around child safety online, with a growing emphasis on proactive risk mitigation rather than self-reported age alone. In the UK, the Online Safety Act places explicit duties on platforms to prevent children from encountering harmful content, while in the EU, the Digital Services Act and related guidance are pushing platforms towards more robust age assurance mechanisms. OpenAI has also confirmed that age prediction will roll out in the EU “in the coming weeks” to reflect regional legal requirements.

Reputational pressure has been another driver. Over the past two years, OpenAI and other AI providers have faced criticism for how conversational AI interacts with teenagers, including high-profile reporting on inappropriate content exposure and edge-case safety failures. OpenAI itself has acknowledged these concerns, stating that “young people deserve technology that both expands opportunity and protects their well-being.”

At the same time, OpenAI argues that improving age detection allows it to loosen unnecessary restrictions on adults. As the company puts it, more reliable age signals “enable us to treat adults like adults and use our tools in the way that they want, within the bounds of safety,” rather than applying broad safety constraints to everyone by default.

How Age Prediction Works in Practice

Rather than relying on a single data point, OpenAI’s system uses an age prediction model designed to estimate whether an account likely belongs to someone under 18. According to the company, the model analyses a combination of behavioural and account-level signals over time.

These signals include how long an account has existed, typical times of day when it is active, usage patterns across sessions, and the age a user has stated in their account settings. None of these factors alone is treated as definitive. Instead, the model weighs them together to make a probabilistic judgement about whether an account is more likely to belong to a minor.
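
OpenAI has not published the model itself, but the description maps naturally onto a simple probabilistic classifier. The Python sketch below is purely illustrative: the signal names, weights and decision threshold are assumptions made for explanation, not OpenAI’s actual implementation.

```python
import math

# Hypothetical feature weights; OpenAI has not disclosed its model.
ILLUSTRATIVE_WEIGHTS = {
    "account_age_days": -0.002,    # long-lived accounts score slightly lower
    "late_night_usage": 0.8,       # heavy school-night activity raises the score
    "stated_age_under_18": 2.5,    # the self-declared age is one signal among many
}

def probability_minor(signals: dict[str, float]) -> float:
    """Combine the signals with a logistic function into a 0-1 probability."""
    score = sum(ILLUSTRATIVE_WEIGHTS.get(name, 0.0) * value
                for name, value in signals.items())
    return 1.0 / (1.0 + math.exp(-score))

def use_under_18_experience(signals: dict[str, float],
                            threshold: float = 0.5) -> bool:
    # OpenAI says uncertain or incomplete cases default to the safer
    # under-18 experience, so missing signals trigger the cautious path.
    if not signals:
        return True
    return probability_minor(signals) >= threshold

print(use_under_18_experience({"late_night_usage": 1.0,
                               "stated_age_under_18": 1.0}))  # True
```

The key point is that no single input decides the outcome; the model produces a probability, and it is the threshold applied to that probability that determines how cautious the system is.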

What Happens If The System Can’t Really Tell?

OpenAI has been clear that its system errs on the side of caution. When the model is not confident about a user’s age, or when available information is incomplete, it defaults to a safer under-18 experience. The company says this approach reflects established research into adolescent development, including differences in impulse control, risk perception, and susceptibility to peer influence.

The rollout is also being used as a live learning exercise. For example, OpenAI has said that deploying age prediction at scale helps it understand which signals are most reliable, allowing the model to be refined over time as patterns become clearer.

What If It Makes A Mistake?

Recognising that automated systems can make mistakes, OpenAI says it has built in a reversal mechanism for adults who are incorrectly classified as under 18. Users can confirm their age through a selfie-based check using Persona, a third-party identity verification service already used by many online platforms.

The process is designed to be quick and optional. Users can check whether additional safeguards have been applied to their account and initiate age confirmation at any time via Settings > Account. If verification is successful, full adult access is restored.

OpenAI describes Persona as a secure service and positions this step as a safeguard against long-term misclassification, rather than a requirement for general ChatGPT use.

What Protections Are Automatically Applied?

When an account is identified as likely belonging to someone under 18, ChatGPT essentially applies a stricter set of content rules, which go beyond the baseline safety filters already in place for all users.

For example, according to OpenAI, the under-18 experience is designed to reduce exposure to specific categories of sensitive content, including graphic violence or gory material, sexual, romantic, or violent role play, depictions of self-harm, and viral challenges that could encourage risky behaviour. Content promoting extreme beauty standards, unhealthy dieting, or body shaming is also restricted.

These measures build on existing teen protections applied to users who self-declare as under 18 at sign-up. The key difference is that age prediction allows these safeguards to be applied even when a user has not disclosed their age accurately.

Guided By Expert Input

OpenAI has been keen to stress that these restrictions are guided by expert input and academic literature on child development, rather than its own ad-hoc policy decisions. The company has also highlighted parental controls as a complementary layer, allowing parents to set quiet hours, disable features such as memory or model training, and receive notifications if signs of acute distress are detected.

Limitations and Trade-Offs

Despite its ambitions, OpenAI has been quite candid about the limits of age prediction. Accurately inferring age from behavioural signals is inherently difficult, particularly when adult and teenage usage patterns can overlap, and false positives remain a risk, especially for adults with irregular usage habits or newer accounts.

Privacy concerns are another potential flashpoint here. For example, while OpenAI says it relies on account-level and behavioural data already generated through normal use, critics argue that increased behavioural inference raises questions about transparency and proportionality. Even when data is not new, the way it is interpreted can feel intrusive to users.

The requirement to submit a selfie for age correction also introduces friction. Although optional, it effectively asks some adults to undergo identity verification to regain full access, a trade-off that may not sit comfortably with all users.

OpenAI has framed these compromises as necessary. For example, in a blog post back in September 2025, the company stated that “when some of our principles are in conflict, we prioritise teen safety ahead of privacy and freedom,” while committing to explain its reasoning publicly.

The Wider Debate on Age Assurance and Platform Responsibility

OpenAI’s move is happening in the middle of (and in response to) an ongoing debate about age assurance across the internet. Governments increasingly expect platforms to move beyond self-declared ages, yet there is no consensus on a perfect technical solution that balances accuracy, privacy, and usability.

In the UK, regulators have signalled that probabilistic age estimation may be acceptable when deployed responsibly and proportionately. In the EU, scrutiny is even sharper, with data protection authorities closely watching how behavioural inference models align with GDPR principles.

Somewhere In The Middle

It seems that OpenAI’s approach sits somewhere between hard identity checks and minimal self-reporting. It avoids mandatory ID verification for all users, while still asserting that platforms have a duty to intervene when there is a reasonable likelihood that a user is a child.

Critics argue that this shifts too much responsibility onto automated systems that remain opaque to users. Supporters counter that doing nothing is no longer viable given the scale and influence of generative AI tools.

What is clear is that age prediction on ChatGPT is unlikely to be the final word. For example, OpenAI has said it will “closely track rollout and use those signals to guide ongoing improvements,” while continuing dialogue with organisations such as the American Psychological Association, ConnectSafely, and the Global Physicians Network. The company has positioned this release as an important milestone rather than a finished solution, signalling that age assurance will remain an evolving part of how AI platforms are expected to operate.

Are Other AI Platforms Taking a Similar Approach?

OpenAI’s move towards age prediction appears to be part of a wider industry trend rather than an isolated decision. In fact, several major AI and consumer technology platforms are now experimenting with ways to identify younger users more reliably and adapt product experiences accordingly, although the technical and policy approaches differ.

Meta has taken one of the closest parallel paths. In January, the company confirmed it had paused teenagers’ access to its AI-powered characters across Instagram and other platforms while it redesigns its under-18 experience. Meta has said it uses a mix of declared age and its own age estimation technology to identify teen users, applying stricter safeguards and parental controls where appropriate. While Meta’s AI features differ from ChatGPT in purpose and scope, the underlying logic is similar: if a system believes a user may be under 18, additional protections are applied by default rather than relying solely on self-reported age.

Anthropic has adopted a more restrictive position. Its Claude AI assistant is marketed as an 18-plus product, with users required to confirm they meet the minimum age during account creation. Anthropic has stated that accounts identified as belonging to minors may be disabled, including where app store data suggests a user is under 18. This approach avoids probabilistic age prediction across behavioural signals, instead enforcing a clear age threshold with limited flexibility.

Microsoft’s Copilot appears to be following a more traditional tiered-access model. For example, Microsoft allows use by people aged 13 to 18 in many regions, subject to parental controls and account supervision, while reserving full functionality for adult accounts. Age is primarily determined through Microsoft account information rather than inferred behaviour, reflecting a model already familiar from Xbox and other Microsoft services.

Google’s Gemini apps seem to rely heavily on supervised accounts for younger users. Access for children under 13 must be enabled by a parent through Google’s Family Link system, which allows ongoing control over features and usage. While this does not involve behavioural age prediction, it still treats age as a core safety signal that shapes how the AI can be used.

Among more open-ended chatbot platforms, Character.AI has moved quickly towards an age-aware model. In late 2025, the company announced restrictions on under-18 users’ access to open-ended chat, alongside the development of a separate teen experience. Character.AI has also introduced an age assurance process that allows users to verify their age via a selfie check when the system believes an account may belong to a minor, closely mirroring OpenAI’s use of Persona for age confirmation.

Taken together, these approaches suggest broad industry acceptance that self-declared age alone is no longer sufficient. Platforms are experimenting with a spectrum of solutions, ranging from hard age limits through to probabilistic inference and supervised accounts, as they respond to mounting regulatory expectations and public scrutiny around child safety.

What Does This Mean For Your Business?

OpenAI’s rollout of age prediction shows an acknowledgement that general purpose AI tools are now expected to take a more active role in protecting younger users, rather than relying on self-declared age and broad safety rules. The company has positioned this as a pragmatic response to regulatory pressure, public concern, and its own experience of where existing safeguards fall short. Also, it could be seen as an explicit acceptance that there is no clean or perfect solution, only trade-offs between safety, privacy, and usability that platforms now have to make openly.

For UK businesses, this change is not just a consumer safety issue. For example, many organisations already rely on ChatGPT for research, drafting, customer support, and internal productivity, and age-based restrictions could affect how accounts behave in practice, particularly where shared logins, training environments, or younger staff are involved. In fact, more broadly, age assurance, behavioural inference, and defaulting to safer modes are becoming standard expectations for digital services, not edge cases. That has implications for compliance planning, data governance, and how businesses assess the risk profile of the tools they embed into day-to-day operations.

For regulators, parents, educators, and AI providers alike, OpenAI’s approach highlights a general move toward platform responsibility. Age prediction is being treated less as a single technical feature and more as an ongoing governance challenge that will need constant adjustment, oversight, and explanation. The outcome of this rollout will likely influence how future online safety rules are enforced in practice, and how far probabilistic systems are trusted to make judgements about users at scale. What happens next will matter well beyond ChatGPT.

Blue Origin Unveils 6 Tbps Enterprise Satellite Network

Blue Origin has announced TeraWave, a space-based communications network designed to deliver symmetrical data speeds of up to 6 terabits per second worldwide, positioning the company as a serious new contender in high-capacity global connectivity for businesses and governments.

Who Blue Origin Is and What It Does

Blue Origin is the privately owned aerospace and space technology company founded in 2000 by Jeff Bezos, the Amazon founder who remains its sole owner. Headquartered in Kent, Washington, the company develops and operates rocket engines, reusable launch vehicles, lunar landers and satellite systems, with a long-term goal of supporting sustained human activity in space.

Blue Origin is perhaps best known for its commercial human spaceflight missions using the reusable New Shepard suborbital rocket. Since 2021, these short space tourism flights have carried a mix of company figures, paying passengers and high-profile public figures. For example, well-known passengers have included Blue Origin founder Jeff Bezos himself, pop star Katy Perry, film producer Kerianne Flynn and journalist and pilot Lauren Sánchez, who helped organise the company’s widely publicised all-female NS-31 mission in 2025. These flights have given Blue Origin significant public visibility, even though its real longer-term focus is on launch vehicles, lunar systems and, now, satellite infrastructure.

What Is Blue Origin Introducing?

On 21 January 2026, Blue Origin announced TeraWave, describing it as “a satellite communications network designed to deliver symmetrical data speeds of up to 6 Tbps anywhere on Earth”. The company said the system is purpose-built for enterprise, data centre and government customers that require high-capacity, resilient connectivity for critical operations rather than consumer broadband.

Deployment of the TeraWave constellation is scheduled to begin in the fourth quarter of 2027. Once operational, it is intended to serve tens of thousands of customers globally, particularly in locations where traditional fibre connectivity is expensive, slow to deploy or technically impractical.

How TeraWave Works

TeraWave uses a large, multi-orbit satellite architecture that combines low Earth orbit (LEO) and medium Earth orbit (MEO) spacecraft. In total, the planned constellation will consist of 5,408 satellites, including 5,280 in LEO and 128 in MEO, all optically interconnected using laser links.

This design allows data to be routed through space at very high speeds rather than relying solely on ground-based networks. According to Blue Origin, globally distributed customers will be able to access speeds of up to 144 Gbps via Q- and V-band radio frequency links from the LEO constellation, while aggregate throughput of up to 6 Tbps will be available through optical links from the MEO layer.
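
To put those headline figures in context, a quick back-of-envelope calculation (using only the quoted numbers; real-world throughput will depend on contention, weather and link conditions) shows what they would mean for bulk data movement:

```python
# Back-of-envelope arithmetic using only Blue Origin's quoted figures.
OPTICAL_BPS = 6e12      # 6 Tbps aggregate throughput via the MEO optical layer
RF_LINK_BPS = 144e9     # 144 Gbps per link via the LEO Q-/V-band RF layer
PETABYTE_BITS = 8e15    # one petabyte expressed in bits

print(f"1 PB over the optical layer: {PETABYTE_BITS / OPTICAL_BPS / 60:.0f} minutes")
print(f"1 PB over one RF link:      {PETABYTE_BITS / RF_LINK_BPS / 3600:.1f} hours")
```

On those assumptions, a petabyte could cross the optical layer in roughly 20 minutes, which is the kind of capacity normally associated with long-haul fibre rather than satellites.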

Another Layer of Connectivity to Add to Existing Networks

Blue Origin says TeraWave “adds a space-based layer to your existing network infrastructure”, allowing enterprises to integrate satellite connectivity with existing fibre and cloud networks. The company says its enterprise-grade user and gateway terminals are designed to be rapidly deployable worldwide and to interface directly with high-capacity infrastructure such as data centres and cloud hubs.

Who Is TeraWave Actually For?

Unlike many high-profile satellite internet projects, TeraWave is not aimed at individual consumers. Blue Origin has been explicit that the network is optimised for enterprise, data centre and government users.

For example, typical use cases include connecting distributed data centres, providing resilient backhaul for cloud services, supporting critical infrastructure operators, and offering secure connectivity for defence and public sector organisations. Blue Origin highlights the ability to deliver symmetrical upload and download speeds as a key differentiator, noting that enterprises often need to move large volumes of data in both directions rather than simply consuming content.

Also A Resilience Tool

The company also positions TeraWave as a resilience tool. For example, it says the network can help “keep critical services running during fibre outages, natural disasters, cyber incidents, or maintenance events”, offering an alternative path when terrestrial networks fail.

Why Blue Origin Is Building It

Blue Origin argues that existing connectivity options leave a gap for customers that need extreme throughput, rapid scalability and geographic flexibility. Fibre remains the gold standard for capacity and latency, but deploying diverse fibre routes can be prohibitively expensive or slow, particularly outside major urban centres.

Designed To Complement Rather Than Replace Fibre

TeraWave, therefore, is intended to complement, rather than replace, fibre by providing additional route diversity and on-demand capacity. Blue Origin says the network addresses “the unmet needs of customers who are seeking higher throughput, symmetrical upload and download speeds, more redundancy, and rapid scalability”.

Comparison With Starlink and Others

The most obvious comparison is with Starlink, operated by SpaceX. For example, Elon Musk’s Starlink currently dominates the satellite internet market with thousands of satellites in low Earth orbit and millions of users worldwide. However, Starlink is primarily focused on consumer and small business broadband rather than the high-capacity enterprise connectivity needed to link data centres and critical systems.

Also, Starlink’s typical user speeds are measured in hundreds of megabits per second rather than tens or hundreds of gigabits, and its service is not designed to offer terabit-scale point-to-point connectivity. TeraWave’s emphasis on symmetrical throughput, optical inter-satellite links and enterprise gateways places it in a different category.

Amazon’s Project Kuiper is another relevant competitor. For example, while Jeff Bezos remains Amazon’s executive chairman, Kuiper is a separate venture from Blue Origin. Kuiper is also focused on global broadband access, with plans for more than 3,000 satellites, but like Starlink it targets consumers and small organisations rather than large enterprises and governments.

Traditional satellite operators and terrestrial network providers may also see TeraWave as a disruptive entrant. For example, by offering space-based links capable of moving massive volumes of data between hubs, TeraWave could compete with some long-haul fibre routes for specific use cases, particularly where cost and resilience matter more than latency.

Benefits for Businesses and Other Stakeholders

For large organisations, the potential benefits are clear. TeraWave could provide rapid deployment of high-capacity connectivity in new locations, reduce dependence on single fibre routes, and support disaster recovery planning. Data-intensive industries such as cloud services, media distribution, scientific research and defence may find the ability to scale capacity on demand particularly attractive.

Governments may also value the sovereign and security implications of a network designed for critical operations, especially if it offers alternatives to existing commercial satellite providers.

Drawbacks

Despite its promise, TeraWave faces several challenges. For example, building and launching more than 5,400 satellites is capital-intensive, and Blue Origin has not disclosed the total cost of the project or its pricing model for customers. Enterprises will want clarity on latency, reliability under heavy load, and how seamlessly the service integrates with existing network management tools.

There are also regulatory and environmental considerations here. Large constellations raise concerns about orbital congestion, space debris and astronomical interference. Blue Origin will need to demonstrate responsible satellite operations and coordination with other operators.

Critics may also question whether demand for multi-terabit satellite connectivity will actually materialise at the scale Blue Origin anticipates, particularly as terrestrial fibre continues to expand in many regions.

Criticisms and Industry Scepticism

Some analysts have suggested that satellite networks, regardless of throughput, can’t fully match fibre for latency-sensitive applications. Others point to the risk of overcapacity if multiple mega-constellations target overlapping markets.

There is also competitive pressure from established players. For example, SpaceX continues to expand and improve Starlink’s capabilities at pace, while traditional telecom providers are investing heavily in terrestrial and subsea infrastructure.

That said, TeraWave represents quite a significant strategic move for Blue Origin. By targeting enterprise and government users with extreme throughput and resilience, the company is trying to carve out a distinct position in the evolving global connectivity landscape, one that could reshape how large organisations think about network architecture in the years ahead.

What Does This Mean For Your Business?

TeraWave sits somewhere between ambition and execution, with its real impact depending on whether Blue Origin can translate a technically impressive design into a reliable, commercially viable service. If it does, it would give large organisations a new way to think about global connectivity, one that treats space not as a last resort but as an integrated part of core network architecture. That change matters because it challenges long-held assumptions about where capacity, resilience and scale must come from.

For UK businesses, particularly organisations with distributed operations, international data flows or a growing reliance on cloud and data centre infrastructure, there may be clear value in an additional high-capacity route that is not tied to physical cables or single geographic corridors. TeraWave could appeal to sectors such as finance, research, media, logistics and critical infrastructure, where downtime and congestion carry real operational and financial risk. At the same time, cost, regulatory alignment and performance guarantees will determine whether it becomes a practical option rather than a theoretical one.

Governments may also weigh the resilience and security benefits of TeraWave against regulatory and environmental concerns. Telecom providers are also likely to be looking at whether space-based capacity of this scale alters the economics of long-distance connectivity. Competing satellite operators will now face some pressure to clarify their own enterprise strategies as expectations around throughput and symmetry continue to rise.

What is clear is that Blue Origin is signalling a broader intent to play a long-term role in global infrastructure, not just launch services or spaceflight milestones. TeraWave does not replace fibre, nor does it make existing networks obsolete, but it does introduce a credible alternative layer that could reshape how capacity is planned and protected. Whether that promise holds will only become clear once satellites are in orbit and customers begin to test its limits in real-world conditions.

UK Government Begins Testing of Digital Driving Licence

The UK government has begun testing a digital version of the driving licence as part of wider plans to modernise how people prove their identity and access public services through their smartphones.

Starts With Veteran Card

The testing marks a significant step in the rollout of a new GOV.UK Wallet, which is designed to allow people to store official government-issued documents digitally, starting with a digital Veteran Card and an early version of a mobile driving licence later this year.

Testing Began In December 2025

The digital driving licence is being tested privately within government, following initial development work led by the Government Digital Service and the Driver and Vehicle Licensing Agency. Testing began in December, involving a small group of staff from GDS and DVLA, and is intended to inform a broader rollout planned for later in 2026.

The licence will be accessed through the GOV.UK One Login app, which already provides a single sign-on system for accessing government services. Within that app, the driving licence will function as a digital credential, allowing users to prove both their right to drive and, eventually, their age in everyday situations.

Importantly, the digital licence is optional. Physical photocard licences will remain valid, and drivers will not be required to switch to a digital version.

Why The Government Is Introducing A Digital Driving Licence

The move is actually part of a broader strategy to modernise public sector technology and reduce inefficiency across government services. According to the Department for Science, Innovation and Technology, reforms to how the public sector builds and uses technology could unlock up to £45 billion in efficiency savings over time.

Digital credentials are seen as a key part of this effort. For example, by allowing documents to be issued and stored digitally, the government aims to reduce administrative delays, cut costs linked to printing and postage, and make services easier to access.

Science Secretary Peter Kyle has framed the initiative as part of a wider shift away from paper-based bureaucracy. In a statement accompanying the announcement, he said overflowing drawers of government letters and time spent waiting for appointments would “soon be consigned to history”, with GOV.UK Wallet allowing official documents to be issued virtually for those who choose to use it.

How The Digital Licence Will Work

The digital driving licence will sit within the GOV.UK Wallet, which is being built on top of the existing GOV.UK One Login infrastructure. Users will need to verify their identity through One Login, after which eligible credentials can be added to the app.

Thankfully, the government says that security is a central part of the design. For example, the wallet uses built-in smartphone protections, including biometric checks such as facial recognition, similar to those used for mobile banking and contactless payments. This means that even if a phone is lost, access to digital documents should remain restricted to the verified user.

Unlike a physical licence, the digital version can be issued immediately after a successful application, rather than arriving days later by post. Government officials argue this reduces the risk of documents being lost during house moves or misplacement.

Making It Verifiable By Third Parties

Testing is also reportedly focused on how the digital driving licence can be verified by organisations outside government. Unlike a physical photocard, digital credentials do not have visible security features, so checks rely on secure, programmatic verification rather than visual inspection.

With the user’s consent, a digital licence should enable third parties, such as retailers selling age-restricted products, employers carrying out right-to-drive checks, car hire companies and online services, to confirm that a licence is genuine and valid. The system is being developed in partnership with approved digital identity providers, allowing the digital licence to be used in the same everyday situations as its physical equivalent.
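
The exact protocol behind the GOV.UK Wallet has not been published, but programmatic checks of this kind generally follow a familiar pattern: the issuer signs the credential, and the relying party verifies that signature against the issuer’s published public key, seeing only the fact it needs. The sketch below illustrates that general pattern; the field names and the Ed25519 signing scheme are assumptions for illustration, not the actual GOV.UK design.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def verify_credential(credential: dict, signature: bytes,
                      issuer_key: ed25519.Ed25519PublicKey) -> bool:
    """Return True only if the credential was really signed by the issuer."""
    payload = json.dumps(credential, sort_keys=True).encode()
    try:
        issuer_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False

def check_over_18(credential: dict, signature: bytes,
                  issuer_key: ed25519.Ed25519PublicKey) -> bool:
    # With consent-based sharing, the relying party sees only the claim it
    # needs (here a hypothetical "over_18" flag), not the full licence record.
    return (verify_credential(credential, signature, issuer_key)
            and credential.get("over_18") is True)
```

The point of the pattern is that a forged or altered credential fails the signature check, which is why a digital licence can be trusted without the visible security features of a photocard.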

Who Is Involved In Delivering The Scheme?

The project is being led by the Government Digital Service, which sits within the Department for Science, Innovation and Technology, working closely with the DVLA. The digital Veteran Card, launched earlier as a first credential, was developed in partnership with the Ministry of Defence, the Office for Veterans’ Affairs and Defence Business Services.

More than 15,000 veterans have already added their digital Veteran Card to the GOV.UK One Login app, a figure the government has pointed to as early evidence of demand for digital credentials.

The driving licence trial represents a more complex test case, given how widely the licence is used as both proof of identity and proof of age.

Who The Digital Licence Is For

In practical terms, the digital driving licence is aimed at everyday users who already rely on their photocard licence for routine tasks. These include proving age when buying age-restricted items, confirming identity for online services, or demonstrating driving entitlement.

Transport Secretary Heidi Alexander described the digital licence as “a game changer for the millions of people who use their driving licence as ID”, arguing that it would make everyday interactions faster, easier and more secure.

The government has also emphasised that the system is designed to put users in control of their own data. Sharing a digital credential requires explicit consent, and only the information needed for a specific check should be shared.

Wider Context And Industry Concerns

The introduction of a government-backed digital wallet has not been without controversy. For example, when plans were first announced, private sector digital identity providers raised concerns that GOV.UK Wallet could compete directly with commercial age-verification and identity services.

Since then, GDS has engaged extensively with the digital identity industry, holding an initial industry kick-off event and nearly 30 follow-up meetings. The government has confirmed that approved third-party digital identity apps will be able to verify credentials stored in the GOV.UK Wallet, rather than being locked out of the ecosystem.

This collaboration is centred on the digital verification services industry, which plays a key role in enabling secure identity checks across retail, online services and regulated sectors.

Privacy And Security Questions

As with any digital identity system, privacy and security remain central concerns. For example, storing identity documents on smartphones raises questions about data protection, device security and potential misuse.

The government’s position is that digital credentials can be more secure than physical documents. Unlike a plastic card, a digital licence can’t be visually copied, and its authenticity can be checked programmatically. Facial recognition and encryption are intended to reduce the risk of fraud or impersonation.

However, critics argue that digital systems can introduce new attack surfaces, particularly if users’ phones are compromised or if verification services are poorly implemented. These risks are likely to be scrutinised closely as the trial expands beyond internal testing.

What Happens Next?

Throughout 2026, the government plans to continue testing, refining and expanding the digital driving licence in partnership with the private sector. A wider rollout is expected later in the year, enabling more drivers to add their licence to the GOV.UK Wallet.

The longer-term ambition is broader still. By the end of 2027, the government intends for all public services to offer a digital alternative alongside paper or card-based credentials. Future additions to the wallet are expected to include DBS checks and other forms of government-issued proof.

Alongside the wallet, a new GOV.UK App is scheduled to launch in summer 2026, bringing together personalised services, notifications and, potentially, an AI-powered chatbot to help users navigate government information more easily.

The digital driving licence trial essentially sits at the centre of this wider transformation, acting as both a technical test and a signal of how the government intends people to interact with public services in the years ahead.

What Does This Mean For Your Business?

The digital driving licence trial shows how far the government is willing to go in shifting everyday identity checks away from physical documents and into a single, smartphone-based system. By starting with a licence that is already widely used as both proof of identity and proof of age, the government is testing not just the technology itself, but public confidence in digital credentials and the supporting verification infrastructure. How smoothly this transition works in real-world settings will be critical to whether the wider GOV.UK Wallet vision gains traction.

For UK businesses, the implications are practical rather than abstract. For example, retailers, employers, car hire firms and online platforms could eventually benefit from faster, more reliable identity and age checks, with less reliance on visual inspection and reduced exposure to forged or expired documents. At the same time, these organisations will need to adapt their systems and processes to support digital verification, raising questions around cost, integration and responsibility if checks fail or data is mishandled.

For citizens and other stakeholders, including privacy groups and digital identity providers, the trial represents a balancing act between convenience, security and trust. The government’s decision to keep the digital licence optional and to involve private sector verification services suggests an attempt to avoid forcing adoption or creating a closed system. Whether that balance holds as the scheme moves beyond testing and into wider use will shape how digital identity is accepted across the UK in the years ahead.

Company Check : TikTok Finalises Deal Creating New American-Controlled Entity

TikTok has formally completed a deal to split its US operations from its global business, establishing a new majority-American joint venture designed to address long-running national security concerns and prevent the platform from being banned in the United States.

A Long-Running Political And Legal Battle

The agreement, announced on 23 January, brings to an end a legal and political saga that has stretched back several years and repeatedly placed the future of TikTok in the US market in doubt, despite its vast popularity with users, creators, and advertisers.

A Deal Years In The Making

The creation of TikTok USDS Joint Venture LLC follows sustained pressure from successive US administrations over the app’s Chinese ownership and the perceived risk that American user data or content recommendation systems could be influenced by Beijing.

Concerns first surfaced publicly during Donald Trump’s first term in 2020, when he attempted to ban TikTok outright. While that effort failed, scrutiny intensified under President Joe Biden, culminating in legislation passed in 2024 that required TikTok’s Chinese parent company, ByteDance, to sell its US operations or face an effective ban.

That law then set a deadline of January 2025, briefly forcing TikTok offline for US users before enforcement was paused. After returning to the White House, Donald Trump repeatedly extended deadlines while negotiations continued with American and international investors. In September 2025, Trump signed an executive order establishing a framework that would allow TikTok to keep operating in the US if it restructured ownership, governance, and technical controls. The announcement this week confirms that those requirements have now been met.

What The New Joint Venture Looks Like

Under the new agreement, TikTok USDS Joint Venture LLC has been established as an independent entity with majority American ownership and control. TikTok says the joint venture will now be responsible for securing US user data, the US version of the app, and the content recommendation algorithm.

In its official announcement, TikTok said the joint venture had been created “in compliance with the Executive Order signed by President Trump on September 25, 2025”, and that it would allow “more than 200 million Americans and 7.5 million businesses to continue to discover, create, and thrive” on the platform.

ByteDance, TikTok’s China-based parent company, retains a 19.9 per cent minority stake in the new company, below the threshold required to trigger a ban under US law. The remaining ownership is held by a consortium of non-Chinese investors.

Three managing investors, Oracle, Silver Lake, and MGX, each hold 15 per cent stakes. Additional investors include the Dell Family Office, Vastmere Strategic Investments, Alpha Wave Partners, General Atlantic affiliate Via Nova, and NJJ Capital, the family office of French telecoms entrepreneur Xavier Niel.

Financial terms of the transaction have not yet been disclosed, although US Vice President JD Vance previously suggested the deal valued TikTok’s US operations at around $14bn, a figure that analysts have noted is lower than earlier estimates.

Governance And Oversight

The joint venture will be overseen by a seven-member board of directors, with a majority of American members. TikTok’s global chief executive Shou Zi Chew will remain on the board, alongside senior figures from the investment groups backing the deal.

Independent oversight is built into the structure through the appointment of Raul Fernandez, chief executive of DXC Technology, as chair of the board’s Security Committee. TikTok says this committee will play a central role in ensuring compliance with national security and data protection requirements.

Also, Adam Presser, previously a senior executive within TikTok and TikTok US Data Security, has been appointed as chief executive of the joint venture. Will Farrell, formerly of Booz Allen Hamilton, a major US government and cybersecurity consultancy, will serve as chief security officer.

Data Protection And Cybersecurity Commitments

A central feature of the agreement is the handling of US user data, which has been at the heart of lawmakers’ concerns. TikTok says all US user data will be stored and protected within Oracle’s secure cloud infrastructure based in the United States.

According to the company, the joint venture will operate a “comprehensive data privacy and cybersecurity program” that will be audited and certified by third-party cybersecurity specialists. TikTok says that this programme will adhere to recognised standards including the NIST Cybersecurity Framework, NIST SP 800-53, ISO 27001, and the Cybersecurity and Infrastructure Security Agency’s security requirements for restricted transactions.

In its announcement (in the TikTok online newsroom), the company said the mandate of the joint venture is “to secure US user data, apps and the algorithm through comprehensive data privacy and cybersecurity measures”, while ensuring ongoing accountability through transparency reporting and external certification.

Algorithm Control Moves To The US

The platform’s recommendation algorithm, widely regarded as TikTok’s most valuable and sensitive asset, has been a particular point of contention, not least because it determines what content US users see and how information spreads at scale. Under the new structure, the algorithm used for US users will be retrained, tested, and updated using only US user data.

TikTok says this algorithm will be secured entirely within Oracle’s US cloud environment, with ongoing software assurance processes in place. These include regular source code review and validation, carried out with Oracle acting as a “trusted security partner”.

While the algorithm has been licensed to the US joint venture, experts have noted that retraining it exclusively on American data could subtly alter the type of content users see, although the scale and impact of any changes remain uncertain.

Trust, Safety And Content Moderation

Responsibility for trust and safety policies, including content moderation decisions, will now sit with the joint venture rather than TikTok’s global parent. TikTok says this gives the US entity “decision-making authority” over how content rules are set and enforced for American users.

The safeguards introduced under the agreement will not only apply to TikTok itself, but also to other apps operated by the company in the US, including CapCut and Lemon8.

Maintaining A Global Platform

Despite the separation, TikTok has been keen to stress that the platform will continue to function as a global service. For example, an interoperability framework allows US users to interact with creators worldwide, while US-based creators and businesses can still reach international audiences.

Commercial activities such as advertising, marketing, and e-commerce will continue to be supported through TikTok’s global infrastructure, even as core data and security functions are localised within the US.

Political Reaction And Wider Context

President Trump welcomed the announcement, posting on social media that he was “so happy to have helped in saving TikTok”, and publicly thanking Chinese President Xi Jinping for approving the deal.

The White House and China’s embassy in Washington have been contacted by US media for comment, although neither had issued a formal response at the time of writing.

However, the resolution of TikTok’s US situation stands in contrast to developments elsewhere. For example, in Canada, a federal court recently suspended a government order that would have shut down TikTok’s business operations, forcing a fresh national security review and temporarily protecting access for around 14 million Canadian users.

For TikTok, the establishment of TikTok USDS Joint Venture LLC provides regulatory certainty in its most important overseas market, ending years of uncertainty for users, creators, advertisers, and employees. The longer-term implications for how the platform operates, evolves, and competes in the US will become clearer as the new structure beds in.

What Does This Mean For Your Business?

This deal reshapes how TikTok operates in the United States, placing data control, algorithm oversight, and content moderation firmly within a US-governed structure while allowing the platform to continue operating at scale. For American regulators, it represents a rare example of a global consumer technology company being forced to localise core technical systems rather than simply make policy assurances. For TikTok, it removes the immediate threat of a shutdown in a market that accounts for more than 200 million users and millions of businesses that rely on the platform for reach and revenue.

The agreement also sets a clear precedent for how national governments may approach foreign-owned digital platforms in future. Data residency, third-party security audits, and domestic control of recommendation algorithms are no longer abstract policy demands but concrete requirements that companies may be expected to meet. Other technology firms with global user bases will be watching closely to see how rigorously these safeguards are enforced and whether similar frameworks are proposed elsewhere.

For UK businesses, the outcome matters even though the restructuring is US-focused. For example, TikTok remains a major marketing, commerce, and discovery platform for British brands, creators, and agencies that depend on its global reach and advertising tools. Any material changes to how the algorithm is trained or moderated in the US could influence wider content trends, advertising performance, and platform strategy internationally, particularly if similar regulatory pressures emerge in Europe or the UK over data protection and platform governance.

It looks as though advertisers, creators, and partners now gain a period of stability after years of uncertainty, but that stability comes with closer scrutiny of how TikTok balances growth, safety, and regulatory compliance. The platform now appears to have secured its position in the US for the time being, but the structure of the deal makes clear that access to major markets increasingly depends on transparency, technical controls, and political acceptability as much as user growth or innovation.

Security Stop-Press : LastPass Warns Customers Over Vault Backup Phishing Scam

Popular password management service LastPass has issued an alert after customers were targeted in a phishing campaign designed to steal account credentials using fake maintenance warnings.

The campaign began around 19 January and involves emails falsely claiming that LastPass is about to carry out system maintenance. Recipients are urged to back up their password vaults within 24 hours, a tactic intended to create urgency and prompt rushed action.

LastPass said the emails use multiple sender addresses and subject lines such as “LastPass Infrastructure Update: Secure Your Vault Now”. Links in the messages lead to a convincing fake website, initially hosted via an Amazon S3 bucket, before redirecting users to a spoofed LastPass login page designed to capture master passwords.

The company stressed it will never ask users to back up vaults or share master passwords by email. The timing of the campaign over a US holiday weekend suggests attackers were attempting to delay detection and extend the lifespan of the scam.

For businesses and other users, the alert is a reminder to be wary of urgent security emails, avoid clicking embedded links, and access LastPass only through its official website or app.
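
As a simple illustration of that last point, even a short script can flag embedded links that do not point at the official domain (a minimal sketch; the allowlist and matching rules here are deliberately simplified assumptions):

```python
import re
from urllib.parse import urlparse

OFFICIAL_DOMAINS = {"lastpass.com", "www.lastpass.com"}  # simplified allowlist

def suspicious_links(email_body: str) -> list[str]:
    """Return every URL whose host is not on the official allowlist."""
    urls = re.findall(r"https?://[^\s\"'<>]+", email_body)
    return [u for u in urls if urlparse(u).hostname not in OFFICIAL_DOMAINS]

body = "Urgent: back up your vault at https://lastpass-vault.s3.amazonaws.com/login"
print(suspicious_links(body))  # the spoofed S3-hosted link is flagged
```

Email security gateways apply far more sophisticated versions of this check, but the underlying principle, verify the destination rather than trusting the message, is the same one users should apply manually.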

Each week we bring you the latest tech news and tips that may relate to your business, re-written in a techy-free style.
