Sustainability-in-Tech : Floating AI Data Centres Powered By Ocean Waves

A US startup has raised $140 million to build autonomous floating AI data centres powered by ocean waves, as the technology industry searches for new ways to meet the rapidly growing energy demands of artificial intelligence.

Why AI Is Driving A New Search For Energy

The rapid growth of AI has created a major infrastructure challenge, with data centres now consuming increasing amounts of electricity, cooling water, and computing hardware. Industry forecasts suggest AI-related power demand could rise dramatically over the next decade as more businesses adopt large language models, AI assistants, image generation, automation systems, and real-time inference services.

Panthalassa, an Oregon-based renewable energy and ocean technology company, believes the answer may lie far offshore.

The company has developed autonomous floating platforms designed to generate electricity directly from ocean waves while simultaneously powering AI computing systems onboard. Rather than transmitting electricity back to land through undersea cables, the platforms process AI workloads at sea and send the results back via satellite connections.

US tech billionaire Peter Thiel, whose Founders Fund has backed the company, described the scale of the challenge directly, stating: “The future demands more compute than we can imagine. Extra-terrestrial solutions are no longer science fiction. Panthalassa has opened the ocean frontier.”

How The Floating Data Centres Work

Panthalassa’s systems, known as Ocean nodes, are large steel floating structures deployed in deep ocean regions with strong and consistent wave activity.

The motion of the waves drives internal turbines that generate electricity continuously. That power is then used directly onboard to run AI chips and inference systems housed inside sealed computing containers.
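To put the energy claim in rough context, the standard deep-water wave energy flux formula estimates the power available per metre of wave crest. The sketch below is a back-of-envelope calculation only, using illustrative open-ocean values for wave height and period rather than Panthalassa’s actual deployment figures.

```python
import math

# Deep-water wave energy flux: P = rho * g^2 * H^2 * T / (64 * pi),
# in watts per metre of wave crest. H and T below are illustrative
# open-ocean values, not Panthalassa's published figures.
rho = 1025    # seawater density, kg/m^3
g = 9.81      # gravitational acceleration, m/s^2
H = 3.0       # significant wave height, m (assumed)
T = 10.0      # wave energy period, s (assumed)

power_per_metre = rho * g**2 * H**2 * T / (64 * math.pi)
print(f"~{power_per_metre / 1000:.0f} kW per metre of wave crest")  # ~44 kW/m
```

At roughly 44 kW per metre of crest under those assumed conditions, even modest capture widths and conversion efficiencies could yield substantial onboard power, which helps explain the focus on the planet’s most energy-dense wave regions.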

One of the key advantages is cooling. For example, traditional AI data centres consume enormous quantities of water and energy to keep high-performance chips from overheating. Panthalassa instead uses the surrounding ocean as what it calls “free supercooling”, reducing the need for conventional cooling infrastructure while potentially extending chip lifespan.
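The cooling claim can be sanity-checked with simple heat-balance arithmetic. The sketch below assumes a hypothetical 1 MW IT load and a modest seawater temperature rise; the figures are illustrative assumptions, not the company’s specifications.

```python
# Seawater mass flow needed to absorb an assumed compute heat load,
# from the heat balance m_dot = P / (c_p * dT). All values are assumptions.
P = 1_000_000    # heat to reject, W (assumed 1 MW of IT load)
c_p = 3990       # specific heat of seawater, J/(kg*K)
dT = 5.0         # allowed seawater temperature rise, K (assumed)

m_dot = P / (c_p * dT)
print(f"~{m_dot:.0f} kg/s of seawater, roughly {m_dot:.0f} litres per second")
```

Around 50 litres of seawater per second, returned slightly warmer to the ocean, would handle a megawatt of compute in this simplified model. Unlike evaporative cooling towers, a once-through seawater loop consumes no fresh water, which is presumably what the company means by “free supercooling”.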

The company says its Ocean-3 pilot systems will be deployed in the North Pacific later this year ahead of planned commercial operations in 2027.

Garth Sheldon-Coulson, Panthalassa’s co-founder and CEO, said: “We’ve built a technology platform that operates in the planet’s most energy-dense wave regions, far from shore, and turns that resource into reliable clean power.”

He added: “We’re now ready to build factories, deploy fleets, and provide a sustainable new source of energy for humanity.”

Why The Idea Is Attracting Attention

The concept is gaining attention because many land-based data centres are already running into physical and environmental limits.

Large AI facilities require enormous amounts of grid power, land, cooling infrastructure, and permitting approvals. In some regions, utilities have warned that electricity networks may struggle to support projected AI growth without major upgrades.

Panthalassa argues that moving AI infrastructure offshore could reduce pressure on national grids while avoiding many of the environmental and planning conflicts associated with large terrestrial facilities.

The company also claims its systems rely mainly on abundant materials such as steel rather than scarce minerals, potentially making large-scale deployment easier than some alternative clean energy technologies.

Investor John Doerr described the system as “a game changer in addressing global energy needs and clean power generation”, adding that it represents “a triple win: workers benefit, communities benefit, and we gain a strategic asset that strengthens American technological leadership.”

Other Companies Are Exploring Similar Ideas

Panthalassa is not alone in looking for unconventional locations and power sources for future data centres.

For example, Microsoft previously tested underwater data centres through its Project Natick programme, placing sealed server containers on the seabed off Scotland’s Orkney Islands. The company reported lower server failure rates than in conventional land-based facilities, partly because of the stable underwater environment and reduced human interference.

Meanwhile, Aikido Technologies recently announced plans for floating offshore wind-powered data centres in the North Sea, with pilot deployments expected near Norway before larger UK projects later this decade.

Also, British company Core Power has explored floating nuclear-powered platforms capable of supplying electricity to offshore computing facilities and military infrastructure.

Elsewhere, some firms are experimenting with placing data centres in colder climates such as Iceland, Norway, and northern Sweden, where naturally low temperatures reduce cooling costs and improve energy efficiency. Major cloud providers including Google and Meta have increasingly prioritised locations with access to renewable energy and cooler operating conditions.

Even Meta’s expanding AI-driven age assurance systems, which analyse images, video, behavioural signals, and account activity to estimate user age, form part of the wider trend driving demand for increasingly large AI compute infrastructure.

Still Some Challenges

Despite the enthusiasm surrounding offshore AI infrastructure, some major practical questions remain.

Open-ocean environments are among the harshest operating conditions on Earth, exposing equipment to corrosion, storms, maintenance difficulties, and communication challenges. Wave energy itself has historically struggled with reliability and commercial scalability, despite decades of experimentation.

There are also environmental questions around marine ecosystems, shipping routes, and the long-term impact of deploying large autonomous industrial systems at sea.

Commercial viability remains another unknown. Panthalassa’s business model depends not on selling electricity, but on selling AI computing capacity generated offshore. Whether this can compete economically with rapidly expanding terrestrial AI infrastructure remains uncertain.

What Does This Mean For Your Business?

For UK businesses, the story highlights how AI is increasingly reshaping not just software and automation, but the global infrastructure required to support digital services.

The energy demands created by AI systems are already influencing investment decisions across energy, construction, semiconductors, cooling technology, networking, and cloud computing. Businesses involved in these sectors may see growing opportunities linked to alternative energy generation, distributed computing, and sustainable infrastructure development.

The story also underlines how sustainability is becoming tightly connected to AI deployment. Organisations adopting AI tools may face growing scrutiny around the environmental impact of the computing resources they consume, particularly as governments and investors place greater emphasis on carbon reduction and energy efficiency.

At the same time, the search for cleaner AI infrastructure is likely to accelerate innovation far beyond traditional data centres, creating new commercial opportunities while also raising new technical, environmental, and regulatory challenges that businesses will increasingly need to understand.

Video Update : New Computer Use Feature In Copilot’s Researcher

Microsoft’s helpful new Computer Use feature in Copilot Researcher allows the AI to interact directly with websites and software on a user’s behalf, helping automate complex online tasks, speed up research workflows, and reduce the time spent manually navigating between apps and services.

[Note – To watch this video without glitches/interruptions, it may be best to download it first]

Tech Tip : Use Chrome Reading Mode To Remove Clutter And Listen Aloud

Google Chrome’s built-in Reading Mode can turn busy web pages into a clean, distraction-free reading view, making it easier to focus on important information without adverts, pop-ups, videos, or visual clutter getting in the way.

On desktop, Reading Mode opens in a side panel that can be widened for a more immersive layout, while Android displays the page in a simplified full-screen view. You can also customise the font, text size, spacing, and background colour to make reading more comfortable.

One particularly useful feature is the small play button inside Reading Mode, which uses AI-powered text-to-speech to read the page aloud while highlighting the words as it goes. This can be especially helpful for reviewing long articles, reducing screen fatigue, multitasking, or improving accessibility.

How It Works

Reading Mode strips away unnecessary page elements and focuses only on the main content, helping you absorb information faster and with fewer distractions. The built-in read-aloud feature also allows you to consume content hands-free while keeping your place visually on screen.

How To Use It in Chrome On Desktop

– Open the web page you want to read.

– Click the three-dot Chrome menu in the top-right corner.

– Select More tools, then Reading mode.

– The page will appear in a simplified side panel.

– Use the controls at the top of the panel to customise the appearance.

– Click the small play button to have Chrome read the page aloud using AI-powered voice narration.

How To Use It In Chrome On Android

On Android, Chrome’s Reading Mode opens as a simplified full-screen reading view rather than a side panel. To use it:

– Open the web page you want to read.

– Tap the three-dot Chrome menu in the top-right corner.

– Select Reading mode (or Simplified view on some versions of Android).

– The page will open in a cleaner, distraction-free layout without adverts or clutter.

– Use the controls to adjust text size, font style, and background colour for easier reading.

– Tap the play button to have Chrome’s AI-powered voice feature read the page aloud while highlighting the text as it goes.

Meta Smart Glasses Security Controversy

Meta has terminated its contract with outsourcing firm Sama, leading to more than 1,000 Kenyan workers losing their jobs after they revealed they had been reviewing highly sensitive footage captured by users of its AI-powered smart glasses, raising fresh concerns about privacy, labour practices, and the hidden human layer behind AI.

What The Workers Reported Seeing

The controversy began in February when workers employed by Sama in Nairobi told Swedish newspapers that their role involved reviewing and labelling video footage captured by Meta’s Ray-Ban smart glasses. According to those accounts, the material included deeply private scenes, with one worker stating, “We see everything – from living rooms to naked bodies.”

The footage was reportedly not limited to staged or deliberately shared content. Instead, it reflected everyday life captured by wearable cameras, including people undressing, using the toilet, and handling sensitive personal information. The workers’ role was to annotate this material so that Meta’s AI systems could learn to interpret visual and contextual data more effectively.

Meta acknowledged that human review forms part of its AI training process, stating that “photos and videos are private to users” and that human reviewers are used to “improve product performance” with user consent. However, the scale and nature of the material described by workers has intensified scrutiny over how that consent is obtained and understood in practice.

Why Did Meta End The Contract?

Less than two months after the investigation was published, Meta moved to end its relationship with Sama, a US-based outsourcing company that provides data annotation services, employing workers to review and label images and video to train AI systems. The decision resulted in redundancy notices being issued to 1,108 workers with just days’ notice. Meta’s official explanation was that Sama “did not meet our standards,” although it did not specify which standards had been breached or when concerns were first identified.

Disputed By Sama

Sama has strongly disputed that characterisation, stating that it had “consistently met the operational, security and quality standards required” and had not been informed of any shortcomings before the contract was terminated.

The timing of the decision has led to further questions. Labour groups and campaigners argue that the termination may have been linked to the workers speaking out rather than to performance issues, while Naftali Wambalo of the Africa Tech Workers Movement suggested that the standards in question may relate less to quality and more to confidentiality, describing them as “standards of secrecy,” a claim that Meta has not publicly addressed.

The Human Layer Behind AI

The episode highlights a reality that is often overlooked in discussions about artificial intelligence. Before AI systems can recognise images, understand context, or respond to real-world inputs, large volumes of data must be manually labelled by human workers.
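For readers unfamiliar with annotation work, a labelling task essentially attaches structured metadata to raw media so that models can learn from it. The record below is purely illustrative; the fields are hypothetical and do not reflect Meta’s or Sama’s actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical annotation record for illustration only; this is not
# Meta's or Sama's actual labelling schema.
@dataclass
class AnnotationTask:
    clip_id: str                                       # reference to the raw clip
    labels: list[str] = field(default_factory=list)    # e.g. objects or scenes seen
    contains_people: bool = False                      # drives review and redaction rules
    sensitive: bool = False                            # flags material for restricted handling
    reviewer_id: str = ""                              # the human worker who produced the labels

task = AnnotationTask(clip_id="clip-0001",
                      labels=["kitchen", "person"],
                      contains_people=True,
                      reviewer_id="reviewer-042")
```

The point of the sketch is simply that every labelled field implies a human looking at the footage, which is where the privacy questions in this story arise.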

In this case, that process meant individuals in Kenya reviewing unfiltered footage captured by wearable devices used by people in entirely different parts of the world. The work sits at the intersection of privacy, labour rights, and technology development, with those carrying out the task often having limited visibility, protection, or influence over how the data is used.

Not The First Time For Meta

It seems this is not the first time Meta’s relationship with outsourced labour has come under scrutiny. For example, previous contracts involving content moderation have been linked to claims of psychological harm, low pay, and inadequate support, with some former workers reporting symptoms consistent with post-traumatic stress. Sama itself exited parts of that work in recent years, acknowledging the challenges involved.

Regulatory Pressure

The revelations have prompted regulatory attention in multiple jurisdictions. For example, the UK’s Information Commissioner’s Office described the reports as “concerning” and requested further information from Meta, while Kenya’s data protection authority has launched its own investigation into the handling of the footage.

Legal challenges are also emerging. A class action lawsuit in the United States alleges that Meta misrepresented the privacy protections of its smart glasses, while privacy groups in Europe continue to question how user data is processed and whether consent mechanisms meet regulatory standards.

The concern centres on a key distinction, because while Meta’s policies may disclose that data can be used to train AI systems, the extent to which users understand that their footage could be viewed by human reviewers remains unclear, particularly when that footage includes sensitive or intimate situations.

What This Means For AI Development

The decision to end the Sama contract does not remove the need for human input in AI systems. Instead, it exposes the tension between rapid technological development and the practical realities of how that development is supported.

Training AI models at scale requires vast amounts of labelled data, and that requirement does not disappear as systems become more advanced. What changes is the level of scrutiny applied to how that data is collected, processed, and reviewed, particularly when it involves real-world human behaviour rather than curated datasets.

Smart glasses themselves represent a significant step forward in AI-enabled consumer devices, combining real-time image capture with on-device and cloud-based processing. However, their effectiveness depends on continuous learning, which in turn depends on the availability of human-labelled data.

What Does This Mean For Your Business?

This story illustrates how organisations adopting AI tools may need to look beyond the technology itself and consider the full data lifecycle, including how training data is sourced, handled, and reviewed, particularly where external providers or offshore teams are involved.

For UK businesses, this has clear implications around compliance and accountability. Under UK GDPR and data protection law, responsibility does not disappear when data is passed to a third party, so organisations must be confident not only in how systems perform but also in how the underlying data is being processed and by whom.

Reducing risk therefore means ensuring that suppliers and partners meet clear standards not only for technical performance but also for data governance, worker welfare, and transparency. Strong contractual controls, regular audits, and clear oversight of third-party processes are becoming essential, especially when sensitive or personal data is involved.

The broader lesson, and one that may surprise many, is that AI systems are not purely automated but are built on human input at multiple stages, and any weakness in that chain can create reputational, legal, and ethical risk. Businesses that properly understand and manage that reality will be far better placed to use AI responsibly while maintaining trust with customers, regulators, and stakeholders.

OneDrive Removes Local Recycle Bin Fallback For Cloud Deletions

Microsoft is changing how OneDrive handles deleted files, removing a long-standing fallback that many users rely on without realising it and increasing the risk of accidental data loss across synced devices.

What Has Changed In OneDrive?

Although the change is straightforward, it is significant. When a file is deleted from the OneDrive website, mobile app, or another synced device, it will no longer appear in the local Recycle Bin on a Windows PC or the Trash on a Mac.

Instead, the file is removed directly from the local device and can only be recovered from the OneDrive web-based recycle bin, which applies even if the file was previously available offline on that device.

Files deleted locally on the computer will continue to behave as expected and appear in the Recycle Bin. The key difference is where the deletion is initiated: if it starts in the cloud, the local recovery route no longer exists.

For most users, this represents a change in behaviour rather than a change in capability, and that distinction is exactly where the risk begins.

Why Is Microsoft Making This Change To OneDrive?

Microsoft’s reasoning appears to focus on performance and consistency. As OneDrive usage has expanded, particularly in business environments with large file libraries and multiple synced devices, managing file state across locations has become more complex.

By removing the local Recycle Bin step for cloud-initiated deletions, OneDrive can process changes faster and avoid maintaining duplicate recovery paths. Instead of files appearing in multiple locations depending on how they were deleted, there is now a single, central recovery point in the OneDrive recycle bin.

From a system design perspective, this seems to make some sense, as it could simplify synchronisation, reduce overhead, and create a more predictable model for file recovery.

However, what works from an engineering perspective does not always align with how people actually use technology in practice.

The Risk

The core issue is not the removal of recovery altogether but the removal of a familiar and highly visible fallback that users have come to rely on.

For many people, checking the Recycle Bin is instinctive: if something is deleted by mistake, the desktop bin is the first place they look, a behaviour that has been consistent across Windows systems for decades and is deeply ingrained.

Under the new model, however, that is no longer true for cloud-initiated deletions. A file removed via a mobile app or web browser will not appear locally, which can create confusion and delay recovery, particularly if users do not realise the change has taken place.

This is exactly the kind of thing that happens in day-to-day use. A quick deletion on a phone, a shared file removed by a colleague, or a mistaken action in the browser can now bypass the local recovery point entirely. In each case the file still exists in the OneDrive recycle bin, but only if the user knows to look there.

Without that awareness, the perceived loss can quickly become a real one, especially if recovery windows expire or if users assume the file has already been permanently deleted.

Operational Impact For Organisations

For organisations, the impact is less about the technical change itself and more about how it alters day-to-day processes around file management and recovery.

From a compliance perspective, UK GDPR and broader data protection responsibilities remain unchanged, meaning organisations are still accountable for ensuring that data can be recovered when needed, even though the route to recovery has changed.

Support teams are likely to see an increase in queries where users cannot find deleted files in expected locations, particularly in the early stages of rollout. Helpdesk processes that previously relied on guiding users to the local Recycle Bin will need to be updated to reflect the correct recovery path through the OneDrive web interface.

There is also a clear training requirement, as users need to understand that the method used to delete a file now determines how it must be recovered. Without that clarity, simple mistakes are more likely to escalate into avoidable support incidents.

Policies and internal documentation should also be reviewed to ensure that any references to local recovery for OneDrive files are accurate, especially in environments with remote working and multiple devices.

Reducing The Risk In Practice

Managing this change effectively comes down to awareness, process, and control.

Users should be informed clearly that cloud-initiated deletions bypass the local Recycle Bin and that recovery must be carried out through OneDrive itself. It is a simple message, but one that can prevent a large number of avoidable issues.

Organisations may also want to review retention settings, particularly in business environments where the default 93-day recycle bin period can be adjusted. Extending retention or implementing additional backup solutions can provide an extra layer of protection.

From a technical standpoint, ensuring that version history and backup policies are in place becomes even more important, as the removal of one recovery route increases reliance on others, and those systems need to be robust and well understood.
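For teams that want early warning of cloud-initiated deletions, Microsoft Graph’s drive delta endpoint reports removed items with a deleted facet. The sketch below is a minimal illustration, assuming a valid Graph access token with file-read permissions; it is not an official Microsoft recovery tool.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "..."  # placeholder: obtain via your usual OAuth flow

def list_deletions(delta_link=None):
    """Return items deleted since the last sync, plus the next deltaLink."""
    url = delta_link or f"{GRAPH}/me/drive/root/delta"
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    deleted = []
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        # Items removed in the cloud carry a 'deleted' facet; under the new
        # behaviour they will not appear in any local Recycle Bin.
        deleted += [item for item in data.get("value", []) if "deleted" in item]
        url = data.get("@odata.nextLink")
        delta_link = data.get("@odata.deltaLink", delta_link)
    return deleted, delta_link
```

Run periodically and fed into a helpdesk alert, a check like this could point users at the OneDrive web recycle bin before the retention window expires.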

A Small Change With Wider Implications

This update to OneDrive is a good example of how relatively small technical changes can have disproportionate real-world impact. The functionality to recover deleted files still exists, but the way users access it has changed, and that is enough to introduce risk.

For businesses, the key takeaway is that data protection is not just about systems and policies, but also about how people interact with them. When familiar behaviours are disrupted, even for valid technical reasons, the gap between expectation and reality is where problems tend to emerge.

Organisations that recognise this early and adapt their guidance, support, and controls accordingly will be far better placed to avoid unnecessary data loss and maintain confidence in how their information is managed.

What Does This Mean For Your Business?

For UK businesses, the bigger issue is not the technical change itself but how easily it can create a gap between how systems behave and what users expect to happen.

When a familiar behaviour changes without being widely understood, the risk increases because people continue to act on old assumptions, particularly in fast, everyday situations where files are deleted quickly and without much thought.

This is where data loss risk begins to build, not through system failure, but through misunderstanding, delay, and missed recovery opportunities.

The key response is to take control of that gap. Businesses that clearly communicate how file deletion now works, reinforce the correct recovery process, and ensure appropriate backup and retention measures are in place will be far better positioned to avoid unnecessary disruption.

This story serves as a reminder that cloud platforms continue to evolve in ways that can subtly change risk profiles, and organisations that actively monitor and adapt to those changes will be better placed to protect both their data and their day-to-day operations.

UK Plans New Social Media Restrictions For Under-16s

Social media restrictions for under-16s are moving closer to reality in the UK as ministers commit to action following a major consultation, signalling a significant change in how young people access digital platforms.

Why The UK Is Moving Towards Social Media Restrictions

The UK government has made it clear that some form of restriction on social media use for under-16s will be introduced, even if a full ban is not adopted, with ministers now focused on deciding how those measures should work in practice.

This change comes after growing concern about the impact of social media on children’s mental health, behaviour, and safety, alongside mounting political pressure from campaigners, parents, and members of Parliament. The Children’s Wellbeing and Schools Bill is central to this process, as it gives ministers the power to introduce restrictions through regulation rather than requiring entirely new legislation.

The consultation, which closes later this month, is designed to gather evidence on what combination of measures would be most effective. Ministers have emphasised that the objective is not simply to act quickly but to ensure that any changes are workable and enforceable at scale, and that the approach should be “evidence-led, with input from independent experts”.

What Type Of Restrictions Are Being Considered?

Rather than focusing solely on an outright ban, the government is currently exploring a range of targeted interventions aimed at reducing harm while preserving some level of access.

One key area is the design of platforms themselves, with proposals to limit or remove features that encourage prolonged use, such as infinite scrolling, autoplay, and algorithm-driven content feeds. These features have come under increasing scrutiny for keeping users engaged for extended periods, often without clear stopping points.

Age verification is another major focus, with stronger enforcement expected to play a central role in any future framework, particularly given evidence that many children already bypass existing age limits by registering with false dates of birth.
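The weakness of self-declared ages is easy to illustrate. A simple date-of-birth gate, like the hypothetical sketch below, checks only what the user types, which is exactly why stronger verification methods are on the table.

```python
from datetime import date

# Hypothetical self-declaration age gate: it trusts whatever date of birth
# the user enters, which is why such checks are trivially bypassed.
def is_over_16(dob: date, today: date | None = None) -> bool:
    today = today or date.today()
    sixteenth_birthday = dob.replace(year=dob.year + 16)
    return sixteenth_birthday <= today

print(is_over_16(date(2012, 5, 1)))  # a genuine 13-year-old: False
print(is_over_16(date(2005, 5, 1)))  # the same child entering a false DOB: True
```

Stronger approaches under discussion, such as document checks, facial age estimation, and third-party verification services, each carry their own privacy trade-offs, which is partly why verification remains contentious.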

The consultation is also examining the potential for time-based controls, including overnight curfews, as well as restrictions on access to AI chatbots and other emerging technologies that may expose children to inappropriate or harmful interactions. This forms part of a broader effort “to examine the most effective ways to ensure that children have ‘healthy online experiences’”.

Taken together, these measures point to a more granular approach, where specific features and behaviours are regulated rather than applying a single blanket rule across all platforms.

The Evidence Driving The Debate

The policy push is underpinned by a growing body of data and research highlighting both the scale of social media use among young people and the risks associated with it.

For example, recent figures show that social media use is nearly universal among teenagers, with around 95 per cent of 13 to 15-year-olds actively using platforms and the vast majority holding their own accounts. At the same time, a significant proportion of children report exposure to harmful or distressing content, including material linked to self-harm, bullying, and unrealistic body image expectations.

The Online Safety Act 2023 already requires platforms to take steps to protect children from harmful content, including enforcing age limits and removing illegal material. However, ongoing enforcement actions and investigations suggest that compliance has been uneven and that further intervention may be needed to achieve meaningful improvements.

Concerns have also been raised about the underlying design of platforms, particularly features that drive prolonged engagement, with policymakers pointing to risks from “design features that encourage them to spend more time on screens, while also serving up content that can harm their health and wellbeing”.

How Other Countries Are Approaching The Issue

Several countries have already introduced, or are actively considering, restrictions similar to those the UK is now weighing up.

For example, Australia has taken the most direct approach, introducing a nationwide ban on social media access for under-16s, with platforms required to take reasonable steps to prevent children from creating or maintaining accounts. Early enforcement efforts led to millions of accounts being removed, demonstrating that large-scale intervention is technically possible, although questions remain about long-term effectiveness and circumvention.

Spain has signalled its intention to follow a similar path, while France has already introduced measures requiring parental consent for younger users and is exploring tighter controls. Across the European Union, regulators have also focused on platform design, with actions taken against companies over addictive features and insufficient child protection measures.

These international examples highlight how governments are increasingly willing to intervene directly in platform access, and how enforcement and user behaviour remain challenging, particularly where young people find alternative routes to access services.

What Challenges Still Need To Be Addressed

Implementing effective restrictions is likely to prove complex, particularly given the global nature of social media platforms and the ease with which users can bypass controls.

Age verification remains one of the most difficult issues, as systems must be robust enough to prevent misuse while also protecting user privacy and remaining practical for widespread adoption. Even with improved verification methods, there is a risk that children will migrate to less regulated platforms or use shared accounts to maintain access.

There are also broader questions about how restrictions might affect positive uses of social media, including communication, education, and community building, particularly for young people who rely on online spaces for support and connection.

These competing factors explain why the government has opted for a consultation-led approach, aiming to balance safety, practicality, and unintended consequences before finalising its strategy.

What Does This Mean For Your Business?

For UK businesses, the immediate impact will depend on how directly they interact with younger audiences, but the broader implications extend well beyond youth-focused platforms.

Changes to social media regulation are likely to influence how digital platforms operate more widely, particularly in areas such as content moderation, user verification, and the design of engagement features. Businesses that rely on social media for marketing, customer engagement, or recruitment may see shifts in platform behaviour, audience reach, and compliance requirements over time.

Stronger age verification and feature restrictions could also affect advertising strategies, especially where campaigns currently reach mixed-age audiences, requiring more careful targeting and clearer segmentation.

There is also a wider regulatory signal that digital products are increasingly being judged not just on functionality and growth, but on their impact on users, particularly vulnerable groups. This trend is already visible in areas such as data protection and online safety, and it is likely to extend further as governments respond to public concern about digital harms.

Organisations involved in technology, digital services, education, or safeguarding should be paying close attention, as the outcome of this consultation will help shape the next phase of UK digital regulation. Businesses that understand how these changes affect platform design, user behaviour, and compliance expectations will be better placed to adapt as new rules are introduced and enforced.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in a techy-free style.
