Sustainability-in-Tech : How ‘Nuclear Batteries’ Could Unlock Clean Energy Efficiency

A fusion energy startup is developing a new class of nuclear battery that could help solve one of the biggest challenges in clean energy: turning radiation directly into electricity rather than wasting it as heat.

What Avalanche Energy Is Building

Avalanche Energy, a US-based fusion startup, has been awarded a $5.2 million contract from the Defense Advanced Research Projects Agency (DARPA) to develop compact “nuclear batteries” using advanced radiovoltaic technology.

These devices generate electricity by converting energy from radioactive decay, specifically alpha particles, into electrical power using semiconductor materials. The concept is similar to solar panels, but instead of converting sunlight, they convert radiation directly into electricity.

According to the company, the goal is to produce systems capable of delivering more than 10 watts per kilogram, enough to power a laptop-class device for months from a unit weighing only a few kilograms.
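To put that target in perspective, a rough back-of-envelope calculation is sketched below. The unit mass, run time, and laptop battery capacity are illustrative assumptions, not Avalanche specifications; only the 10 watts per kilogram figure comes from the company.

```python
# Back-of-envelope check of the stated target (assumed values marked):
# a 3 kg unit at 10 W/kg running continuously for three months,
# compared with a typical ~60 Wh laptop battery.

SPECIFIC_POWER_W_PER_KG = 10   # company's stated target
UNIT_MASS_KG = 3               # "a few kilograms" (assumption)
DAYS = 90                      # "months" of operation (assumption)
LAPTOP_BATTERY_WH = 60         # typical laptop battery (assumption)

power_w = SPECIFIC_POWER_W_PER_KG * UNIT_MASS_KG       # continuous output in watts
energy_wh = power_w * DAYS * 24                        # total energy over the period
equivalent_charges = energy_wh / LAPTOP_BATTERY_WH     # laptop-battery equivalents

print(f"Continuous output: {power_w} W")
print(f"Energy over {DAYS} days: {energy_wh / 1000:.1f} kWh")
print(f"Equivalent laptop charges: {equivalent_charges:.0f}")
```

Under these assumptions the unit would deliver around 65 kWh over three months, which helps explain why a steady trickle of direct conversion can outperform a conventional battery of similar weight.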

This is a significant step forward compared to traditional radioisotope batteries, which have historically been reliable but very low power.

Why This Matters For Fusion Energy

While the immediate application is compact power systems, the real significance lies in how this technology could support the future of fusion energy.

Fusion reactions generate enormous amounts of energy, but capturing that energy efficiently has proved difficult. Most approaches still rely on heating water and driving turbines, which introduces inefficiencies and limits overall output.

Avalanche’s approach focuses on direct energy conversion, capturing the energy of charged particles before it is lost as heat.

As the company explains, “The direct energy conversion technologies we’re developing under Rads to Watts will be essential for extracting power from fusion reactions efficiently.”

This matters because improving energy capture is one of the key barriers to making fusion commercially viable. Even if a reactor produces more energy than it consumes, that energy still needs to be converted into usable electricity in a practical and efficient way.

A Step Towards Portable, Low-Carbon Power

Beyond fusion, these nuclear batteries could offer a new type of long-duration, low-maintenance power source.

Unlike conventional batteries, they don’t need recharging in the traditional sense. Instead, they produce a steady flow of electricity over extended periods, making them suitable for environments where access to power is limited or unreliable.

DARPA’s interest reflects this potential. The programme is focused on systems that can operate in extreme environments, including space, remote locations, and infrastructure where logistics make refuelling difficult.

In terms of this broader ambition, Avalanche says: “We’re building the capabilities today that will enable tomorrow’s fusion systems to deliver reliable, portable energy for defence, space, and commercial applications.”

In sustainability terms, this points to a future where applications currently dependent on diesel generators or frequent battery replacement could move to cleaner, longer-lasting alternatives.

How This Fits Into The Wider Industry

It should be noted here that Avalanche is not alone in exploring alternative ways to generate long-duration power from nuclear processes.

Companies such as US-based Zeno Power are developing radioisotope power systems designed for remote infrastructure, including maritime and Arctic applications. Zeno focuses on long-life nuclear batteries that can operate for years without maintenance.

Also, organisations like NASA and the US Department of Energy have long used radioisotope thermoelectric generators in space missions, including the Perseverance and Curiosity Mars rovers, demonstrating the reliability of nuclear-based power systems over decades.

In the private sector, firms such as Kronos Advanced Technologies and Arkenlight are also researching next-generation radiovoltaic and betavoltaic systems aimed at improving efficiency and power density.

What makes Avalanche’s approach distinct is its direct link to fusion. Rather than treating nuclear batteries as a standalone product, it is using them as a stepping stone towards solving a core technical challenge in fusion energy itself.

This reflects a broader trend in the industry, where companies are focusing on specific bottlenecks such as materials, energy capture, and system design, rather than attempting to solve fusion as a single problem.

What Does This Mean For Your Organisation?

For businesses, this development is less about immediate adoption and more about understanding where energy technology is heading.

The key takeaway is that the future of clean energy is not just about generation; it is about efficiency, portability, and reliability. Technologies that can deliver consistent, low-carbon power in difficult environments will open up new operational possibilities.

In the shorter term, this kind of innovation signals a move towards more resilient energy systems. Businesses operating in remote locations, critical infrastructure, or energy-intensive sectors may benefit from future solutions that reduce reliance on traditional fuel supply chains.

It also highlights the pace at which energy innovation is moving. Fusion is often seen as a distant goal, but the supporting technologies being developed today, including advanced materials and direct energy conversion systems, are already shaping the path towards it.

While nuclear batteries may not be powering offices or factories tomorrow, they represent a step towards a more flexible, sustainable energy landscape where power can be generated and used far more efficiently than it is today.

Video Update : Cowork Now Available In Copilot

Microsoft’s new ‘Cowork’ feature in Copilot lets you assign tasks by simply describing the outcome, with Copilot creating a plan, using your Microsoft 365 data, and carrying out tasks across apps in the background while keeping you in control at every step.

[Note – To watch this video without glitches/interruptions, it may be best to download it first]

Tech Tip : Check If Your Files Are Only Saved In Downloads

Important files are often left in the Downloads folder and never backed up, so moving them to a synced or backed-up location helps prevent accidental data loss if your device fails or is lost.

Why This Matters

The Downloads folder is one of the most commonly used locations for saving files, especially when opening email attachments or downloading documents from the web.

However, it is often not included in automatic backup or cloud sync settings.

This means files stored there may only exist on one device.

If that device is lost, damaged or replaced, anything stored only in Downloads could be permanently lost.

How To Check Your Downloads Folder In Windows

  1. Open File Explorer.
  2. Click on Downloads in the left-hand menu.
  3. Review the files stored there.

Look for anything important that should be kept long term.
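For readers comfortable with a short script, the same check can be sketched in Python. This assumes the default Downloads location under the user's home folder and simply lists the largest files first, since big downloads are often the easiest to forget.

```python
from pathlib import Path

# List files in the Downloads folder, largest first, so important
# documents that may only exist on this device are easy to spot.
# Path assumed: the default "Downloads" folder under the user's home.
downloads = Path.home() / "Downloads"

files = []
if downloads.exists():
    files = sorted(
        (p for p in downloads.iterdir() if p.is_file()),
        key=lambda p: p.stat().st_size,
        reverse=True,
    )

for path in files[:20]:  # show the 20 largest files
    size_mb = path.stat().st_size / 1_048_576
    print(f"{size_mb:8.1f} MB  {path.name}")
```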

What To Do Next

  • Move important files to Documents, Desktop or another backed-up folder.
  • Or save them directly into OneDrive or your company’s shared storage.

If your organisation uses OneDrive folder backup, ensure key folders are being synced properly.

What To Watch For

  • Files in Downloads are often temporary by nature.
  • Important documents can easily be forgotten there.
  • Backups and sync tools may not include this folder by default.

A Practical Approach

Take a minute to check your Downloads folder now.

Moving important files into a backed-up location is a simple habit that can prevent unnecessary data loss and ensure your work is properly protected.

Google Brings ‘Q-Day’ Closer With 2029 Encryption Warning

Google has warned that the moment quantum computers can break today’s encryption may arrive within the next few years, accelerating timelines for businesses to prepare for a fundamental change in digital security.

What Is ‘Q-Day’?

Q-Day refers to the point at which a quantum computer becomes powerful enough to break widely used cryptographic systems such as RSA and elliptic curve encryption, which underpin everything from online banking to software updates.

Google’s position is that this is no longer a theoretical concern for the distant future. As the company warned in its earlier guidance, “the encryption currently used to keep your information confidential and secure could easily be broken by a large-scale quantum computer in coming years.”

The Risk Is Already Emerging

Attackers are believed to be collecting encrypted data today with the intention of decrypting it later once quantum capabilities become available, a tactic often referred to as ‘store now, decrypt later’.

Google Revises Its Timeline

In a recent update, Google has set out a more urgent timeline for the transition to post-quantum cryptography, signalling that the industry may have less time than previously expected to prepare for this moment.

The company has now set a 2029 target for completing its migration to quantum-resistant cryptography, bringing the deadline forward compared with earlier industry expectations that placed large-scale quantum threats in the mid-2030s, and stating: “We’re setting a timeline for post-quantum cryptography migration to 2029.”

Not A Direct Prediction

It’s worth noting here that this isn’t a direct prediction from Google of exactly when quantum computers will break encryption; rather, it provides guidance and a reassessment of how quickly organisations need to act.

Why The Updated Timeline?

Google said the change is based on recent progress in “quantum computing hardware development, quantum error correction, and quantum factoring resource estimates”.

In simple terms, it seems the technical barriers that once made quantum threats feel distant are being reduced faster than expected.

Google’s update of Q-Day is not simply about setting a date; it is about creating urgency. The company has made this explicit in a recent blog post about the update, stating: “As a pioneer in both quantum and PQC, it’s our responsibility to lead by example and share an ambitious timeline.” It added that the goal is to “provide the clarity and urgency needed to accelerate digital transitions not only for Google, but also across the industry.”

This reflects a broader concern that organisations are underestimating the scale and complexity of the transition required.

This urgency also reflects the scale of what organisations are being asked to do. Moving from current cryptographic standards to post-quantum alternatives is not a simple upgrade. It involves identifying where encryption is used, replacing algorithms across systems, updating infrastructure, and ensuring compatibility across supply chains and partners.

The UK’s National Cyber Security Centre has already described this transition as a “complex change programme”, highlighting the scale of the task facing organisations.

The Gap Between Awareness And Readiness

Despite growing awareness of quantum risks, most organisations are not ready.

Part of the challenge is that the threat itself is difficult to fully understand. Quantum computers are often described as vastly more powerful than today’s systems, and for many businesses, this means the practical implications are unclear. Understanding how and when these machines could break existing encryption, and what that means for real-world systems, is not straightforward without some specialist knowledge.

Research cited in industry reports suggests that while a majority of businesses expect quantum-enabled attacks within the next five years, only a small proportion have a clear roadmap in place to address them.

This means that while many organisations accept that quantum threats are coming, there is still uncertainty about how serious those risks are, when they are likely to materialise, and what practical steps should be taken. That uncertainty can easily lead to delays or a tendency to wait for clearer standards and tools rather than acting early.

Google’s revised timeline challenges that assumption by bringing forward its own migration target and signalling that waiting may not be a viable strategy.

What Google Is Already Doing To Help

Alongside announcing its timeline update, Google says it is actively deploying post-quantum cryptography across its own platforms.

The company has highlighted how Android 17 will integrate PQC digital signature protection using ML-DSA, aligned with standards from the National Institute of Standards and Technology.

This is part of a broader effort to build what Google describes as a “new, quantum-resistant chain of trust”, ensuring that systems remain secure even as computing capabilities evolve.

Google says it has also been working on PQC for several years, including deploying quantum-resistant key exchange mechanisms in Chrome and internal systems, and contributing to global standards development, all of which points to the fact that the transition is not only necessary, but already underway.

Why This Matters

The implications extend far beyond large technology providers. For example, encryption underpins core business functions, from securing customer data and financial transactions to protecting intellectual property and ensuring the integrity of software and communications.

If current cryptographic systems become vulnerable, the impact will not be limited to future systems. Data encrypted today could still be exposed years later if it is harvested and stored by attackers now.

That means the risk is already present, even if the technology required to exploit it fully is not yet available.
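A common rule of thumb for this exposure, often attributed to researcher Michele Mosca, can be sketched as simple arithmetic. The numbers below are illustrative assumptions, not predictions: data is at risk if its required secrecy lifetime plus your migration time exceeds the years remaining until a cryptographically relevant quantum computer arrives.

```python
# Mosca's rule-of-thumb for "store now, decrypt later" exposure
# (all figures below are illustrative assumptions, not predictions).

SECRECY_YEARS = 10    # how long the data must stay confidential (assumption)
MIGRATION_YEARS = 4   # time to complete a PQC migration (assumption)
QDAY_YEARS = 6        # years until a large-scale quantum computer (assumption)

# Data harvested today is exposed if it must stay secret, and migration
# must still run, beyond the point at which Q-Day arrives.
at_risk = SECRECY_YEARS + MIGRATION_YEARS > QDAY_YEARS
print("Harvested data at risk" if at_risk else "Within the safe window")
```

Even with generous assumptions, long-lived data tips the inequality quickly, which is why migration planning matters well before Q-Day itself.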

What Does This Mean For Your Business?

For most organisations, the key issue here is not whether quantum computing will affect them, but how prepared they are for the transition it will require.

Google’s updated timeline suggests that preparation needs to begin sooner rather than later, particularly for systems that rely on long-lived data or digital signatures that must remain secure for many years.

This will involve building what is often referred to as crypto agility, the ability to update cryptographic algorithms without disrupting services, as well as developing a clear inventory of where and how encryption is used across the organisation. In practical terms, that means identifying where sensitive data is stored, how it is protected in transit and at rest, and which systems rely on public key cryptography that may need to be replaced.
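Crypto agility can be illustrated with a minimal sketch: calling code signs and verifies through a named-algorithm registry, so an algorithm can be replaced (for example with a post-quantum scheme) as a configuration change rather than a code change. HMAC variants stand in here purely for illustration; a real system would register asymmetric and PQC schemes behind the same interface.

```python
import hashlib
import hmac

# Minimal crypto-agility sketch: callers never name an algorithm's
# internals, only a registry key, so swapping algorithms later does
# not require changing the calling code. HMAC stands in for
# illustration; real registries would hold asymmetric/PQC schemes.

ALGORITHMS = {
    "hmac-sha256": lambda key, msg: hmac.new(key, msg, hashlib.sha256).digest(),
    "hmac-sha3-256": lambda key, msg: hmac.new(key, msg, hashlib.sha3_256).digest(),
}

def sign(alg: str, key: bytes, msg: bytes) -> bytes:
    return ALGORITHMS[alg](key, msg)

def verify(alg: str, key: bytes, msg: bytes, tag: bytes) -> bool:
    # Constant-time comparison to avoid leaking tag information.
    return hmac.compare_digest(sign(alg, key, msg), tag)

# Switching algorithms is a config change, not a code change:
tag = sign("hmac-sha256", b"secret-key", b"payload")
assert verify("hmac-sha256", b"secret-key", b"payload", tag)
```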

It also means starting to assess whether existing platforms, applications and suppliers are capable of supporting post-quantum cryptography, and whether updates, migrations or architectural changes will be required. Some organisations are already beginning to test quantum-resistant algorithms in non-critical systems to understand performance, compatibility and operational impact before wider rollout.

Engagement with suppliers and partners will also be important, as cryptographic systems rarely operate in isolation and weaknesses in third-party systems can undermine otherwise secure environments.

Taken together, Google’s update suggests that the window for treating quantum security as a future concern is narrowing, and that organisations that begin mapping, testing and planning now will be in a far stronger position than those that wait.

Scammers Using Virtual Smartphones To Slip Past Fraud Checks

Fraudsters are increasingly using rentable “cloud phones” that look and behave like real smartphones, creating a new problem for banks, fintechs and businesses that have come to trust the device in a customer’s hand.

Now Using Cloud Phones

According to a recent report by security firm Group-IB, a growing number of scammers are no longer relying on crude emulators or racks of physical handsets to run fraud at scale. Instead, they are turning to cloud phones, effectively remote Android devices running in datacentres, which can be rented cheaply and accessed over the internet.

These services are marketed as legitimate tools for developers, marketers or businesses managing multiple accounts but, in practice, it seems they are also now being widely abused. As the report explains, “what began as a simple scheme to inflate social media metrics has evolved into a sophisticated threat that is quietly reshaping the economics of digital fraud.”

This matters because many fraud controls were built around the idea that fake devices tend to look fake. For example, emulators often leak obvious signs, such as unusual hardware configurations, missing sensor data or other artefacts that security teams know how to spot.

Cloud phones, however, don’t give off these more obvious signals. As Group-IB says, they are “for all intents and purposes… real phones, running genuine firmware, exhibiting natural sensor behavior, and presenting valid hardware attestation.” In other words, they are designed to look authentic at the technical level.

Why They Are So Hard To Detect

Fraud detection systems have traditionally relied on identifying unusual devices, spotting changes in device identity, or flagging suspicious technical signals, all of which have proven effective against earlier generations of emulators and virtual environments.

Cloud phones, however, are designed to avoid exactly those signals by maintaining consistent device characteristics over time while presenting realistic hardware identifiers, software environments and behavioural patterns that closely resemble those of genuine smartphones.

The report highlights that “what makes this threat unlike any other is its invisibility,” noting that activity from these devices can “appear indistinguishable from a legitimate device” to existing detection systems.

Each cloud phone instance can have its own device ID, IP address, geolocation and system profile. Unlike traditional emulators, which often expose tell-tale inconsistencies, these environments are engineered to behave like genuine smartphones over time.

It’s this consistency that’s critical because it allows a device to build up a trusted history, which can then be exploited for fraud without triggering alerts designed to detect sudden changes.

How The Fraud Works In Practice

Group-IB’s report traces how this technology has moved from social media manipulation into financial crime. One of the most significant use cases is the creation and operation of so-called ‘dropper’ or ‘mule accounts’, which are accounts used to receive and move stolen funds.

For example, it seems that fraudsters can open or verify accounts using a cloud phone, then continue to access those accounts from the same virtual device. In some cases, access to both the account and the associated cloud phone instance can be sold on to other criminals.

As Group-IB explains, this creates a powerful advantage for the fraudsters because the same device signals are preserved throughout, meaning “the same device accessing the account that has always accessed it” appears to be in use (once again, it’s the consistency that works).

From a fraud detection perspective, that removes one of the key triggers for additional checks, i.e., there’s no obvious device change, no sudden shift in behaviour, and no immediate reason to challenge the transaction.

The Scale Of The Problem

This development comes at a time when authorised push payment fraud (where victims are tricked into sending money directly to a scammer, often through social engineering) is already a major issue. For example, in the UK alone, losses reached £485.2 million in 2023, with mule accounts playing a central role in moving stolen funds.

Cloud phones make these accounts easier to create, operate and scale. Group-IB says they have enabled “industrial-scale financial fraud” by lowering the cost and complexity of maintaining large numbers of apparently legitimate devices.

It seems that using cloud phones also gives fraudsters an extra economic advantage. Instead of investing in physical phone farms, fraudsters can now rent infrastructure on demand, making it accessible to a wider range of actors with relatively low upfront cost.

Why This Challenges Existing Security Models

For years, device fingerprinting has been a reliable layer in fraud prevention. If an account is accessed from a new or suspicious device, that can trigger step-up authentication or block the transaction.

Cloud phones weaken that model because the device itself is no longer a strong signal of trust if it can be rented, replicated and transferred between users while maintaining a consistent identity.

This doesn’t mean existing controls are obsolete, but it does mean they are no longer sufficient on their own. Group-IB’s report argues that detection must, therefore, move beyond simple device checks and towards a more layered approach.

Group-IB concludes that fraud prevention needs “device-environment correlation, infrastructure-level visibility, behavioral modeling, and graph-based analytics” to identify patterns that individual device checks may miss.
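The graph-based idea can be sketched in miniature: instead of trusting each device in isolation, link accounts that share a device fingerprint and flag devices serving unusually many accounts, a pattern that per-device checks would miss. The login data and threshold below are made up for illustration.

```python
from collections import defaultdict

# Toy sketch of graph-style correlation (illustrative data only):
# link accounts through shared device fingerprints and flag devices
# that serve unusually many accounts.

events = [  # (account_id, device_id) login observations (made up)
    ("acct-1", "dev-A"), ("acct-2", "dev-A"), ("acct-3", "dev-A"),
    ("acct-4", "dev-B"), ("acct-5", "dev-A"), ("acct-6", "dev-C"),
]

accounts_per_device = defaultdict(set)
for account, device in events:
    accounts_per_device[device].add(account)

THRESHOLD = 3  # max accounts per device before review (assumption)
flagged = {d: accts for d, accts in accounts_per_device.items()
           if len(accts) >= THRESHOLD}

for device, accts in sorted(flagged.items()):
    print(f"{device}: {len(accts)} linked accounts -> review")
```

Each account in this toy example looks unremarkable on its own; only the relationship through the shared device reveals the mule-account pattern.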

What Does This Mean For Your Business?

For financial institutions, the message from this report is clear. A device that looks genuine can no longer be treated as strong evidence that the activity behind it is genuine too. Fraud detection will need to focus more on behaviour, context and relationships between accounts rather than relying heavily on device identity alone.

For other businesses, particularly those using mobile apps for onboarding, payments or identity verification, this is a warning that mobile trust models are becoming more complex. Controls that once worked well may now need to be reassessed.

There is also a broader operational implication. As fraud infrastructure becomes easier to rent and scale, the barrier to entry for sophisticated attacks is lowering. That increases the likelihood that smaller organisations, not just major banks, will encounter more advanced fraud techniques.

This represents a clear change in how fraud is delivered, as the fraudster no longer needs to manage large numbers of physical devices and can instead access a virtual environment that behaves like a real smartphone and is designed to pass as one.

Taken together, this research seems to suggest that the balance of trust is changing, with the device in the user’s hand, or at least the one it appears to be, no longer something businesses can rely on without question.

Most IT Leaders Don’t Fully Trust Their Cybersecurity Vendors

New global research shows that while organisations rely heavily on cybersecurity providers, only a small minority fully trust them, exposing a growing gap between dependence and confidence.

A Critical Dependency (With Limited Confidence)

Cybersecurity vendors essentially sit at the heart of modern business operations, responsible for protecting systems, data, and day-to-day continuity. For many organisations, particularly those without large internal IT teams, these providers effectively act as an extension of the business itself.

However, new research from Sophos suggests that this reliance is not matched by confidence. Its Cybersecurity Trust Reality 2026 report, based on a survey of 5,000 IT and security leaders across 17 countries, found that only 5 per cent of respondents say they fully trust their cybersecurity vendors.

This disappointing statistic suggests that businesses are placing critical operational resilience in the hands of providers they don’t completely trust, which raises questions about how risk is actually being managed in practice.

Why Is There A Trust Issue?

One of the most striking findings is not just the lack of trust, but how difficult organisations find it to assess vendors in the first place.

According to the report, 79 per cent of organisations struggle to evaluate the trustworthiness of new cybersecurity providers, while 62 per cent report the same challenge with vendors they already use. This suggests that trust gaps do not disappear once a contract is signed.

The reasons for this are largely practical rather than emotional. For example, many organisations report that vendor information is either not detailed enough, difficult to interpret, or inconsistent across sources. Others admit they lack the internal expertise needed to properly assess technical claims.

As the report explains, organisations are often left trying to validate complex security capabilities without clear, standardised evidence, making meaningful comparisons between providers difficult.

This is where trust begins to shift from a perception issue to a structural one. If organisations cannot independently verify what vendors claim, trust becomes inherently fragile.

Trust As A Measurable Risk Factor

The report makes the important point that, within organisations, trust is no longer seen as a soft or abstract concept, but as something that directly influences risk.

As Sophos notes, “Trust is not an abstract concept in cybersecurity, it’s a measurable risk factor,” highlighting how uncertainty around vendor capability feeds directly into business risk assessments and decision-making.

The report reinforces this further, stating that “CISOs are being asked to prove trust, not assume it,” reflecting the growing expectation that confidence in vendors must be backed by evidence rather than reputation.

This is reflected in how organisations report the impact of low trust. More than half, 51 per cent, say it increases concern that they are more likely to experience a significant cyber incident.

Other consequences are more operational. For example, 45 per cent say it makes them more likely to switch vendors, while others report increased oversight requirements and reduced confidence in their overall security posture.

In effect, a lack of trust doesn’t just create anxiety; it drives cost, complexity, and ongoing disruption.

A Disconnect Between IT And Leadership

Another layer of complexity seems to come from internal misalignment. The report found that 78 per cent of organisations experience differences of opinion between IT teams and senior leadership when assessing vendor trustworthiness.

This reflects the different priorities at play. For example, technical teams tend to focus on performance, reliability, and day-to-day effectiveness, while leadership is more concerned with accountability, compliance, and reputational risk.

When those perspectives do not align, decision-making becomes more difficult. Vendor selection, contract renewal, and incident response planning can all be affected by differing views on how much confidence should be placed in a provider.

What Builds Trust?

The research also highlights a clear shift in what organisations look for when evaluating vendors.

Across both IT teams and senior leadership, the strongest driver of trust is no longer brand reputation or marketing claims, but verifiable evidence. This includes independent certifications, third-party assessments, documented vulnerability disclosures, and demonstrable operational maturity.

Transparency also plays a central role. Organisations increasingly expect clear communication during incidents, visibility into how security processes operate, and evidence that issues are identified and resolved effectively.

As the report makes clear, trust is something that must be demonstrated continuously, not assumed.

This becomes even more important as AI is integrated into cybersecurity tools. Organisations are now asking not just what a system does, but how it makes decisions, how it is governed, and how risks are managed.

What Does This Mean For Your Business?

For UK businesses, this research highlights a critical issue that often sits beneath the surface of cybersecurity strategy.

Most organisations assume that choosing a reputable vendor is enough to reduce risk. In reality, the challenge is not just selecting a provider, but being able to verify, monitor, and validate what that provider is doing over time.

This means trust can no longer be treated as a one-off decision made during procurement. It needs to be actively maintained through ongoing oversight, clear reporting, and defined accountability.

It also suggests that businesses should place greater emphasis on evidence when assessing vendors. Certifications, independent testing, and transparent disclosure practices are becoming essential, not optional.

There is also a need to address internal alignment. Ensuring that IT teams and leadership share a common understanding of vendor risk can help avoid fragmented decision-making and improve overall resilience.

Ultimately, the findings show that cybersecurity is not just about technology, but about confidence in the organisations delivering it. When that confidence is missing, even the most advanced tools can leave businesses feeling exposed.

Each week we bring you the latest tech news and tips that may relate to your business, rewritten in a jargon-free style.

Archives