Sustainability-in-Tech : Satellite To Detect Methane Leaks From Space

German climate technology company AIRMO is developing a new satellite system designed to precisely detect methane emissions from individual sources on Earth, potentially transforming how greenhouse gas leaks are monitored worldwide.

Why Methane Monitoring Is Becoming Critical

Methane is one of the most powerful greenhouse gases contributing to global warming. Scientists estimate it accounts for roughly 30 per cent of the warming currently affecting the planet.

Despite the seriousness of this situation, methane emissions from individual facilities are often poorly measured. For example, many oil and gas operators still rely on estimates rather than direct measurements, which can lead to significant underreporting.

At the same time, regulatory pressure is increasing. The European Union’s Methane Regulation and initiatives such as the Oil and Gas Methane Partnership (OGMP) 2.0 now require far more accurate emissions reporting across the energy sector.

These developments are driving demand for monitoring technologies capable of detecting leaks at specific sites rather than relying on broad regional estimates.

How AIRMO’s Space Sensor Technology Works

AIRMO, founded in 2022 and based in Berlin and Luxembourg, is developing a compact sensor payload that can be mounted on small satellites.

The system combines a short-wave infrared (SWIR) pushbroom spectrometer with a proprietary micro LiDAR (Light Detection and Ranging) system. Together, these sensors analyse how methane interacts with light reflected from the Earth’s surface.

The spectrometer identifies the chemical signature of methane, while the LiDAR component measures atmospheric conditions such as aerosols and wind patterns that can affect measurement accuracy.

According to the company, this combination significantly improves the precision of emissions measurements compared with spectrometer-only systems.

The result is a sensor small enough to fly on nanosatellites but powerful enough to detect methane plumes from very small individual sources.

Why Combining LiDAR And SWIR Improves Accuracy

Traditional satellite monitoring systems often rely only on spectrometry (identifying substances by how they absorb light). While effective for detecting large emissions areas, these systems can struggle to identify smaller leaks from individual industrial assets.

AIRMO’s approach adds LiDAR to correct for atmospheric effects that can distort readings.

The company explains that its satellite system is designed to deliver “reliable and accurate point source measurements”, allowing emissions to be traced back to specific infrastructure.

It also says the technology allows operators to “allocate emissions to individual sources — a huge leap forward in global emissions monitoring technology innovation.”

This level of precision could make it easier for regulators and companies to identify leaks quickly and repair them before they release large quantities of methane.

From Drones And Aircraft To Satellites

AIRMO doesn’t just make satellites. The company already deploys methane monitoring technologies on drones, aircraft and ground-based systems. These tools are currently used to inspect pipelines, storage facilities, LNG terminals and other energy infrastructure.

Drone-mounted sensors can detect leaks as small as one gram of methane per hour, while aircraft systems can survey hundreds of square kilometres during a single flight.

These airborne systems allow operators to identify leaks during inspections. Satellites could extend that capability by providing continuous monitoring across large geographic areas.

In Orbit By 2027

AIRMO says it plans to launch its first satellite in 2027 in partnership with Bulgarian satellite manufacturer EnduroSat.

The longer-term goal is to deploy a constellation of 12 satellites capable of providing near real-time global methane monitoring.

According to AIRMO, its satellite network will eventually provide “global and near real time capabilities” for monitoring greenhouse gas emissions.

Other Companies

AIRMO is actually part of a rapidly growing field of companies developing space-based methane monitoring systems.

Several satellite operators already provide emissions detection services. Canadian company GHGSat, for example, operates satellites capable of detecting methane emissions from individual industrial sites.

Other initiatives include MethaneSAT, backed by the Environmental Defense Fund, and the Carbon Mapper project, which is developing satellites designed to detect methane and carbon dioxide emissions from major industrial sources.

These systems are helping researchers and regulators identify previously unknown methane leaks around the world.

AIRMO’s technology aims to improve on existing systems by combining spectrometry and LiDAR sensors in a smaller satellite platform, potentially delivering higher resolution measurements with lower deployment costs.

How A Satellite Constellation Could Transform Emissions Monitoring

If successful, AIRMO’s satellite constellation could provide continuous global coverage of methane emissions.

Once fully deployed, the system is expected to deliver frequent updates from orbit, allowing operators and regulators to detect leaks much faster than current inspection-based approaches.

Rapid detection is important because methane leaks can persist unnoticed for long periods, releasing large quantities of greenhouse gases into the atmosphere.

Improved monitoring could help energy companies identify and repair leaks more quickly while also supporting more accurate emissions reporting.

What Does This Mean For Your Organisation?

For energy companies and infrastructure operators, technologies such as AIRMO’s satellite monitoring system could significantly change how methane emissions are measured and managed.

Regulators are increasingly demanding site-level emissions data rather than estimates, particularly under EU methane regulations and international reporting frameworks such as OGMP 2.0. Many UK organisations that import, trade or finance energy infrastructure are already affected by these reporting expectations.

Satellite-based monitoring could help companies identify leaks faster, verify emissions data and demonstrate compliance with emerging environmental standards. This may reduce regulatory risk and help organisations respond more quickly when emissions problems occur.

For UK businesses involved in finance, insurance and investment, improved methane monitoring could also provide more reliable data when assessing climate risk, sustainability performance and ESG reporting claims.

Companies involved in supply chains linked to oil, gas and energy infrastructure may also face increasing expectations from regulators, investors and customers to demonstrate that emissions are being measured accurately and managed responsibly.

As satellite sensing technologies continue to evolve, tools capable of detecting emissions from individual facilities may become an increasingly important part of global climate monitoring and corporate environmental accountability.

Video Update Find EVERYTHING With New 365 Copilot Search

Microsoft’s new 365 Copilot Search is designed to help you find virtually anything across your Microsoft 365 environment, emails, files, chats and apps, and this video shows how it can surface the exact information you need in seconds instead of hunting through folders and messages.

[Note – To Watch This Video without glitches/interruptions, It may be best to download it first]

Tech Tip : Check Which Browser Extensions Can Access Your Business Data

Browser extensions can read and change the content of websites you visit, so regularly reviewing and removing unused extensions in Chrome or Microsoft Edge is a quick way to reduce the risk of unnecessary access to email, documents and other business information viewed in your browser.

Why This Matters

Browser extensions are small add-ons that provide useful features such as password managers, AI assistants, grammar checkers or screenshot tools.

To work properly, many extensions request permission to read and change the data on websites you visit. This can include pages in services such as Outlook, Gmail, Microsoft 365, Google Workspace, CRM systems or internal company tools.

Most extensions are legitimate. The problem is that people often install them, forget about them and leave them running for years.

Security researchers regularly highlight cases where browser extensions are sold to new developers, updated with malicious code, or granted far broader permissions than users realise.

Reviewing extensions periodically helps reduce unnecessary access to sensitive business data.

How To Check Extensions In Google Chrome

– Open Chrome.

– Click the three-dot menu in the top-right corner.

– Select Extensions.

– Click Manage Extensions.

You will now see a list of all installed extensions.

For each extension you can:

– Turn it off using the toggle switch.

– Click Remove to uninstall it.

– Select Details to see what permissions it has, including whether it can read data on websites you visit.

If you do not recognise an extension or no longer use it, removing it is usually the safest option.

How To Check Extensions In Microsoft Edge

– Open Microsoft Edge.

– Click the three-dot menu in the top-right corner.

– Select Extensions.

– Choose Manage extensions.

You will see all installed extensions.

From here you can:

– Disable extensions using the toggle switch.

– Click Remove to uninstall them.

– Select Details to view the permissions each extension has.

Edge also shows which extensions are allowed to read and change site data, helping you decide whether they should remain installed.

What To Look For

When reviewing extensions, pay particular attention to:

– Tools you installed once but no longer use.

– Old productivity or AI tools you were testing.

– Screenshot or PDF utilities you forgot about.

– Extensions you do not recognise.

– Anything with permission to read and change all website data.

Removing unnecessary extensions reduces the number of third-party tools that can interact with the information displayed in your browser.

A Practical Approach

Set a reminder every few months to review your browser extensions.

Most business users discover several extensions they no longer need. Removing them is a simple way to improve security and keep your browser running efficiently.

Medical Chatbot Hacked Into Giving Dangerous Advice

Security researchers have demonstrated that a healthcare AI chatbot used in a US medical pilot can be manipulated into producing dangerous advice and misleading clinical notes, raising new questions about how safely AI can operate inside real healthcare systems.

What Happened?

Doctronic is a US telehealth platform built around an AI medical assistant (a medical chatbot) designed to help patients understand symptoms, manage conditions and connect with licensed doctors. The system is intended to act as a first point of contact in a digital care pathway, gathering patient information, offering guidance and preparing summaries for clinicians.

The idea of Doctronic is that patients can consult the AI about symptoms, medications or health concerns, and the system prepares structured information that helps doctors review cases more quickly.

Can Be Manipulated

However, the platform has recently attracted attention after being examined by Mindgard, an AI security company that specialises in testing the safety of AI systems.

In its research, Mindgard showed that the chatbot could be manipulated into spreading vaccine conspiracy theories, recommending methamphetamine as a treatment for social withdrawal, generating altered clinical guidance and even advising users how to cook methamphetamine.

According to the researchers, the issue stems from weaknesses in the chatbot’s internal instructions. As Mindgard explained: “System prompts are the ‘keys to the kingdom’ when it comes to chatbots.”

The issue is particularly sensitive because Doctronic is currently being used in a pilot programme in the US state of Utah. The project operates within a regulatory “sandbox”, which allows new technologies to be tested under controlled conditions. As part of the trial, the system can assist with managing patient queries and renewing certain existing prescriptions before cases are reviewed by a human clinician.

Why The Exploit Matters

The issue is more serious than a typical chatbot error or AI hallucination because Doctronic sits inside a healthcare workflow. The system generates structured medical summaries and guidance that clinicians may review as part of patient care. If that output is manipulated or incorrect, it could appear credible enough to influence how a case is interpreted.

The researchers warned that this creates a new type of risk. As they put it, “the most dangerous advice can come from the most well-intended of chatbots.”

How The Prompt Injection Works

According to Mindgard, the weakness it discovered involved a type of attack known as prompt injection.

Large language models (LLMs) operate based on internal instructions known as system prompts. These hidden instructions guide how the AI behaves, what rules it follows and what information it should refuse to provide.

Mindgard said it was able to trick the chatbot into revealing those internal instructions by manipulating how the conversation was framed. By convincing the system that the session had not yet begun, the researchers prompted it to recite its own internal instructions.

Once those instructions were exposed, the chatbot became easier to influence. The researchers then introduced fabricated regulatory bulletins and policy updates, which the system treated as legitimate information.

This allowed them to push the AI towards unsafe advice, including altered medication guidelines and fabricated medical guidance.

Why SOAP Note Persistence Raises The Stakes

The most concerning aspect of the experiment involved clinical documentation.

When users request a consultation with a human clinician, the system generates a structured medical summary known as a SOAP note. These documents summarise the patient’s situation and provide context before the appointment begins.

Mindgard found that manipulated information introduced during a compromised session could appear in these summaries and be passed on to clinicians.

In its report, the company warned that this could “actively undermine the human professionals who might trust its authoritative-looking output.”

While the document itself is not a prescription, it becomes part of the clinical context surrounding the patient. In busy healthcare environments, that context can influence how clinicians interpret a case.

In other words, manipulated AI output could enter a legitimate medical workflow.

What Utah Says About The Limits Of The Pilot

Officials involved in the Utah pilot have, however, been keen to point out that the programme includes safeguards.

The trial is limited to renewing certain existing medications and does not allow prescriptions for controlled substances. Additional checks are also applied before any prescription renewal is approved.

Doctronic has said it has reviewed the research findings and continues to strengthen its safeguards against adversarial prompts and manipulation attempts.

Those limitations reduce the immediate risk in this particular pilot. However, the research highlights the types of challenges developers may face as AI systems move deeper into healthcare processes.

The Wider Evidence On Medical Chatbot Risk

This incident also aligns with concerns raised by other recent academic research.

A major study led by the University of Oxford earlier this year examined how people interact with AI systems when seeking medical advice. The study compared people using AI chatbots with those using traditional sources of information.

Researchers found that participants using AI tools were no better at identifying appropriate courses of action than those relying on other methods such as online searches. In some cases, users struggled to interpret the mixture of correct and incorrect advice produced by the models.

The study concluded that strong performance on medical knowledge tests does not necessarily translate into safe real-world interactions with patients.

Crucially, the researchers argued that systems intended for healthcare use must be evaluated in real-world conditions with human users before being widely deployed.

What Does This Mean For Your Business?

For healthcare providers and regulators, the findings reinforce a familiar lesson from other safety-critical industries. Introducing AI into a workflow does not simply add automation. It changes how information flows and how people trust that information.

Healthcare systems already rely on structured documentation and clinical summaries. If AI systems begin generating those summaries, their reliability becomes a core safety question rather than a technical curiosity.

For organisations developing AI tools in high-trust environments such as healthcare, finance or legal services, the message is that technical accuracy alone is not enough. Systems must also be resilient to manipulation, misuse and subtle changes in context.

The Doctronic case illustrates that prompt security, audit trails and robust human oversight are not optional features but fundamental safeguards when AI systems begin influencing decisions that affect real people.

Although AI may eventually become a valuable support tool in healthcare, the evidence emerging so far suggests that the journey from promising technology to safe clinical practice is likely to be longer and more complex than first thought.

Why OpenAI Has Agreed To Deploy AI Inside Pentagon Systems

OpenAI has reached an agreement with the Pentagon to deploy its AI models inside classified US government systems, highlighting how rapidly artificial intelligence is becoming part of national security infrastructure.

Department of Defense?

Before getting any further into this new story, it should be noted that, in its public statements, OpenAI refers to the US defence department as the “Department of War”, a name used by the current administration. However, many Americans continue to use the name the Department of Defense, noting that the department is formally established in US law under that title and that any official renaming would require approval from Congress.

What The System Is

Getting back to the main story here, OpenAI is the US technology company behind widely used AI models such as ChatGPT and GPT-4-class systems. These models are designed to analyse text, generate reports, summarise information and assist with complex analytical tasks.

Organisations already use similar AI systems for activities such as software development, intelligence analysis, planning and research. Governments have increasingly been exploring how these capabilities could support national security and defence operations.

What Happened With OpenAI and the Pentagon?

OpenAI confirmed in a recent announcement on its website that it has reached an agreement allowing its AI models to be deployed in classified government environments operated by the Pentagon.

According to OpenAI, these systems will operate inside secure networks used for sensitive national security work, and the deployment will run through a cloud-based architecture rather than installing the models directly on military hardware.

The company says this approach allows the government to use advanced AI capabilities while OpenAI retains control of its safety systems.

In its online announcement, OpenAI said collaboration between governments and AI developers will increasingly be required as the technology becomes more powerful. As the company wrote: “We believe strongly in democracy. Given the importance of this technology, we believe that the only good path forward requires deep collaboration between AI efforts and the democratic process.”

Why OpenAI Says The Deal Is Necessary

OpenAI argues that modern defence organisations are likely to require increasingly capable AI systems.

Military planners already use AI in areas such as intelligence analysis, operational modelling, logistics planning and cyber defence because these systems can process large volumes of information and identify patterns that may be difficult for human analysts to detect quickly.

OpenAI said it believes providing these capabilities with clear safeguards is preferable to governments relying on less controlled deployments.

As the company explained: “We think the US military absolutely needs strong AI models to support their mission especially in the face of growing threats from potential adversaries who are increasingly integrating AI technologies into their systems.”

How The Deployment Will Work

OpenAI says its models will not be embedded directly into weapons systems or military hardware.

Instead, the deployment will operate through cloud-based APIs managed by OpenAI. This architecture allows the company to maintain its safety controls, monitoring systems and model updates.

OpenAI also says cleared company personnel will remain involved in the deployment.

As the company stated: “We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections.”

Areas Where It Cannot Be Used

Crucially, for many people, the agreement also establishes three areas where OpenAI technology cannot be used. These include mass domestic surveillance, directing autonomous weapons systems and making high-stakes automated decisions.

Why The Deal Attracted Attention

The agreement emerged during a period of tension between the Pentagon and parts of the AI industry.

Anthropic, one of OpenAI’s main competitors, had been negotiating with the Pentagon but refused to remove safeguards limiting the use of its models for domestic surveillance or fully autonomous weapons systems.

After negotiations broke down, the Pentagon designated Anthropic a “supply chain risk”, preventing US defence contractors from continuing to use its technology.

OpenAI’s agreement followed shortly afterwards, prompting debate about how AI companies should engage with military organisations.

The Wider Context Of Military AI

The OpenAI agreement is part of a broader expansion of AI inside US defence systems.

Elon Musk’s AI company xAI has also reached an agreement allowing its Grok model to be used in classified military networks. As US news website Axios reported: “Elon Musk’s artificial intelligence company xAI has signed an agreement to allow the military to use its model, Grok, in classified systems.”

Axios also highlighted the strategic significance of the move, noting that “up to now, Anthropic’s Claude has been the only model available in the systems on which the military’s most sensitive intelligence work, weapons development and battlefield operations take place.”

These developments seem to suggest that the Pentagon is actively expanding its access to multiple frontier AI systems.

What Does This Mean For Your Business?

For technology companies and organisations using AI systems, the agreement shows how advanced AI models are increasingly being treated as strategic infrastructure rather than purely commercial tools.

Governments are beginning to integrate AI capabilities into systems used for intelligence analysis, defence planning and national security operations. That process is likely to deepen as AI systems become more capable.

For AI developers, this creates a growing responsibility around governance, safeguards and oversight. Decisions about how models are deployed now involve legal, ethical and political considerations as well as technical ones.

For businesses more broadly, the story highlights a wider trend. As AI systems become more powerful and widely adopted, questions about acceptable use, risk management and operational oversight are moving from theoretical discussions into real-world policy decisions.

In short, the OpenAI agreement seems to show that the future of advanced AI will be shaped not only by technological innovation but also by how governments, companies and regulators decide these systems should be used.

Why Meta Will Allow Rival AI Chatbots On WhatsApp In Europe

Meta has agreed to allow rival AI chatbots to operate on WhatsApp in Europe for the next 12 months, but providers will have to pay a per-message fee to access the platform.

Regulatory Pressure

The decision follows regulatory pressure from the European Commission, which has been investigating whether Meta unlawfully restricted competition by blocking third-party AI assistants from WhatsApp.

What Triggered The Dispute

The dispute began in October 2025 when Meta updated the terms governing its WhatsApp Business platform. The change effectively prevented developers of general-purpose AI assistants from offering their chatbots through the WhatsApp Business API.

As a result, from 15 January 2026, the only AI assistant available directly on WhatsApp was Meta’s own product, Meta AI. Rival services such as ChatGPT, Claude and other conversational AI systems could not be integrated through the platform in the same way.

Several AI companies complained to regulators that the move disrupted their ability to reach users and could limit competition in the fast-growing AI assistant market.

Regulators Step In

The European Commission responded by opening a formal investigation into the policy. In February it issued a Statement of Objections outlining its preliminary view that the restriction could breach EU competition rules.

The Commission said Meta is likely to hold a dominant position in the European market for consumer communication apps through WhatsApp. Blocking rival AI assistants from accessing the platform could therefore limit competition in a rapidly developing technology sector.

Regulators also warned that WhatsApp acts as an important gateway for companies trying to reach consumers with digital services. Excluding third-party AI assistants could create barriers for smaller competitors seeking to enter or expand in the market.

In light of those concerns, the Commission said it was considering imposing interim measures to prevent serious and irreparable harm to competition while the investigation continues

Meta Changes Course

Shortly after the Commission signalled it might intervene, Meta announced a policy change.

The company said it will allow general-purpose AI chatbot providers to operate on WhatsApp through the Business API in Europe for the next 12 months. Meta said the move should remove the need for immediate regulatory intervention while the investigation runs its course.

In a statement, Meta said: “For the next 12 months, we’ll support general-purpose AI chatbots using the WhatsApp Business API in Europe in response to the European Commission’s regulatory process.”

The company added that the move should allow regulators time to complete their investigation without disruption, saying: “We believe that this removes the need for any immediate intervention as it gives the European Commission the time it needs to conclude its investigation.”

However, the concession comes with an important condition.

AI providers will be charged for each message their chatbot sends through the WhatsApp Business platform.

How The Pricing Works

Meta has introduced a new pricing category specifically for developers of AI assistants.

Under the policy, third-party AI providers must pay a fee for every non-template message sent to users through WhatsApp. A non-template message is a standard conversational reply rather than a pre-written automated notification.

According to Meta’s developer documentation, the price will vary depending on the country but typically ranges from about €0.0490 to €0.1323 per message.

That structure could become expensive for AI providers because chatbot interactions often involve multiple messages in a single conversation. A typical exchange with an AI assistant may include dozens of prompts and replies.

Meta has clarified that the new pricing model only applies to developers offering general-purpose AI assistants through the platform.

Businesses using WhatsApp for customer service automation, such as retailers deploying support chatbots, will not be affected by the change.

Where The Policy Applies

The charging model applies in markets where Meta is legally required to permit AI assistants to operate on the WhatsApp platform.

The system is already in effect in Italy and is being extended across a wide range of European countries including France, Germany, Spain, Ireland, the Netherlands, Sweden and Portugal.

Meta says these changes are designed to comply with regulatory requirements while ensuring the company can manage the technical demands created by AI chatbots operating on messaging infrastructure.

The Wider Competition Debate

The dispute reflects a growing global debate about competition in the artificial intelligence market.

Messaging platforms such as WhatsApp represent one of the most direct ways for AI assistants to reach large numbers of users. With more than two billion users worldwide, WhatsApp is a particularly valuable distribution channel.

Regulators are increasingly concerned that technology platforms that control these channels could favour their own AI services over those developed by competitors.

Meta has previously argued that the AI market remains highly competitive and that consumers can access AI assistants through many other routes, including dedicated apps, websites and operating systems.

What Does This Mean For Your Business?

For businesses developing or deploying AI assistants, the decision highlights how platform access and regulation are becoming key factors in the AI economy.

Messaging apps such as WhatsApp provide direct access to very large audiences and are increasingly seen as a natural place for AI assistants to interact with users. However, access to those platforms may depend on pricing models, regulatory decisions and the policies of the platform owner.

The introduction of per-message charges for AI providers also shows how quickly the economics of AI services can change when they rely on third-party infrastructure. Organisations planning to deliver AI services through messaging platforms will need to consider not only development costs but also ongoing usage fees and platform dependency risks.

Businesses building AI services should therefore monitor platform rules, competition investigations and integration costs carefully. As AI assistants become more embedded in messaging environments, the ability to access those ecosystems on fair and predictable terms may prove just as important as the technology itself.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in an techy free style. 

Archives