Sustainability-in-Tech : ‘Green Software’ Extends Device Lifespans
In this article, we look at how ‘green software’ can enable devices, such as phones, to run for longer and make them more carbon-efficient in operation.
The Carbon Cost of the Upgrading Cycle
A significant environmental impact of mobile phones comes not from their daily usage but from their production. For example, around 80 per cent of a phone’s total carbon emissions are generated during its manufacturing process, with only 20 per cent linked to its operational use. This means that the frequent cycle of upgrading devices has a substantial carbon cost. Each time a new phone is produced, considerable energy and resources are expended, increasing overall carbon emissions. This highlights an urgent need for more sustainable technology practices.
Green Software
One solution lies in green software, which aims to prolong the lifespan of devices by keeping them efficient for longer. By improving software to use fewer resources and run smoothly on older hardware, green software can reduce the pressure to upgrade, ultimately decreasing the environmental footprint associated with constant hardware production. It’s hoped that this approach not only helps conserve resources but also represents a meaningful way to minimise the carbon impact of our increasingly technology-driven lives.
The Environmental Cost of Technology and the Role of Software
With the growth of the information and communications technology (ICT) sector, the carbon footprint of technology is expected to escalate. In 2020, ICT accounted for around 1.4 per cent of global greenhouse gas emissions, and by 2040, that share is projected to rise to 14 per cent. This trend highlights an urgent need for sustainable practices in tech. Software efficiency can play a significant role in this, not only enabling devices to consume less energy but also extending their life through optimised performance. This approach may reduce carbon emissions, both by lowering the demand for new hardware and by making existing technology operate more efficiently.
The Difference With Green Software
While traditional software is often designed with user experience and functionality in mind, green software prioritises energy efficiency and carbon-conscious practices. Tools developed by organisations such as the Green Software Foundation, like the Software Carbon Intensity (SCI) metric, offer a way to measure software’s carbon footprint, covering both the direct emissions of the software and the embedded carbon of the hardware on which it runs. This approach is a step towards creating transparent, trackable measures of software’s environmental impact.
Extending Device Life with Green Software
Extending the lifespan of electronic devices can significantly reduce the need for new hardware production and its associated emissions. /e/OS is a notable player in the green software sphere, designed to provide extended support to older Android devices long after manufacturers have ended theirs. Unlike traditional operating systems that may introduce unnecessary features or “bloatware” that can slow a device down, /e/OS minimises resource use and runs efficiently on older hardware, even on devices over ten years old. By offering regular security updates and optimised performance, /e/OS helps users maximise the lifespan of their phones, reducing the need to upgrade prematurely.
It should also be noted here that, beyond its environmental benefits, /e/OS is marketed very much as privacy-centric, emphasising that it offers a “deGoogled” experience in which users can avoid data tracking while keeping essential smartphone features. This appeals to users who value both sustainability and data privacy, making it a well-rounded option in the green software landscape (other green software is also available). Its impact on reducing electronic waste is also noteworthy: each phone kept in use for an extra year prevents an estimated 55kg of CO₂ from being emitted through avoided production.
Carbon-Efficient Operations Through Green Coding Practices
Efficient coding practices are another core aspect of green software. Many modern applications run on cloud servers, where energy consumption is often unmonitored or underestimated. As highlighted by Asim Hussain, Executive Director of the Green Software Foundation, developers rarely seem to consider energy use in server applications due to a lack of monitoring tools. To tackle this, the Green Software Foundation (US-based, founded in 2021 as a global initiative launched by Microsoft, Accenture, GitHub, and ThoughtWorks) developed the SCI metric to measure the carbon intensity of software, allowing developers to track and optimise their applications. The Foundation’s Impact Framework enables developers to estimate emissions based on observable server resource usage, providing actionable insights to improve energy efficiency.
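The SCI formula itself is published by the Foundation: carbon per functional unit is ((E × I) + M) / R, where E is energy consumed, I is the grid’s carbon intensity, M is the amortised embodied carbon of the hardware, and R is the functional unit (such as requests served). As a minimal sketch in Python, with made-up example figures rather than real measurements:

```python
# Sketch of the Green Software Foundation's SCI formula:
#   SCI = ((E * I) + M) / R
# E = energy consumed by the software (kWh)
# I = location-based grid carbon intensity (gCO2e/kWh)
# M = embodied hardware carbon amortised over the measurement window (gCO2e)
# R = functional unit (e.g. API requests served)
# All figures below are illustrative examples, not real measurements.

def sci(energy_kwh: float, grid_intensity: float,
        embodied_gco2e: float, functional_units: int) -> float:
    """Return carbon intensity in gCO2e per functional unit."""
    operational = energy_kwh * grid_intensity
    return (operational + embodied_gco2e) / functional_units

# Example: 1.2 kWh consumed on a 400 gCO2e/kWh grid, with a 500 gCO2e
# embodied-carbon share, spread across 10,000 requests.
print(round(sci(1.2, 400, 500, 10_000), 3))  # prints 0.098 (gCO2e/request)
```

The key design point is the denominator: because the score is per functional unit, an application can grow its total footprint yet still improve its SCI by serving each request more efficiently.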
Code Smells
A further initiative, ecoCode (a France-based collaborative project that helps developers create energy-efficient code), identifies “code smells,” or signs that software could run more efficiently. By identifying inefficient code, such as unnecessary database queries or overly complex algorithms, ecoCode encourages developers to create lighter, more efficient applications. For example, as highlighted in a recent article on Yahoo by Tariq Shaukat, CEO of Sonar, “A lot [of code smells] would fall under the umbrella of overly complex code. The second [type] is things that run in an inefficient way: You’re updating or pulling data more frequently than you need to. Another one is bloat. How do you make your app as lean and streamlined as possible?”. Simplifying such code not only improves performance but also reduces the carbon footprint associated with the software’s operation. Companies that adopt ecoCode principles can potentially cut their operational emissions, creating software that uses fewer server resources without compromising functionality.
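To make the “pulling data more frequently than you need to” smell concrete, here is a hedged sketch (the function names, rates, and counter are invented for illustration, not taken from ecoCode): the same lookup repeated a thousand times hits the backend only once when a cache is added.

```python
# Illustration of one "code smell" named above: re-fetching identical
# data on every call instead of caching it. Names and figures are
# invented for this example.
from functools import lru_cache

DB_CALLS = 0  # counts how often we "hit the database"

def fetch_exchange_rate_uncached(currency: str) -> float:
    """Smelly version: queries the backend on every single call."""
    global DB_CALLS
    DB_CALLS += 1
    return {"EUR": 1.17, "USD": 1.27}[currency]  # stand-in for a real query

@lru_cache(maxsize=None)
def fetch_exchange_rate(currency: str) -> float:
    """Leaner version: repeated lookups are served from memory."""
    return fetch_exchange_rate_uncached(currency)

for _ in range(1_000):
    fetch_exchange_rate("EUR")  # 1,000 requests...
print(DB_CALLS)  # ...but only 1 backend hit
```

Fewer round trips mean less server work per user action, which is exactly the kind of reduction the SCI metric is designed to capture (for data that changes, a real implementation would add an expiry rather than caching forever).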
Examples of Green Software Companies and Their Impact
The green software landscape has seen a growing number of organisations committed to sustainability, each bringing unique solutions. Besides /e/OS (previously mentioned), other companies leading the charge in sustainable software and hardware solutions include:
– Fairphone. This company stands out for its ethical approach to mobile phone production. Though primarily focused on hardware, Fairphone’s software practices contribute to a longer device lifespan. Fairphone’s modular design allows users to easily replace or upgrade components, while its operating system is built to avoid bloatware, resulting in extended device functionality. The Fairphone 3, for example, received software updates for five years post-launch, significantly longer than many mainstream smartphones. This approach aligns with the company’s mission to reduce electronic waste, a priority for its environmentally conscious customer base.
– Mycroft AI. Headquartered in Kansas City in the US, this green software company takes a sustainability-focused approach to AI. Its open-source voice assistant focuses on resource-efficient operation and privacy. Unlike typical AI systems that constantly transmit data to central servers, Mycroft AI allows users to run the software locally, thereby reducing energy consumption and eliminating the need for large data centres. This minimises Mycroft AI’s overall carbon footprint and provides users with a privacy-friendly alternative to more data-intensive virtual assistants.
– Murena, the company behind /e/OS, also complements its mobile OS with a suite of privacy-focused applications, from email to cloud storage. The company’s commitment to open-source practices ensures transparency, allowing users to inspect and verify that the software prioritises minimal resource use and respects data privacy. Murena’s ecosystem, powered by low-impact services, is designed for users who want a comprehensive, privacy-respecting experience without the environmental impact of conventional, high-energy digital services.
– Sailfish OS, developed by the Finnish company Jolla, is a Linux-based mobile operating system designed to be energy-efficient and adaptable. Its lightweight architecture ensures that devices operate smoothly without excessive resource consumption, thereby extending battery life and reducing the need for frequent hardware upgrades. Sailfish OS supports a range of devices, including older models, promoting device longevity and reducing electronic waste. Additionally, its open-source nature allows for community-driven development, fostering transparency and continuous optimisation for energy efficiency.
– PostmarketOS, an open-source project based in Switzerland, aims to provide a sustainable alternative to traditional mobile operating systems. It is designed to run on a wide array of devices, including those no longer supported by their manufacturers, effectively extending their usable life. By offering a streamlined and bloatware-free experience, PostmarketOS reduces the energy consumption of devices, contributing to a lower carbon footprint. The project emphasises privacy and user control, aligning with the principles of green software by minimising resource use and maximising device longevity.
The Growing Importance of Software Sustainability
While the demand for sustainable tech solutions is increasing, the adoption of green software practices remains limited. For example, Gartner estimates that only 10 per cent of large companies currently include sustainability as a criterion in their software procurement, although this is expected to reach 30 per cent by 2027. This shift in priorities reflects a growing recognition among businesses of the importance of reducing their digital carbon footprint.
What About Big Tech Companies?
Microsoft, Google, and Intel could be considered green software companies insofar as they are members of the Green Software Foundation, actively working to reduce the environmental impact of their digital services. Microsoft, for example, has committed to becoming carbon negative by 2030 and is working on tools to help developers reduce energy consumption. By making their software more carbon-efficient, these big companies hope to lead the charge in digital sustainability.
Encouraging a Culture of Sustainability in Software Development
The transition to sustainable tech solutions is not without its challenges. Encouraging developers to prioritise energy efficiency requires a cultural shift within organisations. In an article recently published by Yahoo, for example, Peter Campbell, Director of Green Software at Kainos, discussed the challenges of integrating sustainability into software development. He noted, “We thought that if we educated internally and externally, it would get magical adoption from all our teams. Turns out it doesn’t work as simply as that. The culture piece is really hard, not just to get people to act, but to keep prioritising it. There are so many priorities from our customers that sustainability sometimes isn’t the loudest one.”
The Green Software Foundation’s free courses on sustainability in software aim to address this cultural challenge, equipping developers and engineers with the knowledge to build more efficient applications. These initiatives are important in making green software development a mainstream practice, ensuring that sustainability becomes an integral part of the digital landscape.
What Does This Mean For Your Organisation?
Looking ahead, green software holds promise not only for environmentally conscious consumers but also for businesses aiming to reduce their carbon footprint. For business users, incorporating green software could offer a practical path to extend device lifespans, reduce operational costs, and align with growing environmental expectations from customers and investors alike. As tools like /e/OS and PostmarketOS demonstrate, using lighter, bloat-free software can mean fewer disruptions, improved device performance, and greater privacy control, all of which are key benefits for organisations seeking sustainable, reliable, and secure digital tools.
For green software companies, the path forward is both challenging and ripe with opportunity. As seen with Mycroft AI and ecoCode, sustainable solutions in tech are gaining traction, with businesses increasingly recognising that energy-efficient software can directly translate to lower emissions. However, these companies also face the dual challenge of innovating in ways that are both carbon-efficient and market-competitive.
Big tech players, meanwhile, are under mounting pressure to demonstrate leadership in digital sustainability. With members like Microsoft and Google spearheading initiatives within the Green Software Foundation, there is hope that their influence could accelerate wider industry adoption of green software practices. Their commitments to carbon reduction, as in Microsoft’s ambition to be carbon negative by 2030, appear to reflect a shift in priorities, yet achieving these goals demands that green principles are integrated deeply within all levels of software development and hardware lifecycle management.
As for phone manufacturers, some, like Fairphone, are already paving the way with modular, long-lasting devices, while mainstream manufacturers are beginning to extend software support for their devices, a positive step but one that should expand much further. As consumer expectations for durability and sustainability grow, the pressure is mounting for manufacturers to adopt green software practices that can support hardware for longer periods. If big brands make this shift, they have the power to reshape the device industry, potentially reducing electronic waste at a global scale.
Video Update : Cut & Paste Enhancements With Copilot
As well as the standard cut and paste functions (including formatting), Copilot enables you to enhance your clipboard text dynamically, on the fly … it’s pretty neat!
[Note – To watch this video without glitches or interruptions, it’s best to download it first]
Tech Tip – Use “Auto-Hide Taskbar” to Increase Screen Space and Focus
The Auto-Hide Taskbar option hides the taskbar when not in use, maximising screen space and creating a cleaner desktop environment, which can enhance focus and reduce distractions. Here’s how to use it:
How to Enable Auto-Hide Taskbar
Open Taskbar Settings:
– Right-click on the taskbar (at the bottom of your Windows screen) and select Taskbar settings.
Toggle Auto-Hide:
– For Windows 10, under “Taskbar,” turn on the toggle for ‘Automatically hide the taskbar in desktop mode’.
– For Windows 11, scroll to “Taskbar behaviours,” then tick the box for ‘Automatically hide the taskbar’.
– When enabled, the taskbar will remain hidden until you move your mouse to the bottom edge of the screen, which is especially useful for smaller displays or a cleaner look.
How to Reverse It Back Again
Open Settings:
– Press Win + I to open the Settings app.
Search for Taskbar Settings:
– In the Settings search bar, type “taskbar settings”.
Select Taskbar Settings:
– Select ‘Taskbar settings’ from the search results.
– This takes you directly to the taskbar options where you can toggle Auto-hide on or off quickly.
– Alternatively, right-click any blank area on the taskbar itself (if it’s hidden, move your mouse to the bottom of the screen to reveal it) and choose Taskbar settings directly from the context menu.
Featured Article : How New Data Laws Will Affect You
Here, we look at how the Data Use and Access Bill is poised to reshape how our personal data is handled in the UK, and we review the significant changes it will bring, with implications for the NHS and beyond.
What Is the Data Use and Access Bill?
Introduced as a cornerstone of the government’s plan to modernise data governance, the Data Use and Access Bill aims to overhaul existing data laws to improve economic growth, streamline public services, and enhance data security. Originating from a need to update the UK’s data legislation post-Brexit, the bill seeks to replace or amend elements of the EU’s General Data Protection Regulation (GDPR) to better suit national interests. The government claims that streamlining data usage and access could generate £10 billion of economic benefit. While the exact date of its enactment remains uncertain, the bill is expected to come into force within the coming year, subject to parliamentary approval.
How Will It Affect Our Data Handling?
At the heart of the bill lies a fundamental shift in how personal data will be managed, accessed, and shared across both public and private sectors. For individuals, this means their data could be used more extensively to improve services, but it also raises concerns about privacy and consent.
In the context of the NHS, the bill mandates that all IT systems adopt common data formats, enabling real-time sharing of patient information such as pre-existing conditions, appointments, and test results between NHS trusts, GPs, and ambulance services. The Department for Science, Innovation and Technology (DSIT) estimates this could free up 140,000 hours of NHS staff time annually. The government envisions that by breaking down data silos, patient care will become more efficient, reducing medical errors and eliminating the need for repeat tests.
What About Patient Passports?
Many people will have heard the term ‘patient passport’. As part of the UK’s NHS digital transformation strategy, this will be the centralised digital record that holds a patient’s comprehensive health information, including medical history, test results, and treatment notes. It’s hoped that this passport will allow healthcare providers to access a patient’s entire medical record seamlessly across different healthcare settings, whether at GP surgeries, hospitals, or through ambulance services. By consolidating data, the aim of patient passports is to reduce redundancies, prevent repeated tests, and improve continuity of care, ensuring clinicians can make quicker, well-informed decisions in critical moments.
Privacy Warnings
However, privacy advocates have said that increased data sharing must be balanced with safeguards, including protecting patient passports from third-party access. One key question they are asking is: who exactly will have access to this sensitive health data? The potential involvement of multinational tech firms (known for less-than-stellar transparency records) adds to this concern. The Good Law Project (a key privacy advocate) has raised concerns about the NHS’s partnership with private data firms, especially Palantir, for managing the Federated Data Platform (FDP). It argues that without sufficient scrutiny, sensitive patient data could be open to misuse or could be shared without adequate patient control. The group has also highlighted potential issues with the National Data Opt-Out (NDOO), which allows patients to restrict their data from being used outside their direct care but doesn’t yet fully cover the FDP, sparking concerns that the NDOO’s limitations might not uphold patients’ data rights effectively.
Beyond Healthcare – The Police
Beyond healthcare, the bill also proposes allowing police forces to automate certain manual data tasks. Currently, officers must log each instance they access personal information on the police database. Automating such steps could save an estimated 1.5 million hours per year, enabling officers to focus more on frontline duties. While increased efficiency is welcomed, civil liberties groups express concern over potential overreach and lack of oversight. Liberty, a UK human rights organisation, points out that “automation without accountability could lead to unchecked surveillance and data misuse.”
Infrastructure Too
The bill also introduces the creation of a digital “National Underground Asset Register,” requiring infrastructure firms to upload data on underground pipes and cables. This initiative aims to reduce the 600,000 accidental strikes on buried assets annually, minimising disruption from roadworks and construction projects.
A Digital Register of Births and Deaths
Another aspect of the bill that’s drawn attention is the plan to create a digital register of births and deaths. The proposed register would simplify how vital records are accessed and managed, moving away from paper-based systems. A digital registry should, it’s argued, make it easier for individuals and relevant authorities to access official records, such as birth and death certificates. This digital transformation would also align with broader efforts to streamline public records, similar to electronic registration in other sectors.
Consumer Data
The bill also discusses enhancing how consumer data (like energy usage or purchasing history) might be used to provide personalised services. For example, individuals could use data about their energy consumption to choose better tariffs, or purchasing data could inform tailored online shopping deals.
The Digital Revolution in the NHS
The digital revolution within the NHS is a critical component of the broader objectives outlined in the Data Use and Access Bill. The government’s new 10-year strategy for the NHS in England aims to transform how patients interact with the health service, mirroring the convenience and accessibility offered by modern banking apps.
Currently, the NHS App’s functionality is limited due to the fragmented nature of patient records, which are held separately by GPs and hospitals. The government’s push for a single, unified patient record (the patient passport) is intended to bridge this gap. As Health Secretary Wes Streeting has stated, “Moving from analogue to digital is essential if we are to create a more efficient, patient-centred NHS” (BBC, 2023).
This shift is anticipated to speed up patient care, reduce redundant testing, and minimise medical errors. For example, immediate access to a patient’s full medical history could enable faster diagnosis and treatment decisions, potentially saving lives.
Open to Abuse?
However, this digital transformation is not without controversy. Privacy campaigners, such as MedConfidential (a UK group advocating for privacy and transparency in health data usage), have expressed concerns that a single patient record / patient passport system could be “open to abuse” if not properly safeguarded. The involvement of private firms like Palantir, which has been awarded contracts to create databases joining up individual records, exacerbates these fears. As Sam Smith of MedConfidential says, “Handing over vast amounts of sensitive health data to companies with questionable track records poses significant risks to patient confidentiality”.
Too Hasty?
There has also been a public backlash against the perceived haste in implementing these changes without adequate consultation. A “national conversation” has been launched to gather public input, but critics argue that more needs to be done to ensure transparency and trust. As Rachel Power, Chief Executive of the Patients Association, said in a Patients Association Statement (2023): “For far too long, patients have felt their voices weren’t fully heard in shaping the health service. Any digital transformation must put patients at the heart of its evolution.”
The Backlash and Privacy Concerns
Despite assurances, scepticism remains. For example, the launch of the public engagement exercise was marred by inappropriate and irrelevant submissions, suggesting a disconnect between the government’s intentions and public perception. Also, reports about patient passports and usage of wearable technology (like Fitbits) to monitor health conditions remotely (to offer convenience and improved care) have also raised further privacy issues.
The British Medical Association (BMA) has expressed caution, stating that any move towards increased data sharing must be accompanied by “rigorous ethical standards and patient consent”. Critics fear that without proper oversight, personal health data could be exploited by private companies or misused by the state.
What About the Financial Aspects?
Many have highlighted that the financial aspects can’t be ignored. For example, Prof Nicola Ranger, General Secretary of the Royal College of Nursing, has said (in an RCN Press Release, 2023) that any future plans will require “new investment” to be successful and that, “Digital transformation is not just about technology; it’s about investing in people and processes to make it work effectively.”
Efficiency Gains
To put figures on this, key examples of the efficiency savings the proposed Data Use and Access Bill could bring by streamlining data use across sectors (especially in healthcare and law enforcement) include:
– An estimated £10 billion boost to the economy (UK government), primarily through simplifying data access and by reducing administrative inefficiencies and fostering innovation across sectors.
– Saving NHS staff 140,000 hours by standardising data formats across NHS trusts, hospitals, and GPs. This saved time could then be redirected to patient care, improving treatment speed and accessibility for patients.
– Automation of routine data tasks, such as logging access to personal data in police databases, could free up 1.5 million hours annually for the police. This reduction in administrative tasks could allow more time for frontline work, which could strengthen law enforcement efficiency and public safety.
Balancing Efficiency and Privacy
The implications of the Data Use and Access Bill extend beyond immediate efficiency gains. By fostering a more data-driven approach, the UK hopes to position itself as a leader in the global digital economy. The government asserts that modernising data laws will not only improve public services but also attract investment and innovation in sectors like artificial intelligence and biotechnology.
Public Trust Needed
However, the success of this ambitious agenda hinges on public trust. Past experiences with data initiatives, such as the failed Care.data programme in 2016, have left a legacy of scepticism. That programme sought to share GP records for research and planning but was abandoned due to public outcry over privacy concerns.
As Prof Sir Nigel Shadbolt, co-founder of the Open Data Institute, has said: “Data can be a powerful tool for good, but only if handled responsibly. Building and maintaining public trust is essential for any data initiative to succeed.”
Government Says Data Will Be Protected
In response to these challenges, the government has pledged to implement strict data protection measures. The bill is expected to outline clear guidelines on consent, data minimisation, and purpose limitation. Additionally, there will be provisions for individuals to access, correct, or delete their data, aligning with principles established under GDPR.
However, critics argue that replacing or modifying GDPR protections could weaken individual rights. The Information Commissioner’s Office (ICO), the UK’s data protection authority, has urged caution. In a statement last year, the ICO said, “Any changes to data protection laws must not dilute the rights of individuals or reduce the accountability of organisations.”
There is also the matter of international scrutiny to consider. As the UK diverges from EU data regulations, questions are being asked about the adequacy decisions that currently allow for the free flow of data between the UK and EU countries. Losing this status could have significant repercussions for businesses operating across borders.
Looking Ahead
The Data Use and Access Bill represents a significant step towards modernising the UK’s data infrastructure. While the potential benefits in terms of efficiency, economic growth, and improved public services are substantial, it seems clear that they must be carefully balanced against the imperative to protect individual privacy and maintain public trust. The coming months will be crucial as the bill progresses through Parliament and the national conversation unfolds.
What Does This Mean For Your Business?
As the Data Use and Access Bill stands poised for implementation, it signals a transformation across public services, private enterprise, and individual rights. For the government, this legislation offers a pathway to harness data as a tool for national progress. The projected £10 billion economic boost, alongside potential time savings within the NHS and police forces, embodies the bill’s intent to streamline services, foster efficiency, and support sectors such as artificial intelligence and biotechnology. For the government, success means creating a framework where data is a secure, accessible resource that fuels growth, with implications not only domestically but also in terms of the UK’s reputation on the international stage.
For the public, the stakes are particularly high. On one hand, individuals stand to benefit from improved public services, from faster healthcare diagnoses and treatments to enhanced law enforcement capabilities. But this convenience comes with concerns around privacy, choice, and transparency. Past data initiatives like Care.data have shown that public trust can falter without robust consent frameworks and clear assurances on data security. Therefore, establishing transparency and giving individuals genuine control over their information are pivotal if the public is to feel safeguarded rather than surveilled.
In healthcare, the NHS’s anticipated transformation via digital records and patient passports could make a tangible difference in patient care given the estimation that it could free up over 140,000 hours in staff time to improve responsiveness and patient outcomes. However, this potential relies on more than just technical feasibility. For example, some would say that significant investment in staff training and infrastructure, as well as strict privacy protocols, are needed to prevent data misuse. Partnerships with private tech companies, which bring efficiency but sometimes questionable records on transparency, will need to be tightly regulated to ensure that patient data is handled responsibly and ethically.
The police, meanwhile, are expected to gain valuable hours through automation, potentially redirecting 1.5 million hours away from administrative duties to active police work, which many would welcome. However, without careful oversight, automated data access could risk privacy rights and lead to unintentional overreach, a concern for civil liberties advocates who call for accountability mechanisms to match this increased efficiency.
Third-party companies, particularly in tech, are also significant stakeholders in this bill. The opportunity to innovate and participate in data-driven public projects is substantial, yet comes with the responsibility to uphold rigorous privacy standards. For UK businesses, especially those relying on cross-border data flows, alignment with international data regulations will be critical. Divergence from GDPR raises questions about future adequacy agreements with the EU, impacting data-dependent enterprises if this alignment weakens.
As this ambitious bill moves forward, its success depends not only on the economic and operational benefits it promises but also its commitment to protecting individual rights and maintaining public trust. Establishing transparent, secure data frameworks that place privacy and consent at the forefront will be essential. With appropriate safeguards, the Data Use and Access Bill could indeed lead the UK into a new era of responsible data innovation. Without them, however, it risks compromising the very rights it aims to modernise.
Tech Insight : What Is ‘Open Washing’ ?
With many tech giants now using ‘open’ as in ‘open source’ as a marketing term, we look at what the issues around this are, why it needs to be discouraged, and how this can be achieved.
What is Open Source?
To understand 'open washing', it's important first to understand what real open source is. Defined and stewarded by the Open Source Initiative (OSI), open source goes beyond simply sharing code. It means giving users the rights to view, modify, and redistribute the software without undue restrictions. According to the OSI's Open Source Definition, true open-source software adheres to ten principles, including free redistribution, access to source code, and the right to create derivative works. Open-source licences must also be non-discriminatory, ensuring that anyone, anywhere, can access and modify the software for any purpose.
These principles are meant to support innovation, community-driven improvement, and freedom from vendor lock-in, which is why open source has become so important in technology.
Not Everyone’s a Fan of Open Source
Despite the positive aspects of the principles of open source and its widespread use, not everyone is sold on it, with critics pointing to risks in security and sustainability. For example, while the transparency in open-source code may allow anyone to inspect for flaws, it also enables malicious actors to exploit vulnerabilities. In many cases, open-source projects tend to lack dedicated security teams, meaning patches can be slow to release, leaving users exposed. Financial viability is another issue; many open-source projects rely on volunteer developers or donations, making funding unpredictable and threatening long-term support and innovation. Without the financial backing of licensing fees that proprietary software can leverage, sustaining high-quality development and support over time is a challenge. Some critics also argue that while open source enables collaboration, it often lacks the reliability and consistent support associated with proprietary systems, creating potential pitfalls for users and developers alike.
So, What Is Open Washing?
‘Open washing’ is a term coined by internet policy researcher Michelle Thorne in 2009, referring to the practice of using the word ‘open’ as a marketing term so that companies appear open while maintaining control over their products. The term is, therefore, along the same lines as ‘greenwashing’, where companies claim to be environmentally friendly without taking substantive action. In open washing, companies use “open” branding to exploit open source’s positive connotations without meeting its core values of transparency and accessibility. This co-opting of the term undermines the foundational principles of openness, confusing consumers and diluting the legitimacy of the open-source community.
Why Has Open Washing Become More Common?
Open source’s transformation from a fringe movement to a widely adopted practice has also made it highly attractive to companies looking to capitalise on its reputation. In the early 2000s, companies were wary of open source. For example, Microsoft’s then-CEO Steve Ballmer even called Linux a “cancer” because of licence requirements that, he claimed, would obligate a company to open its entire codebase if it incorporated open-source elements. Today, however, open source is seen as innovative, ethical, and collaborative. It is endorsed by tech giants, governments, and educational institutions alike, with open-source projects like Linux, Kubernetes, and TensorFlow at the core of many enterprise systems.
The Appeal of Open Washing in AI and Big Tech
The stakes are especially high in the field of AI. Many AI models, particularly those from major tech corporations, operate under significant secrecy, which allows them to avoid scrutiny on issues ranging from ethical concerns to regulatory compliance. Open washing appears, therefore, to have become a convenient way for these companies to leverage the credibility of open source without actually relinquishing control or opening their models for true public or scientific examination.
For example, research by Andreas Liesenfeld and Mark Dingemanse at Radboud University surveyed 45 models marketed as open source and found that few actually meet the standards of true openness. The researchers found that only a handful (e.g. AllenAI’s OLMo or BigScience’s BloomZ) genuinely embody open principles.
In contrast, models from Google, Meta, and Microsoft often allow limited access to specific aspects, such as the AI model’s weights, but withhold full transparency into the training datasets or the processes behind fine-tuning – factors that are crucial for replicability and accountability.
Regulatory Incentives for Open Washing
The regulatory environment has also further incentivised open washing, particularly with the introduction of the EU’s AI Act, which came into force on 1 August 2024. This legislation, set to shape the governance of AI in Europe, includes special exemptions for open-source models. These exemptions mean that open-source AI products face fewer compliance requirements, especially regarding dataset transparency and ethical considerations. However, the EU has yet to define “open source” for AI models explicitly, leading to a gap that companies can exploit by labelling restricted models as open.
This regulatory grey area appears to have encouraged large corporations to stretch the definition of open source. By classifying their models as ‘open,’ they can benefit from reduced regulatory burdens while still keeping proprietary information hidden. This kind of open washing could, therefore, shield companies from scrutiny and enable them to bypass scientific and ethical standards that would otherwise apply.
Why Open Washing Undermines Openness and Transparency
The widespread practice of open washing could be seen as posing a risk to the integrity of the tech industry. For example, when companies brand restrictive products as open, they dilute the meaning of open source and weaken public trust. This practice could harm consumers and developers who assume these models are accessible for improvement, modification, or auditing. Without full transparency, end-users and even governments can’t fully grasp the capabilities and limitations of these tools, potentially leading to misuse and ethical oversights.
What Does the Open Source Initiative Say About It?
The Open Source Initiative (OSI) is a global nonprofit organisation that promotes and protects open-source software by maintaining the Open Source Definition, approving compliant licences, and advocating for open-source practices across industries. It is also, therefore, one of the most outspoken critics of open washing. For example, the OSI says that “misuse of ‘open’ erodes the fundamental trust” in open-source communities. According to the OSI, this dilution of open-source principles not only misleads the public but also endangers the health of the open-source ecosystem itself, as genuine open-source projects may struggle to gain traction when overshadowed by well-marketed, quasi-open products.
Composite Measures of Openness
Recognising that transparency in AI is multi-faceted, researchers have now proposed a composite measure of openness that includes access to datasets, training protocols, licensing clarity, and the model’s documentation. An example is the framework on openness in generative AI presented at this year’s ACM Conference on Fairness, Accountability, and Transparency (FAccT) by Liesenfeld and Dingemanse, the Radboud University researchers (based at its Centre for Language Studies) behind the survey mentioned above.
Their framework, with its 14 dimensions of openness, highlights how open-source claims cannot rest on a single factor, such as access to model weights or basic documentation. Instead, the researchers say these claims should involve comprehensive access across multiple domains, offering the public, scientists, and policymakers a way to meaningfully assess openness. The idea is that by developing and implementing composite standards, the tech community could, therefore, discourage open washing and promote genuine transparency.
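To illustrate the idea of a composite measure, here is a minimal sketch of how multiple openness dimensions might be rolled into a single score. The dimension names, the three-level scale, and the simple averaging are our own illustrative assumptions for the sake of the example, not the researchers' actual 14-dimension rubric.

```python
# Illustrative only: dimension names and the three-level scale are
# assumptions, not the Liesenfeld & Dingemanse framework's exact rubric.
LEVELS = {"closed": 0.0, "partial": 0.5, "open": 1.0}

def openness_score(assessment: dict[str, str]) -> float:
    """Average the per-dimension levels into one composite score in [0, 1]."""
    values = [LEVELS[level] for level in assessment.values()]
    return sum(values) / len(values)

# A model can publish its weights yet still score poorly overall
# if training data and documentation remain closed or partial.
model_assessment = {
    "source_code": "open",
    "model_weights": "open",
    "training_data": "closed",
    "documentation": "partial",
}

print(openness_score(model_assessment))  # → 0.625
```

The point of a composite score like this is exactly the one the researchers make: a single headline claim ("we released the weights") cannot dominate the assessment, because each withheld dimension drags the overall score down.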
Clearer Definitions and Standards for Open Source AI
The current ambiguity around open source, particularly in AI, highlights the need for clearer standards. To tackle open washing, the OSI has recently started working on a formal definition for open-source AI, collaborating with various stakeholders to address unique considerations, like access to training data and replicability. This evolving framework aims to set definitive standards for what constitutes open source in the AI landscape, with the goal of curbing open washing and providing a measure for consumers and regulators to gauge the authenticity of open-source claims.
The Role of Public Awareness and Advocacy
To counter open washing, it may be important for both consumers and developers to recognise and question the authenticity of open-source claims. Community-driven transparency tools, such as open-source databases and audit platforms, can play a role in empowering users to make informed decisions. As Dingemanse notes, “evidence-based openness assessment is essential for a healthy tech landscape.” Awareness campaigns and advocacy groups can also shed light on open washing practices, pressuring corporations to align with true open-source standards.
What Does This Mean for Your Business?
As technology continues to evolve and embed itself deeper into everyday life, the importance of distinguishing genuine openness from ‘open washing’ becomes ever more critical. Open-source software’s promise lies in its potential for transparency, innovation, and community-driven growth. However, when companies engage in open washing, they undermine these principles, eroding public trust and complicating the regulatory landscape. This practice not only weakens the authenticity of open-source initiatives but also risks obscuring the boundaries between proprietary and truly open technologies, leading to a diluted understanding of what “open” truly represents.
The movement to counter open washing is gaining momentum through research, community initiatives, and regulatory efforts, yet it ultimately depends on public awareness and industry accountability. Informed consumers and developers play a vital role in demanding transparency and authenticity from tech giants. With organisations like the Open Source Initiative working to refine definitions and create accountability standards, there is hope for a future where open-source principles are upheld, respected, and protected. Clear standards and genuine openness are essential to sustaining an ecosystem where “open” means more than marketing, symbolising a commitment to collaboration, integrity, and the shared progress of technology.
With clearer definitions, regulatory oversight, and a strong community voice, it appears possible for the tech industry to preserve the values of openness and transparency while guarding against open washing. By holding companies accountable to genuine open-source principles, users, developers, and policymakers could help ensure that “open” remains a meaningful and respected term in the technology landscape.
Tech News : Meta Hunting Celeb-Scams
Meta, the parent company of Facebook and Instagram, has revealed a new plan to combat the growing number of fake investment scheme celebrity scam ads by using facial recognition technology to weed them out.
What’s the Problem?
Fake ads featuring celebrities, known as “celeb-bait” scams by Meta, have become a plague on social media platforms in recent years, particularly ads promoting fraudulent investments, cryptocurrency schemes, or fake product endorsements. These scams use unauthorised images and fabricated comments from popular figures like Elon Musk, financial expert Martin Lewis, and Australian billionaire Gina Rinehart to lure users into clicking through to fraudulent websites, where they are often asked to share personal information or make payments under false pretences.
Also, deepfakes have been created using artificial intelligence to superimpose celebrities’ faces onto endorsement videos, producing highly realistic content that even seasoned internet users may find convincing. For example, Martin Lewis, founder of MoneySavingExpert and a frequent victim of such scams, recently told BBC Radio 4’s Today programme that he receives “countless” notifications about fake ads using his image, sharing that he feels “sick” over how they deceive unsuspecting audiences.
How Big Is the Problem?
The prevalence of scams featuring celebrity endorsements has skyrocketed, reflecting a global trend in online fraud. In the UK alone, the Financial Conduct Authority (FCA) reported that celebrity-related scams have doubled since 2021, with these frauds costing British consumers more than £100 million annually. According to a recent study by the Fraud Advisory Panel, financial scams leveraging celebrity endorsements rose by 30 per cent in 2022 alone, a trend fuelled by increasingly sophisticated deepfake technology that makes these scams more believable than ever.
Not Just the UK
The impact of celeb-bait scams is even more significant worldwide. In Australia, for instance, the Australian Competition and Consumer Commission (ACCC) reported that online scams, many featuring unauthorised celebrity endorsements, cost consumers an estimated AUD 2 billion in 2023. Social media platforms, particularly Facebook and Instagram, are frequent targets for these fraudulent ads, as scammers exploit their large audiences to reach thousands of potential victims within minutes.
The US has also seen similar issues, with the Federal Trade Commission (FTC) noting that more than $1 billion was lost to social media fraud in 2022 alone, a figure that has increased fivefold since 2019. Fake celebrity endorsements accounted for a large proportion of these losses, with reports indicating that over 40 per cent of people who experienced fraud in the past year encountered it on a social media platform.
Identify and Block Using Facial Recognition
In a Meta blog post about how the tech giant is testing new ways to combat scams on its platforms (Facebook and Instagram), and especially celeb-bait scams, Meta stated: “We’re testing the use of facial recognition technology.”
According to Meta, this new approach will identify and block such ads before they reach users, offering a stronger line of defence in the ongoing battle against online scammers. The approach represents one of Meta’s most proactive attempts yet to address a persistent problem that has impacted both high-profile public figures and unsuspecting social media users alike.
How Will Meta’s Facial Recognition Work?
Meta’s facial recognition ad-blocking approach will build on its existing AI ad review systems, which scan for potentially fraudulent or policy-violating ads, but will introduce an additional layer of facial recognition that will work to verify the identities of celebrities in the ads. If an ad appears suspicious and contains the image of a public figure, Meta’s system will compare the individual’s face in the ad to their official Facebook or Instagram profile pictures. When a match is confirmed, and the ad is verified as a scam, Meta’s technology will delete the ad in real-time.
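Conceptually, the matching step described above resembles a standard face-verification comparison. The sketch below is a hypothetical illustration of that pattern, assuming faces have already been converted to numeric embeddings; the embedding step, the threshold value, and the helper names are our assumptions, not Meta's actual pipeline.

```python
import math

# Hypothetical sketch: embeddings, threshold, and function names are
# illustrative assumptions, not details of Meta's real system.
THRESHOLD = 0.8  # assumed cosine-similarity cut-off for declaring a match

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def matches_public_figure(ad_face: list[float],
                          profile_faces: list[list[float]]) -> bool:
    """True if the face found in an ad matches any profile-photo embedding."""
    return any(cosine_similarity(ad_face, face) >= THRESHOLD
               for face in profile_faces)

# Toy embeddings: a near-duplicate face should match; an unrelated one should not.
profile = [[0.9, 0.1, 0.4], [0.8, 0.2, 0.5]]
print(matches_public_figure([0.88, 0.12, 0.42], profile))  # → True
print(matches_public_figure([0.0, 1.0, 0.0], profile))     # → False
```

In a real deployment the comparison would run over learned deep-network embeddings rather than toy vectors, and a confirmed match would then feed into the separate scam-classification step before any ad is removed.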
David Agranovich, Meta’s Director of Global Threat Disruption, emphasised the importance of this shift in a recent press briefing, saying: “This process is done in real-time and is faster and much more accurate than manual human reviews, so it allows us to apply our enforcement policies more quickly and protect people on our apps from scams and celebrities.” Agranovich noted that the system has yielded “promising results” in early tests with a select group of 50,000 celebrities and public figures, who will be able to opt out of this enrolment at any time.
According to Agranovich, the swift, automated nature of the system is critical to staying ahead of scammers, who often adapt their techniques as detection methods improve. The facial recognition system is not only intended to remove existing scam ads but to prevent them from spreading before they can reach a wide audience. Agranovich has highlighted how a rapid response of this kind is essential in a digital landscape where even a brief exposure to these ads can lead to significant financial losses for unsuspecting victims.
When?
This new measure is set to begin its rollout in December 2024.
Meta’s Track Record and Renewed Focus on Privacy
It’s worth noting, however, that Meta’s deployment of facial recognition technology marks a return to a tool it abandoned in 2021 amid concerns over privacy, accuracy, and potential biases in AI systems. Previously, Facebook used facial recognition for suggested photo tags, a feature that drew criticism and prompted the company to step back from the technology. This time, Meta says it has implemented additional safeguards to address such concerns, including the immediate deletion of facial data generated through the scam ad detection process.
Privacy
Privacy remains a contentious issue with facial recognition technology. Addressing privacy concerns over its new approach, Meta has stated that the data generated in making the comparison will be stored securely and encrypted, never becoming visible to other users or even to the account owner themselves. As Meta’s Agranovich says, “Any facial data generated from these ads is deleted immediately after the match test, regardless of the result.” Meta is keen to highlight how it intends to use the facial recognition technology purely for combating celeb-bait scams and aiding account recovery. In cases of account recovery, users will be asked to submit a video selfie, which Meta’s system will then compare to the profile image associated with the account. This verification method is expected to be faster and more secure than traditional identity confirmation methods, such as uploading an official ID document.
Scaling the Solution and Potential Regulatory Hurdles
Meta’s new system is set to be tested widely among a larger group of public figures in the coming months. Celebrities enrolled in the programme will receive in-app notifications and, if desired, can opt out at any time using the Accounts Centre. This large-scale trial comes as Meta faces increasing pressure from regulators, particularly in countries like Australia and the UK, where public outcry against celeb-bait scams has surged. The Australian Competition and Consumer Commission (ACCC) is currently engaged in a legal dispute with Meta over its perceived failure to stop scam ads, while mining magnate Andrew Forrest has also filed a lawsuit against the company for allegedly enabling fraudsters to misuse his image.
Martin Lewis Sued Facebook
In the UK, personal finance guru Martin Lewis previously sued Facebook for allowing fake ads featuring his image, ultimately reaching a settlement in which Meta agreed to fund a £3 million scam prevention initiative through Citizens Advice. Nevertheless, Lewis continues to push for stronger regulations, recently urging the UK government to empower Ofcom with additional regulatory authority to combat scam ads. “These scams are not only deceptive but damaging to the reputations of the individuals featured in them,” Lewis stated, highlighting the broader impact that celeb-bait scams have beyond financial loss.
Despite the New Tech, It’s Still ‘A Numbers Game’
Despite Meta’s new approach, the company still faces a huge challenge. For example, Agranovich has admitted that, despite robust safeguards, some scams will still evade detection, saying, “It’s a numbers game,” and that, “While we have automated detection systems that run against ad creative that’s being created, scam networks are highly motivated to keep throwing things at the wall in hopes that something gets through.” As scam networks find new ways to bypass detection, Meta acknowledges that the technology will require continuous adaptation and improvement to keep up.
What About Concerns Over AI and Bias?
In deploying facial recognition technology, Meta has also faced scrutiny over potential biases in AI and facial recognition systems, which have been shown to have variable accuracy across different demographics. The company claims that extensive testing and review have been undertaken to minimise such biases. Also, Meta has said it will not roll out the technology in regions where it lacks regulatory approval, such as in the UK and EU, indicating a cautious approach towards compliance and accountability.
Meta says it has “vetted these measures through our robust privacy and risk review process” and is committed to “sharing our approach to inform the industry’s defences against online scammers.” The company has also pledged to engage with regulators, policymakers, and industry experts to address ongoing challenges and align on best practices for facial recognition technology’s ethical use.
What Does This Mean for Your Business?
Meta’s latest move to integrate facial recognition technology into its anti-scam measures signals a significant shift toward tackling the complex world of celeb-bait scams. However, as Meta ventures back into using facial recognition, it’s clear the company must balance robust security with privacy, a concern that continues to shadow the rollout. While the technology holds promise, particularly in increasing detection speed and reducing the frequency of celebrity scams, it will undoubtedly be scrutinised by both users and regulators who have long questioned the use of facial recognition on such a broad scale.
For everyday Facebook and Instagram users, Meta’s new facial recognition feature could mean greater security and fewer encounters with fake ads that exploit public figures for fraudulent schemes. If successful, the initiative could lessen the risk of users falling victim to scams that impersonate well-known personalities to promote fake investments or products. The added layer of facial recognition should serve as a safeguard, reducing the frequency of these fake ads in users’ feeds and building a safer browsing experience across Meta’s platforms.
For celebrities and public figures, this development is a significant step towards reclaiming control over their public images, which are often misused without permission. The new system will help protect their reputations, preventing unauthorised use of their likenesses in fraudulent ads. Figures like Martin Lewis, who has been vocal about the damage these scams cause, could benefit as Meta finally implements more targeted measures to shield them from unauthorised endorsements.
The impact of this initiative may extend to legitimate advertisers as well. Meta’s crackdown on celeb-bait scams will likely improve ad integrity on its platforms, helping businesses that rely on Facebook and Instagram to reach audiences without the risk of association with deceptive content. A cleaner, more trustworthy advertising environment could enhance user trust and, in turn, benefit brands that promote genuine products and services. As Meta focuses on strengthening its ad review systems, legitimate advertisers may find their content reaching more engaged, security-conscious users who are less wary of the ads they encounter online. In this way, Meta’s facial recognition technology could not only shield users and celebrities from scams but also foster a more secure, credible marketplace for businesses across its platforms.