Featured Article : How New Data Laws Will Affect You
Here, we look at how the Data Use and Access Bill is poised to reshape how our personal data is handled in the UK and we also review the significant changes it will bring, with implications for the NHS and beyond.
What Is the Data Use and Access Bill?
Introduced as a cornerstone of the government’s plan to modernise data governance, the Data Use and Access Bill aims to overhaul existing data laws to improve economic growth, streamline public services, and enhance data security. Originating from a need to update the UK’s data legislation post-Brexit, the bill seeks to replace or amend elements of the EU’s General Data Protection Regulation (GDPR) to better suit national interests. The government claims that streamlining data usage and access could generate £10 billion of economic benefit. While the exact date of its enactment remains uncertain, the bill is expected to come into force within the coming year, subject to parliamentary approval.
How Will It Affect Our Data Handling?
At the heart of the bill lies a fundamental shift in how personal data will be managed, accessed, and shared across both public and private sectors. For individuals, this means their data could be used more extensively to improve services, but it also raises concerns about privacy and consent.
In the context of the NHS, the bill mandates that all IT systems adopt common data formats, enabling real-time sharing of patient information such as pre-existing conditions, appointments, and test results between NHS trusts, GPs, and ambulance services. The Department for Science, Innovation and Technology (DSIT) estimates this could free up 140,000 hours of NHS staff time annually. The government envisions that by breaking down data silos, patient care will become more efficient, reducing medical errors and eliminating the need for repeat tests.
What About Patient Passports?
Many people will have heard the term ‘patient passport’. As part of the UK’s NHS digital transformation strategy, this will be the centralised digital record that holds a patient’s comprehensive health information, including medical history, test results, and treatment notes. It’s hoped that this passport will allow healthcare providers to access a patient’s entire medical record seamlessly across different healthcare settings, whether at GP surgeries, hospitals, or through ambulance services. By consolidating data, the aim of patient passports is to reduce redundancies, prevent repeated tests, and improve continuity of care, ensuring clinicians can make quicker, well-informed decisions in critical moments.
Privacy Warnings
However, privacy advocates have said that increased data sharing must be balanced with safeguards, including protecting patient passports from third-party access. For example, one key question they’re asking is who exactly will have access to this sensitive health data? The potential involvement of multinational tech firms (known for less-than-stellar transparency records) adds to this concern. For example, the Good Law Project (a key privacy advocate), has raised concerns about the NHS’s partnership with private data firms, especially Palantir, for managing the Federated Data Platform (FDP). They argue that without sufficient scrutiny, sensitive patient data could be open to misuse or could be shared without adequate patient control. The group has highlighted potential issues with the National Data Opt-Out (NDOO), which allows patients to restrict their data from being used outside of their direct care but doesn’t yet fully cover the FDP, sparking concerns that the NDOO’s limitations might not uphold patients’ data rights effectively.
Beyond Healthcare – The Police
Beyond healthcare, the bill also proposes allowing police forces to automate certain manual data tasks. Currently, officers must log each instance they access personal information on the police database. Automating such steps could save an estimated 1.5 million hours per year, enabling officers to focus more on frontline duties. While increased efficiency is welcomed, civil liberties groups express concern over potential overreach and lack of oversight. Liberty, a UK human rights organisation, points out that “automation without accountability could lead to unchecked surveillance and data misuse.”
Infrastructure Too
The bill also introduces the creation of a digital “National Underground Asset Register,” requiring infrastructure firms to upload data on underground pipes and cables. This initiative aims to reduce the 600,000 accidental strikes on buried assets annually, minimising disruption from roadworks and construction projects.
A Digital Register of Births and Deaths
Another aspect of the bill that’s drawn attention is a plan for the creation of a digital register for births and deaths. This register is proposed to simplify how vital records are accessed and managed, with the goal of moving away from paper-based systems. Creating a digital registry should, it’s argued, make it easier for individuals and relevant authorities to access official records, such as birth and death certificates. This digital transformation will also align with broader efforts to streamline public records, similar to electronic registration in other sectors.
Consumer Data
The bill also discusses enhancing how consumer data (like energy usage or purchasing history) might be used to provide personalised services. For example, individuals could use data about their energy consumption to choose better tariffs, or purchasing data could inform tailored online shopping deals.
The Digital Revolution in the NHS
The digital revolution within the NHS is a critical component of the broader objectives outlined in the Data Use and Access Bill. The government’s new 10-year strategy for the NHS in England aims to transform how patients interact with the health service, mirroring the convenience and accessibility offered by modern banking apps.
Currently, the NHS App’s functionality is limited due to the fragmented nature of patient records, which are held separately by GPs and hospitals. The government’s push for a single, unified patient record (the patient passport) is intended to bridge this gap. As Health Secretary Wes Streeting has stated, “Moving from analogue to digital is essential if we are to create a more efficient, patient-centred NHS” (BBC, 2023).
This shift is anticipated to speed up patient care, reduce redundant testing, and minimise medical errors. For example, immediate access to a patient’s full medical history could enable faster diagnosis and treatment decisions, potentially saving lives.
Open to Abuse?
However, this digital transformation is not without controversy. Privacy campaigners, such as MedConfidential (a UK group advocating for privacy and transparency in health data usage), have expressed concerns that a single patient record / patient passport system could be “open to abuse” if not properly safeguarded. The involvement of private firms like Palantir, which has been awarded contracts to create databases joining up individual records, exacerbates these fears. As Sam Smith of MedConfidential says, “Handing over vast amounts of sensitive health data to companies with questionable track records poses significant risks to patient confidentiality”.
Too Hasty?
There has also been a public backlash against the perceived haste in implementing these changes without adequate consultation. A “national conversation” has been launched to gather public input, but critics argue that more needs to be done to ensure transparency and trust. As Rachel Power, Chief Executive of the Patients Association, said in a Patients Association Statement (2023): “For far too long, patients have felt their voices weren’t fully heard in shaping the health service. Any digital transformation must put patients at the heart of its evolution.”
The Backlash and Privacy Concerns
Despite assurances, scepticism remains. For example, the launch of the public engagement exercise was marred by inappropriate and irrelevant submissions, suggesting a disconnect between the government’s intentions and public perception. Also, reports about patient passports and usage of wearable technology (like Fitbits) to monitor health conditions remotely (to offer convenience and improved care) have also raised further privacy issues.
The British Medical Association (BMA) has expressed caution, stating that any move towards increased data sharing must be accompanied by “rigorous ethical standards and patient consent”. Critics fear that without proper oversight, personal health data could be exploited by private companies or misused by the state.
What About the Financial Aspects?
Many have highlighted that the financial aspects can’t be ignored. For example, Prof Nicola Ranger, General Secretary of the Royal College of Nursing, has said (in an RCN Press Release, 2023) that any future plans will require “new investment” to be successful and that, “Digital transformation is not just about technology; it’s about investing in people and processes to make it work effectively.”
Efficiency Gains
With figures in mind, as highlighted earlier, key examples of the efficiency savings that the proposed Data Use and Access Bill could bring by streamlining data use across sectors (especially in healthcare and law enforcement) include:
– An estimated £10 billion boost to the economy (UK government), primarily through simplifying data access and by reducing administrative inefficiencies and fostering innovation across sectors.
– Saving NHS staff 140,000 hours by standardising data formats across NHS trusts, hospitals, and GPs. This saved time could then be redirected to patient care, improving treatment speed and accessibility for patients.
– Automation of routine data tasks, such as logging access to personal data in police databases, could free up 1.5 million hours annually for the police. This reduction in administrative tasks could allow more time for frontline work, which could strengthen law enforcement efficiency and public safety.
Balancing Efficiency and Privacy
The implications of the Data Use and Access Bill extend beyond immediate efficiency gains. By fostering a more data-driven approach, the UK hopes to position itself as a leader in the global digital economy. The government asserts that modernising data laws will not only improve public services but also attract investment and innovation in sectors like artificial intelligence and biotechnology.
Public Trust Needed
However, the success of this ambitious agenda hinges on public trust. Past experiences with data initiatives, such as the failed Care.data programme in 2016, have left a legacy of scepticism. That programme sought to share GP records for research and planning but was abandoned due to public outcry over privacy concerns.
As Prof Sir Nigel Shadbolt, co-founder of the Open Data Institute, has said: “Data can be a powerful tool for good, but only if handled responsibly. Building and maintaining public trust is essential for any data initiative to succeed.”
Government Says Data Will Be Protected
In response to these challenges, the government has pledged to implement strict data protection measures. The bill is expected to outline clear guidelines on consent, data minimisation, and purpose limitation. Additionally, there will be provisions for individuals to access, correct, or delete their data, aligning with principles established under GDPR.
However, critics argue that replacing or modifying GDPR protections could weaken individual rights. The Information Commissioner’s Office (ICO), the UK’s data protection authority, has urged caution. In a statement last year, the ICO said, “Any changes to data protection laws must not dilute the rights of individuals or reduce the accountability of organisations.”
There is also the matter of international scrutiny to consider. As the UK diverges from EU data regulations, questions are being asked about the adequacy decisions that currently allow for the free flow of data between the UK and EU countries. Losing this status could have significant repercussions for businesses operating across borders.
Looking Ahead
The Data Use and Access Bill represents a significant step towards modernising the UK’s data infrastructure. While the potential benefits in terms of efficiency, economic growth, and improved public services are substantial, it seems clear that they must be carefully balanced against the imperative to protect individual privacy and maintain public trust. The coming months will be crucial as the bill progresses through Parliament and the national conversation unfolds.
What Does This Mean For Your Business?
As the Data Use and Access Bill stands poised for implementation, it signals a transformation across public services, private enterprise, and individual rights. For the government, this legislation offers a pathway to harness data as a tool for national progress. The projected £10 billion economic boost, alongside potential time savings within the NHS and police forces, embodies the bill’s intent to streamline services, foster efficiency, and support sectors such as artificial intelligence and biotechnology. For the government, success means creating a framework where data is a secure, accessible resource that fuels growth, with implications not only domestically but also in terms of the UK’s reputation on the international stage.
For the public, the stakes are particularly high. On one hand, individuals stand to benefit from improved public services, from faster healthcare diagnoses and treatments to enhanced law enforcement capabilities. But this convenience comes with concerns around privacy, choice, and transparency. Past data initiatives like Care.data have shown that public trust can falter without robust consent frameworks and clear assurances on data security. Therefore, establishing transparency and giving individuals genuine control over their information are pivotal if the public is to feel safeguarded rather than surveilled.
In healthcare, the NHS’s anticipated transformation via digital records and patient passports could make a tangible difference in patient care given the estimation that it could free up over 140,000 hours in staff time to improve responsiveness and patient outcomes. However, this potential relies on more than just technical feasibility. For example, some would say that significant investment in staff training and infrastructure, as well as strict privacy protocols, are needed to prevent data misuse. Partnerships with private tech companies, which bring efficiency but sometimes questionable records on transparency, will need to be tightly regulated to ensure that patient data is handled responsibly and ethically.
The police, meanwhile, are expected to gain valuable hours through automation, potentially redirecting 1.5 million hours away from administrative duties to active police work, which many would welcome. However, without careful oversight, automated data access could risk privacy rights and lead to unintentional overreach, a concern for civil liberties advocates who call for accountability mechanisms to match this increased efficiency.
Third-party companies, particularly in tech, are also significant stakeholders in this bill. The opportunity to innovate and participate in data-driven public projects is substantial, yet comes with the responsibility to uphold rigorous privacy standards. For UK businesses, especially those relying on cross-border data flows, alignment with international data regulations will be critical. Divergence from GDPR raises questions about future adequacy agreements with the EU, impacting data-dependent enterprises if this alignment weakens.
As this ambitious bill moves forward, its success depends not only on the economic and operational benefits it promises but also its commitment to protecting individual rights and maintaining public trust. Establishing transparent, secure data frameworks that place privacy and consent at the forefront will be essential. With appropriate safeguards, the Data Use and Access Bill could indeed lead the UK into a new era of responsible data innovation. Without them, however, it risks compromising the very rights it aims to modernise.
Tech Insight : What Is ‘Open Washing’ ?
With many tech giants now using ‘open’ as in ‘open source’ as a marketing term, we look at what the issues around this are, why it needs to be discouraged, and how this can be achieved.
What is Open Source?
To understand the question about ‘open washing’, it’s important to understand what real open source is. Defined and stewarded by the Open Source Initiative (OSI), open source goes beyond simply sharing code. In fact, it means giving users the rights to view, modify, and redistribute the software without undue restrictions. According to the OSI’s Open Source Definition, true open-source software adheres to ten principles, including free redistribution, access to source code, and the right to create derivative works. Open-source licences must also be non-discriminatory, ensuring that anyone, anywhere, can access and modify the software for any purpose.
These principles are meant to support innovation, community-driven improvement, and freedom from vendor lock-in, which is why open source has become so important in technology.
Not Everyone’s a Fan of Open Source
Despite the positive aspects of the principles of open source and its widespread use, not everyone is sold on it, with critics pointing to risks in security and sustainability. For example, while the transparency in open-source code may allow anyone to inspect for flaws, it also enables malicious actors to exploit vulnerabilities. In many cases, open-source projects tend to lack dedicated security teams, meaning patches can be slow to release, leaving users exposed. Financial viability is another issue; many open-source projects rely on volunteer developers or donations, making funding unpredictable and threatening long-term support and innovation. Without the financial backing of licensing fees that proprietary software can leverage, sustaining high-quality development and support over time is a challenge. Some critics also argue that while open source enables collaboration, it often lacks the reliability and consistent support associated with proprietary systems, creating potential pitfalls for users and developers alike.
So, What Is Open Washing?
‘Open washing’ is a term coined by internet policy researcher Michelle Thorne in 2009, referring to where using the word ‘open as a marketing term allows companies to appear open while maintaining control over their products. The term open washing is, therefore, along the same lines as the term ‘greenwashing,’ where companies claim to be environmentally friendly without substantive action. In open washing, companies appear to use “open” branding to exploit open source’s positive connotations without meeting its core values of transparency and accessibility. This co-opting of the term, therefore, undermines the foundational principles of openness, confusing consumers and diluting the legitimacy of the open-source community (open washing is a negative term).
Why Has Open Washing Become More Common?
Open source’s transformation from a fringe movement to a widely adopted practice has also made it highly attractive to companies looking to capitalise on its reputation. In the early 2000s, companies were wary of open source. For example, Microsoft’s then-CEO Steve Ballmer even called Linux a “cancer” due to the licence requirements that would obligate them to make their entire codebase open if it incorporated open-source elements. Today, however, open source is seen as innovative, ethical, and collaborative. It is endorsed by tech giants, governments, and educational institutions alike, with open-source projects like Linux, Kubernetes, and TensorFlow at the core of many enterprise systems.
The Appeal of Open Washing in AI and Big Tech
The stakes are especially high in the field of AI. Many AI models, particularly those from major tech corporations, operate under significant secrecy, which allows them to avoid scrutiny on issues ranging from ethical concerns to regulatory compliance. Open washing appears, therefore, to have become a convenient way for these companies to leverage the credibility of open source without actually relinquishing control or opening their models for true public or scientific examination.
For example, research by Andreas Liesenfeld and Mark Dingemanse at Radboud University surveyed 45 models marketed as open source and found that few actually meet the standards of true openness. The researchers found that only a handful (e.g. AllenAI’s OLMo or BigScience’s BloomZ) genuinely embody open principles.
In contrast, models from Google, Meta, and Microsoft often allow limited access to specific aspects, such as the AI model’s weights, but withhold full transparency into the training datasets or the processes behind fine-tuning – factors that are crucial for replicability and accountability.
Regulatory Incentives for Open Washing
The regulatory environment has also further incentivised open washing, particularly with the introduction of the EU’s AI Act, which came into force on 1 August 2024. This legislation, set to shape the governance of AI in Europe, includes special exemptions for open-source models. These exemptions mean that open-source AI products face fewer compliance requirements, especially regarding dataset transparency and ethical considerations. However, the EU has yet to define “open source” for AI models explicitly, leading to a gap that companies can exploit by labelling restricted models as open.
This regulatory grey area appears to have encouraged large corporations to stretch the definition of open source. By classifying their models as ‘open,’ they can benefit from reduced regulatory burdens while still keeping proprietary information hidden. This kind of open washing could, therefore, shield companies from scrutiny and enable them to bypass scientific and ethical standards that would otherwise apply.
Why Open Washing Undermines Openness and Transparency
The widespread practice of open washing could be seen as posing a risk to the integrity of the tech industry. For example, when companies brand restrictive products as open, they dilute the meaning of open source and weaken public trust. This practice could harm consumers and developers who assume these models are accessible for improvement, modification, or auditing. Without full transparency, end-users and even governments can’t fully grasp the capabilities and limitations of these tools, potentially leading to misuse and ethical oversights.
What Does the Open Source Initiative Say About It?
The Open Source Initiative (OSI) is a global nonprofit organisation that promotes and protects open-source software by maintaining the Open Source Definition, approving compliant licences, and advocating for open-source practices across industries. It is also, therefore, one of the most outspoken critics of open washing. For example, the OSI says that “misuse of ‘open’ erodes the fundamental trust” in open-source communities. According to the OSI, this dilution of open-source principles not only misleads the public but also endangers the health of the open-source ecosystem itself, as genuine open-source projects may struggle to gain traction when overshadowed by well-marketed, quasi-open products.
Composite Measures of Openness
Recognising that transparency in AI is multi-faceted, researchers have now proposed a composite measure of openness that includes access to datasets, training protocols, licensing clarity, and the model’s documentation. An example of this composite measure is a framework on openness in generative AI, presented at this year’s ACM Conference on Fairness, Accountability, and Transparency (FAccT), by Andreas Liesenfeld and Mark Dingemanse, researchers from Radboud University’s Centre for Language Studies in the Netherlands, specialising in language and AI studies.
Their framework, with its 14 dimensions of openness, highlights how open-source claims cannot rest on a single factor, such as access to model weights or basic documentation. Instead, the researchers say these claims should involve comprehensive access across multiple domains, offering the public, scientists, and policymakers a way to meaningfully assess openness. The idea is that by developing and implementing composite standards, the tech community could, therefore, discourage open washing and promote genuine transparency.
Clearer Definitions and Standards for Open Source AI
The current ambiguity around open source, particularly in AI, highlights the need for clearer standards. To tackle open washing, the OSI has recently started working on a formal definition for open-source AI, collaborating with various stakeholders to address unique considerations, like access to training data and replicability. This evolving framework aims to set definitive standards for what constitutes open source in the AI landscape, with the goal of curbing open washing and providing a measure for consumers and regulators to gauge the authenticity of open-source claims.
The Role of Public Awareness and Advocacy
To counter open washing, it may be important for both consumers and developers to recognise and question the authenticity of open-source claims. Community-driven transparency tools, such as open-source databases and audit platforms, can play a role in empowering users to make informed decisions. As Dingemanse notes, “evidence-based openness assessment is essential for a healthy tech landscape.” Awareness campaigns and advocacy groups can also shed light on open washing practices, pressuring corporations to align with true open-source standards.
What Does This Mean for Your Business?
As technology continues to evolve and embed itself deeper into everyday life, the importance of distinguishing genuine openness from ‘open washing’ becomes ever more critical. Open-source software’s promise lies in its potential for transparency, innovation, and community-driven growth. However, when companies engage in open washing, they undermine these principles, eroding public trust and complicating the regulatory landscape. This practice not only weakens the authenticity of open-source initiatives but also risks obscuring the boundaries between proprietary and truly open technologies, leading to a diluted understanding of what “open” truly represents.
The movement to counter open washing is gaining momentum through research, community initiatives, and regulatory efforts, yet it ultimately depends on public awareness and industry accountability. Informed consumers and developers play a vital role in demanding transparency and authenticity from tech giants. With organisations like the Open Source Initiative working to refine definitions and create accountability standards, there is hope for a future where open-source principles are upheld, respected, and protected. Clear standards and genuine openness are essential to sustaining an ecosystem where “open” means more than marketing, symbolising a commitment to collaboration, integrity, and the shared progress of technology.
With clearer definitions, regulatory oversight, and a strong community voice, it appears possible for the tech industry to preserve the values of openness and transparency while guarding against open washing. By holding companies accountable to genuine open-source principles, users, developers, and policymakers could help ensure that “open” remains a meaningful and respected term in the technology landscape.
Tech News : Meta Hunting Celeb-Scams
Meta, the parent company of Facebook and Instagram, has revealed a new plan to combat the growing number of fake investment scheme celebrity scam ads by using facial recognition technology to weed them out.
What’s the Problem?
Fake ads featuring celebrities, known as “celeb-bait” scams by Meta, have become a plague on social media platforms in recent years, particularly ads promoting fraudulent investments, cryptocurrency schemes, or fake product endorsements. These scams use unauthorised images and fabricated comments from popular figures like Elon Musk, financial expert Martin Lewis, and Australian billionaire Gina Rinehart to lure users into clicking through to fraudulent websites, where they are often asked to share personal information or make payments under false pretences.
Also, deepfakes have been created using artificial intelligence to superimpose celebrities’ faces onto endorsement videos, producing highly realistic content that even seasoned internet users may find convincing. For example, Martin Lewis, founder of MoneySavingExpert and a frequent victim of such scams, recently told BBC Radio 4’s Today programme that he receives “countless” notifications about fake ads using his image, sharing that he feels “sick” over how they deceive unsuspecting audiences.
How Big Is the Problem?
The prevalence of scams featuring celebrity endorsements has skyrocketed, reflecting a global trend in online fraud. In the UK alone, the Financial Conduct Authority (FCA) reported that celebrity-related scams have doubled since 2021, with these frauds costing British consumers more than £100 million annually. According to a recent study by the Fraud Advisory Panel, financial scams leveraging celebrity endorsements rose by 30 per cent in 2022 alone, a trend fuelled by increasingly sophisticated deepfake technology that makes these scams more believable than ever.
Not Just the UK
The impact of celeb-bait scams is even more significant worldwide. In Australia, for instance, the Australian Competition and Consumer Commission (ACCC) reported that online scams, many featuring unauthorised celebrity endorsements, cost consumers an estimated AUD 2 billion in 2023. Social media platforms, particularly Facebook and Instagram, are frequent targets for these fraudulent ads, as scammers exploit their large audiences to reach thousands of potential victims within minutes.
The US has also seen similar issues, with the Federal Trade Commission (FTC) noting that more than $1 billion was lost to social media fraud in 2022 alone, a figure that has increased fivefold since 2019. Fake celebrity endorsements accounted for a large proportion of these losses, with reports indicating that over 40 per cent of people who experienced fraud in the past year encountered it on a social media platform.
Identify and Block Using Facial Recognition
In a Meta blog post about how the tech giant is testing new ways to combat scams on its platforms (Facebook and Instagram), and especially celeb-bait scams, Meta stated: “We’re testing the use of facial recognition technology.”
According to Meta, this new approach will identify and block such ads before they reach users, offering a stronger line of defence in the ongoing battle against online scammers. The approach represents one of Meta’s most proactive attempts yet to address a persistent problem that has impacted both high-profile public figures and unsuspecting social media users alike.
How Will Meta’s Facial Recognition Work?
Meta’s facial recognition ad-blocking approach will build on its existing AI ad review systems, which scan for potentially fraudulent or policy-violating ads, but will introduce an additional layer of facial recognition that will work to verify the identities of celebrities in the ads. If an ad appears suspicious and contains the image of a public figure, Meta’s system will compare the individual’s face in the ad to their official Facebook or Instagram profile pictures. When a match is confirmed, and the ad is verified as a scam, Meta’s technology will delete the ad in real-time.
David Agranovich, Meta’s Director of Global Threat Disruption, emphasised the importance of this shift in a recent press briefing, saying: “This process is done in real-time and is faster and much more accurate than manual human reviews, so it allows us to apply our enforcement policies more quickly and protect people on our apps from scams and celebrities.” Agranovich noted that the system has yielded “promising results” in early tests with a select group of 50,000 celebrities and public figures, who will be able to opt out of this enrolment at any time.
According to Agranovich, the swift, automated nature of the system is critical to staying ahead of scammers, who often adapt their techniques as detection methods improve. The facial recognition system is not only intended to remove existing scam ads but to prevent them from spreading before they can reach a wide audience. Agranovich has highlighted how a rapid response of this kind is essential in a digital landscape where even a brief exposure to these ads can lead to significant financial losses for unsuspecting victims.
When?
This new measure is set to begin its rollout in December 2024.
Meta’s Track Record and Renewed Focus on Privacy
It’s worth noting, however, that Meta’s deployment of facial recognition technology marks a return to a tool it abandoned in 2021 amid concerns over privacy, accuracy, and potential biases in AI systems. Previously, Facebook used facial recognition for suggested photo tags, a feature that drew criticism and prompted the company to step back from the technology. This time, Meta says it has implemented additional safeguards to address such concerns, including the immediate deletion of facial data generated through the scam ad detection process.
Privacy
Privacy remains a contentious issue with facial recognition technology. Addressing privacy concerns over its new approach, Meta has stated that the data generated in making the comparison will be stored securely and encrypted, never becoming visible to other users or even to the account owner themselves. As Meta’s Agranovich says, “Any facial data generated from these ads is deleted immediately after the match test, regardless of the result.” Meta is keen to highlight how it intends to use the facial recognition technology purely for combating celeb-bait scams and aiding account recovery. In cases of account recovery, users will be asked to submit a video selfie, which Meta’s system will then compare to the profile image associated with the account. This verification method is expected to be faster and more secure than traditional identity confirmation methods, such as uploading an official ID document.
Scaling the Solution and Potential Regulatory Hurdles
Meta’s new system is set to be tested widely among a larger group of public figures in the coming months. Celebrities enrolled in the programme will receive in-app notifications and, if desired, can opt out at any time using the Accounts Centre. This large-scale trial comes as Meta faces increasing pressure from regulators, particularly in countries like Australia and the UK, where public outcry against celeb-bait scams has surged. The Australian Competition and Consumer Commission (ACCC) is currently engaged in a legal dispute with Meta over its perceived failure to stop scam ads, while mining magnate Andrew Forrest has also filed a lawsuit against the company for allegedly enabling fraudsters to misuse his image.
Martin Lewis Sued Facebook
In the UK, personal finance guru Martin Lewis previously sued Facebook for allowing fake ads featuring his image, ultimately reaching a settlement in which Meta agreed to fund a £3 million scam prevention initiative through Citizens Advice. Nevertheless, Lewis continues to push for stronger regulations, recently urging the UK government to empower Ofcom with additional regulatory authority to combat scam ads. “These scams are not only deceptive but damaging to the reputations of the individuals featured in them,” Lewis stated, highlighting the broader impact that celeb-bait scams have beyond financial loss.
Despite the New Tech, It’s Still ‘A Numbers Game’
Despite Meta’s new approach, the company still faces a huge challenge. For example, Agranovich has admitted that, despite robust safeguards, some scams will still evade detection, saying, “It’s a numbers game,” and that, “While we have automated detection systems that run against ad creative that’s being created, scam networks are highly motivated to keep throwing things at the wall in hopes that something gets through.” As scam networks find new ways to bypass detection, Meta acknowledges that the technology will require continuous adaptation and improvement to keep up.
What About Concerns Over AI and Bias?
In deploying facial recognition technology, Meta has also faced scrutiny over potential biases in AI and facial recognition systems, which have been shown to have variable accuracy across different demographics. The company claims that extensive testing and review have been undertaken to minimise such biases. Also, Meta has said it will not roll out the technology in regions where it lacks regulatory approval, such as in the UK and EU, indicating a cautious approach towards compliance and accountability.
Meta says it has “vetted these measures through our robust privacy and risk review process” and is committed to “sharing our approach to inform the industry’s defences against online scammers.” The company has also pledged to engage with regulators, policymakers, and industry experts to address ongoing challenges and align on best practices for facial recognition technology’s ethical use.
What Does This Mean for Your Business?
Meta’s latest move to integrate facial recognition technology into its anti-scam measures signals a significant shift toward tackling the complex world of celeb-bait scams. However, as Meta ventures back into using facial recognition, it’s clear the company must balance robust security with privacy, a concern that continues to shadow the rollout. While the technology holds promise, particularly in increasing detection speed and reducing the frequency of celebrity scams, it will undoubtedly be scrutinised by both users and regulators who have long questioned the use of facial recognition on such a broad scale.
For everyday Facebook and Instagram users, Meta’s new facial recognition feature could mean greater security and fewer encounters with fake ads that exploit public figures for fraudulent schemes. If successful, the initiative could lessen the risk of users falling victim to scams that impersonate well-known personalities to promote fake investments or products. The added layer of facial recognition should serve as a safeguard, reducing the frequency of these fake ads in users’ feeds and building a safer browsing experience across Meta’s platforms.
For celebrities and public figures, this development is a significant step towards reclaiming control over their public images, which are often misused without permission. The new system will help protect their reputations, preventing unauthorised use of their likenesses in fraudulent ads. Figures like Martin Lewis, who has been vocal about the damage these scams cause, could benefit as Meta finally implements more targeted measures to shield them from unauthorised endorsements.
The impact of this initiative may extend to legitimate advertisers as well. Meta’s crackdown on celeb-bait scams will likely improve ad integrity on its platforms, helping businesses that rely on Facebook and Instagram to reach audiences without the risk of association with deceptive content. A cleaner, more trustworthy advertising environment could enhance user trust and, in turn, benefit brands that promote genuine products and services. As Meta focuses on strengthening its ad review systems, legitimate advertisers may find their content reaching more engaged, security-conscious users who are less wary of the ads they encounter online. In this way, Meta’s facial recognition technology could not only shield users and celebrities from scams but also foster a more secure, credible marketplace for businesses across its platforms.
Tech News : ‘Human-Like’ Desktop Navigation Capability
Anthropic has unveiled an upgraded AI assistant, Claude 3.5 Sonnet, that can understand and interact with any desktop application in a human-like way, perhaps marking a new era of cross-platform automation and efficiency for businesses.
Anthropic?
Anthropic is an AI safety and research company founded in 2021 by former OpenAI researchers, including siblings Dario and Daniela Amodei. Based in San Francisco, the company focuses on developing AI systems that align with human values and safety principles.
Claude 3.5 Sonnet Can Interact With Your Computer Like A Human
Anthropic hopes its newly upgraded Claude 3.5 Sonnet is a substantial improvement over its predecessor and boasts that the new version has enhanced capabilities in coding and tool use. Most notably, it introduces a revolutionary feature now in public beta computer use. This feature enables the AI to actually interact with computer interfaces much like a human user, e.g. viewing screens, moving cursors, clicking buttons, and typing text.
As Anthropic says on its website: “Claude 3.5 Sonnet is the first frontier AI model to offer computer use in public beta”. However, the company also admits that, “At this stage, it is still experimental – at times cumbersome and error-prone. We’re releasing computer use early for feedback from developers and expect the capability to improve rapidly over time.”
What Can Claude 3.5 Sonnet Do?
With its new computer use feature, Claude 3.5 Sonnet can essentially automate tasks across various software applications without the need for specialised integrations or APIs. Developers can direct Claude to perform actions by providing instructions that the AI translates into computer commands. For example:
– Automating repetitive processes. Claude can handle mundane tasks such as data entry, form filling, or scheduling, freeing up human resources for more strategic activities.
– Software development and testing. Companies like Replit, for example, are using Claude to build features that evaluate apps during development, enhancing productivity and code quality. As Anthropic says,“Replit is using Claude 3.5 Sonnet’s capabilities with computer use and UI navigation to develop a key feature that evaluates apps as they’re being built for their Replit Agent product”.
– Complex multi-step tasks. The AI can carry out operations that require dozens or even hundreds of steps, thereby streamlining workflows that would otherwise be time-consuming.
Benefits for Business Users
The introduction of Claude 3.5 Sonnet, therefore, appears to offer several potential advantages for businesses, such as:
– Increased efficiency. Automating repetitive and complex tasks reduces operational bottlenecks.
– Cost savings. By handling tasks traditionally performed by humans, businesses can lower labour costs.
– Enhanced productivity. Employees can focus on higher-level functions that require human judgement and creativity.
– Scalability. The AI can handle increasing workloads without the need for proportional increases in staff.
Examples of Business Applications
Examples of how companies across various industries are exploring Claude’s potential include:
– Asana is using it to enhance project management by automating task assignments and updates.
– Canva is using Claude to assist in the designing and editing process, making creative tools more accessible.
– DoorDash (the US-based on-demand food delivery service) is using it to streamline logistics and order management through automated processes.
The Browser Company (a New York-based technology startup with its ‘Arc’ browser) is using Clause to automate web-based workflows to improve user experience.
How Good Is It?
Claude 3.5 Sonnet is reported to have demonstrated impressive results on industry benchmarks, showcasing its advanced capabilities in coding and tool usage. In the realm of coding excellence, the model improved its performance on the SWE-bench Verified benchmark from 33.4 per cent to an impressive 49.0 per cent. This leap not only marks a significant advancement over its predecessor but also surpasses all other publicly available models. Such a performance appears to show superior coding skills and its potential to handle complex programming tasks effectively.
In terms of tool use proficiency, Claude 3.5 Sonnet enhanced its scores on the TAU-bench, an agentic tool use benchmark, from 62.6 per cent to 69.2 per cent in the retail domain. This improvement appears to show the model’s increased ability to utilise tools efficiently within specific industry contexts, thereby reflecting a good level of adaptability and practical utility in real-world scenarios.
Also, GitLab tested the model for DevSecOps tasks (integrating security into software development and operations tasks) and observed notable enhancements. “GitLab found it delivered stronger reasoning—up to 10 per cent across use cases—with no added latency,” noted Anthropic. This improvement without compromising speed appears to make Claude 3.5 Sonnet a good candidate for things like powering multi-step software development processes, offering both efficiency and high-level reasoning skills.
Claude 3.5 Haiku Too
In addition to Claude 3.5 Sonnet, Anthropic says it’s set to release Claude 3.5 Haiku later this month. This AI model matches the performance of Claude 3 Opus, the company’s previous largest model, but offers similar speed and cost to the earlier Haiku version.
Claude 3.5 Haiku is particularly adept at coding tasks, scoring 40.6 per cent on SWE-bench Verified (a benchmark for coding accuracy and efficiency). Its low latency and improved instruction-following appear to make it ideal for user-facing products, specialised sub-agent tasks, and handling large volumes of personalised data.
Safety Measures and Concerns
However, while the capabilities of Claude 3.5 Sonnet may be impressive, there are some valid concerns regarding potential misuse. For example:
– The risk of malicious activities. The AI’s ability to interact with desktop applications could be exploited for harmful purposes if not properly secured.
– Some error-prone behaviour. Anthropic acknowledges that the computer use feature is still experimental and may be cumbersome or inaccurate at times.
– Data privacy. The AI’s interaction with sensitive data requires stringent security protocols to prevent breaches.
Addressing These Concerns
Anthropic has, however, taken a proactive approach to trying to address potential safety concerns surrounding Claude 3.5 Sonnet. For example, the model underwent joint pre-deployment testing by the US AI Safety Institute and the UK Safety Institute, ensuring that safety evaluations were thorough and rigorous before release.
To manage risks responsibly, Anthropic follows the ASL-2 Standard under its Responsible Scaling Policy, which aims to mitigate any catastrophic risks associated with advanced AI systems. This policy reflects some commitment to developing AI that aligns with safe and responsible practices.
Also, Anthropic has developed new classifiers to detect potentially harmful uses of the model’s computer interaction capabilities. These classifiers are designed to identify and prevent misuse, such as spam, misinformation, or fraud, ensuring that Claude’s actions remain aligned with safe and ethical standards.
As Anthropic says, “Because computer use may provide a new vector for more familiar threats such as spam, misinformation, or fraud, we’re taking a proactive approach to promote its safe deployment.”
Competitors
With the AI landscape evolving rapidly, it’s not surprising that there are several key players developing similar technologies. For example, OpenAI is working on AI agents capable of automating software tasks, with their GPT-4 model being a notable competitor. Also, Microsoft is introducing tools for building AI agents that can perform a variety of tasks across software platforms. Salesforce too is developing AI agent technology aimed at transforming customer relationship management and Amazon’s Adept is focusing on training models to navigate software and websites.
Anthropic, however, is hoping to distinguish itself through its commitment to safety and alignment with human values, aiming to balance innovation with responsibility.
What Does This Mean For Your Business?
For Anthropic, the launch of an improved Claude 3.5 Sonnet marks a defining moment, potentially establishing the company as a leader in AI-driven business automation. By offering Claude’s computer interaction feature in public beta, Anthropic is positioning itself as a pioneer in cross-platform automation, a niche not yet fully realised by its competitors. This strategic move could strengthen its standing in an increasingly competitive field, as Anthropic’s focus on safety and ethical standards differentiates it from the likes of OpenAI and Microsoft. The enhanced capabilities and unique safety protocols built into Claude 3.5 Sonnet could provide Anthropic with a distinct advantage, particularly in appealing to businesses that are really set on only using the most secure and responsible AI applications. This focus may allow Anthropic to capture a segment of the market that is as concerned with AI safety as it is with productivity gains.
For competitors, Claude 3.5 Sonnet’s public beta launch raises the stakes. Companies like OpenAI, Microsoft, and Salesforce, which are also investing in AI agents for automation, will need to keep pace as Anthropic introduces new, tangible functionality that places AI capabilities directly onto the user’s desktop environment. These competitors may find themselves under increased pressure to accelerate their own development timelines, incorporate safety features, and refine their offerings to ensure they remain competitive with Claude’s human-like computer interaction abilities. For Adept (now part of Amazon), and other companies working to develop similar cross-platform tools, Anthropic’s progress may indicate the importance of safety features and real-world usability in building industry confidence.
The introduction of Claude 3.5 Sonnet offers substantial potential benefits for UK businesses choosing to incorporate this AI assistant into their operations. For organisations across sectors such as finance, healthcare, and logistics, Claude’s ability to handle repetitive tasks, complex multi-step workflows, and even creative processes could be transformative. By automating routine activities, such as data entry, scheduling, and system navigation, Claude 3.5 Sonnet could drive significant efficiency gains, freeing employees to focus on more strategic or human-centric tasks that require critical thinking and nuanced judgement. For UK businesses, which are often under pressure to maximise productivity while controlling operational costs, Claude could streamline workflows, reduce human error, and speed up project timelines, all while potentially lowering staffing costs.
Also, the scalability of Claude 3.5 Sonnet could be particularly beneficial for SMEs in the UK, which may lack the resources for extensive manual operations. By leveraging Claude’s automation capabilities, these businesses could more easily expand their services or manage growing workloads without the need for proportionate increases in staffing. The AI’s coding and tool-use improvements may also mean that it can assist developers, customer service representatives, and project managers alike, helping businesses across industries achieve smoother, more integrated operations.
For businesses that advertise heavily on platforms or deal with customer service interfaces, Claude’s ability to operate across desktop applications could allow for quicker, more personalised responses to customer inquiries, making customer interactions more efficient. Overall, the arrival of Claude 3.5 Sonnet may empower UK companies to enhance operational efficiency, improve service quality, and navigate growth challenges with greater agility. By setting a high bar for safety and adaptability, Claude 3.5 Sonnet appears to represent not only a new technological asset for businesses but also a step forward in the adoption of ethical, practical AI in commercial settings.
An Apple Byte : Trump Says Apple CEO Called With EU Concerns
Former US President Donald Trump has claimed that Apple CEO Tim Cook recently called him to voice frustrations over financial penalties imposed by the European Union (EU) on the tech giant. According to Mr Trump, Cook is alarmed by the EU’s regulatory approach, including a significant tax penalty and other fines affecting Apple’s operations within the bloc.
The claim, made during Mr Trump’s appearance on the PBD Podcast, follows a contentious period for Apple and other tech companies under the EU’s stringent competition and digital service rules. For example, in September, Apple lost a significant legal battle over €13bn (£11bn) in unpaid taxes, with the EU’s highest court upholding the European Commission’s accusation of unlawful tax benefits provided by Ireland. Cook, as Mr Trump conveyed, criticised these findings as politically motivated.
Mr Trump recounted that Cook specifically highlighted a recent $15bn fine, with additional charges reportedly raising the total to around $17-18bn. This includes a €1.8bn fine issued earlier this year over alleged breaches in music streaming competition, favouring rival services like Spotify. Cook reportedly expressed frustration over the EU using these fines as revenue, accusing the bloc of building an “enterprise” out of antitrust penalties.
The European Commission, however, has defended its approach, stating that fines for competition breaches are not only punitive but also serve as a deterrent. A Commission spokesperson highlighted that the fines contribute to the EU’s general budget, indirectly reducing the tax burden on citizens. This response reflects the EU’s firm stance that companies operating in Europe must respect its laws and competition standards.
Mr Trump also mentioned ongoing conversations with other tech leaders, including Google’s Sundar Pichai and Meta’s Mark Zuckerberg, as part of his campaign outreach to prominent figures in the tech sector. Elon Musk, CEO of Tesla, and owner of X (formerly Twitter) has also shown support for Mr Trump, who has been vocal in his criticism of the EU’s stringent digital regulations, promising changes should he return to the White House.
As Mr Trump continues to engage with tech executives, regulatory pressures on tech companies in the EU are likely to remain a significant point of contention. With new regulations such as the Digital Markets Act and the Digital Services Act, the EU is signalling a continued commitment to reining in large tech platforms, which could lead to further scrutiny and financial repercussions for major firms operating within its borders.
Security Stop Press : Jailbreak Bypasses Safety in Three Steps
Researchers have unveiled a new jailbreaking technique, ‘Deceptive Delight,’ which successfully manipulates AI models to produce unsafe responses in only three interactions.
Palo Alto Networks’ Unit 42 researchers developed the method by embedding restricted topics within benign prompts, effectively bypassing safety filters. By carefully layering requests, researchers managed to coerce AI models into generating unsafe outputs e.g., harmful instructions, such as guidance on creating dangerous items (e.g., Molotov cocktails).
Unit 42 reported that in tests across 8,000 scenarios and eight AI models, Deceptive Delight achieved a 65 per cent success rate in producing harmful content within three interactions, with some models reaching attack success rates (ASR) above 80 per cent. By contrast, sending unsafe prompts directly without jailbreaking yielded only a 5.8 per cent ASR.
This technique is part of a rising trend in AI manipulation. Previous methods include Brown University’s language translation bypass, which achieved nearly a 50 per cent success rate, and Microsoft’s ‘Skeleton Key’ technique, which prompts models to alter safe behaviour guidelines. Each approach reveals ways attackers exploit model vulnerabilities, underscoring AI’s ongoing security risks.
Businesses can mitigate these risks through updated model filtering tools, prompt analysis, and swift adoption of AI security patches. Enhanced oversight can prevent manipulation tactics like Deceptive Delight, reducing the chance of harmful content generation.