Meta Smart Glasses Security Controversy
Meta has terminated its contract with outsourcing firm Sama, leaving more than 1,000 Kenyan workers without jobs after they revealed they had been reviewing highly sensitive footage captured by users of Meta’s AI-powered smart glasses. The episode raises fresh concerns about privacy, labour practices, and the hidden human layer behind AI.
What The Workers Reported Seeing
The controversy began in February when workers employed by Sama in Nairobi told Swedish newspapers that their role involved reviewing and labelling video footage captured by Meta’s Ray-Ban smart glasses. According to those accounts, the material included deeply private scenes, with one worker stating, “We see everything – from living rooms to naked bodies.”
The footage was reportedly not limited to staged or deliberately shared content. Instead, it reflected everyday life captured by wearable cameras, including people undressing, using the toilet, and handling sensitive personal information. The workers’ role was to annotate this material so that Meta’s AI systems could learn to interpret visual and contextual data more effectively.
Meta acknowledged that human review forms part of its AI training process, stating that “photos and videos are private to users” and that human reviewers are used to “improve product performance” with user consent. However, the scale and nature of the material described by workers has intensified scrutiny over how that consent is obtained and understood in practice.
Why Did Meta End The Contract?
Less than two months after the investigation was published, Meta moved to end its relationship with Sama, a US-based outsourcing company that provides data annotation services, employing workers to review and label images and video used to train AI systems. The decision resulted in redundancy notices being issued to 1,108 workers with just days’ notice. Meta’s official explanation was that Sama “did not meet our standards,” although it did not specify which standards had been breached or when concerns were first identified.
Disputed By Sama
Sama has strongly disputed that characterisation, stating that it had “consistently met the operational, security and quality standards required” and had not been informed of any shortcomings before the contract was terminated.
The timing of the decision has prompted further questions. Labour groups and campaigners argue that the termination may have been linked to the workers speaking out rather than to performance issues, while Naftali Wambalo of the Africa Tech Workers Movement suggested that the standards in question may relate less to quality than to confidentiality, describing them as “standards of secrecy”, a claim that Meta has not publicly addressed.
The Human Layer Behind AI
The episode highlights a reality that is often overlooked in discussions about artificial intelligence. Before AI systems can recognise images, understand context, or respond to real-world inputs, large volumes of data must be manually labelled by human workers.
In this case, that process meant individuals in Kenya reviewing unfiltered footage captured by wearable devices used by people in entirely different parts of the world. The work sits at the intersection of privacy, labour rights, and technology development, with those carrying out the task often having limited visibility, protection, or influence over how the data is used.
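The labelling work described above can be pictured with a short sketch. This is a minimal, hypothetical illustration of a human-in-the-loop annotation record (the field names, labels, and filtering step are invented for clarity, not Meta’s actual schema or process):

```python
# Minimal sketch of a human-in-the-loop labelling record used to prepare
# training data for a vision model. All field names and label values are
# hypothetical illustrations, not any company's real schema.
from dataclasses import dataclass, field


@dataclass
class AnnotationTask:
    clip_id: str                                      # identifier of the clip under review
    labels: list[str] = field(default_factory=list)   # labels applied by the human reviewer
    contains_sensitive: bool = False                  # flag for escalation or filtering

    def annotate(self, label: str, sensitive: bool = False) -> None:
        """Record one label; mark the clip sensitive if the reviewer flags it."""
        self.labels.append(label)
        self.contains_sensitive = self.contains_sensitive or sensitive


# A reviewer works through a queue of clips and produces labelled records
task = AnnotationTask(clip_id="clip-0001")
task.annotate("indoor_scene")
task.annotate("person_present", sensitive=True)

# In principle, a filtering gate would keep sensitive material out of training
training_ready = [t for t in [task] if not t.contains_sensitive]
print(len(training_ready))  # → 0, the flagged clip is filtered out
```

The point of the sketch is that every label in such a pipeline is a human decision, and any sensitivity filtering depends entirely on how the review process is designed and enforced.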
Not The First Time For Meta
This is not the first time Meta’s relationship with outsourced labour has come under scrutiny. Previous contracts involving content moderation have been linked to claims of psychological harm, low pay, and inadequate support, with some former workers reporting symptoms consistent with post-traumatic stress. Sama itself exited parts of that work in recent years, acknowledging the challenges involved.
Regulatory Pressure
The revelations have prompted regulatory attention in multiple jurisdictions. For example, the UK’s Information Commissioner’s Office described the reports as “concerning” and requested further information from Meta, while Kenya’s data protection authority has launched its own investigation into the handling of the footage.
Legal challenges are also emerging. A class action lawsuit in the United States alleges that Meta misrepresented the privacy protections of its smart glasses, while privacy groups in Europe continue to question how user data is processed and whether consent mechanisms meet regulatory standards.
The concern centres on a key distinction. Meta’s policies may disclose that data can be used to train AI systems, but the extent to which users understand that their footage could be viewed by human reviewers remains unclear, particularly when that footage includes sensitive or intimate situations.
What This Means For AI Development
The decision to end the Sama contract does not remove the need for human input in AI systems. Instead, it exposes the tension between rapid technological development and the practical realities of how that development is supported.
Training AI models at scale requires vast amounts of labelled data, and that requirement does not disappear as systems become more advanced. What changes is the level of scrutiny applied to how that data is collected, processed, and reviewed, particularly when it involves real-world human behaviour rather than curated datasets.
Smart glasses themselves represent a significant step forward in AI-enabled consumer devices, combining real-time image capture with on-device and cloud-based processing. However, their effectiveness depends on continuous learning, which in turn depends on the availability of human-labelled data.
What Does This Mean For Your Business?
This story illustrates how organisations adopting AI tools may need to look beyond the technology itself and consider the full data lifecycle, including how training data is sourced, handled, and reviewed, particularly where external providers or offshore teams are involved.
For UK businesses, this has clear implications for compliance and accountability. Under UK GDPR and data protection law, responsibility does not disappear when data is passed to a third party, so organisations must be confident not only in how systems perform but also in how the underlying data is being processed, and by whom.
Reducing risk therefore means ensuring that suppliers and partners meet clear standards not only for technical performance but also for data governance, worker welfare, and transparency. Strong contractual controls, regular audits, and clear oversight of third-party processes become essential, especially when sensitive or personal data is involved.
The broader lesson, and one that may surprise many, is that AI systems are not purely automated but are built on human input at multiple stages, and any weakness in that chain can create reputational, legal, and ethical risk. Businesses that properly understand and manage that reality are far better placed to use AI responsibly while maintaining trust with customers, regulators, and stakeholders.
OneDrive Removes Local Recycle Bin Fallback For Cloud Deletions
Microsoft is changing how OneDrive handles deleted files, removing a long-standing fallback that many users rely on without realising it and increasing the risk of accidental data loss across synced devices.
What Has Changed In OneDrive?
The change is straightforward but significant. When a file is deleted from the OneDrive website, mobile app, or another synced device, it will no longer appear in the local Recycle Bin on a Windows PC or the Trash on a Mac.
Instead, the file is removed directly from the local device and can only be recovered from the OneDrive web-based recycle bin, which applies even if the file was previously available offline on that device.
Files deleted locally on the computer will continue to behave as expected and appear in the Recycle Bin. The key difference is where the deletion is initiated: if it starts in the cloud, the local recovery route no longer exists.
For most users, this represents a change in behaviour rather than a change in capability, and that distinction is exactly where the risk begins.
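The new behaviour boils down to a simple decision rule: the place a deletion starts determines where recovery happens. The toy model below captures that rule for clarity; the function and return strings are illustrative only, not a real OneDrive or Microsoft API:

```python
# Toy model of OneDrive's new deletion behaviour: the recovery location
# depends on where the delete was initiated. Names and return values are
# illustrative only, not a real OneDrive/Microsoft API.

def recovery_location(deleted_from: str) -> str:
    """Return where an accidentally deleted file can be recovered.

    deleted_from: "local" -> deleted in File Explorer/Finder on the device
                  "cloud" -> deleted via the website, mobile app,
                             or another synced device
    """
    if deleted_from == "local":
        # Local deletions still pass through the OS bin, as before
        return "local Recycle Bin / Trash (and OneDrive web recycle bin)"
    if deleted_from == "cloud":
        # Cloud-initiated deletions now skip the local bin entirely
        return "OneDrive web recycle bin only"
    raise ValueError(f"unknown deletion origin: {deleted_from!r}")


print(recovery_location("cloud"))  # → OneDrive web recycle bin only
```

Framing the change this way is useful for helpdesk guidance: the first question to ask a user is not “which file?” but “where did you delete it from?”.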
Why Is Microsoft Making This Change To OneDrive?
Microsoft’s reasoning appears to focus on performance and consistency. As OneDrive usage has expanded, particularly in business environments with large file libraries and multiple synced devices, managing file state across locations has become more complex.
By removing the local Recycle Bin step for cloud-initiated deletions, OneDrive can process changes faster and avoid maintaining duplicate recovery paths, meaning that instead of files appearing in multiple locations depending on how they were deleted, there is now a single, central recovery point in the OneDrive recycle bin.
From a system design perspective, this seems to make some sense, as it could simplify synchronisation, reduce overhead, and create a more predictable model for file recovery.
However, what works from an engineering perspective does not always align with how people actually use technology in practice.
The Risk
The core issue is not the removal of recovery altogether but the removal of a familiar and highly visible fallback that users have come to rely on.
For many people, the Recycle Bin is an instinctive fallback: if something is deleted by mistake, the first place they look is the desktop bin. That behaviour has been consistent across Windows systems for decades and is deeply ingrained.
Under the new model, however, that is no longer true for cloud-initiated deletions. A file removed via a mobile app or web browser will not appear locally, which can create confusion and delay recovery, particularly if users do not realise the change has taken place.
This is exactly the kind of thing that happens in day-to-day use. A quick deletion on a phone, a shared file removed by a colleague, or a mistaken action in the browser can now bypass the local recovery point entirely. In each case the file still exists in the OneDrive recycle bin, but only if the user knows to look there.
Without that awareness, the perceived loss can quickly become a real one, especially if recovery windows expire or if users assume the file has already been permanently deleted.
Operational Impact For Organisations
For organisations, the impact is less about the technical change itself and more about how it alters day-to-day processes around file management and recovery.
From a compliance perspective, UK GDPR and broader data protection responsibilities remain unchanged, meaning organisations are still accountable for ensuring that data can be recovered when needed, even though the route to recovery has changed.
Support teams are likely to see an increase in queries where users cannot find deleted files in expected locations, particularly in the early stages of rollout. Helpdesk processes that previously relied on guiding users to the local Recycle Bin will need to be updated to reflect the correct recovery path through the OneDrive web interface.
There is also a clear training requirement, as users need to understand that the method used to delete a file now determines how it must be recovered. Without that clarity, simple mistakes are more likely to escalate into avoidable support incidents.
Policies and internal documentation should also be reviewed to ensure that any references to local recovery for OneDrive files are accurate, especially in environments with remote working and multiple devices.
Reducing The Risk In Practice
Managing this change effectively comes down to awareness, process, and control.
Users should be told clearly that cloud-initiated deletions bypass the local Recycle Bin and that recovery must be carried out through OneDrive itself. It is a simple message, but one that can prevent a large number of avoidable issues.
Organisations may also want to review retention settings, particularly in business environments where the default 93-day recycle bin period can be adjusted. Extending retention or implementing additional backup solutions can provide an extra layer of protection.
From a technical standpoint, ensuring that version history and backup policies are in place becomes even more important, as the removal of one recovery route increases reliance on others, and those systems need to be robust and well understood.
A Small Change With Wider Implications
This update to OneDrive is a good example of how relatively small technical changes can have disproportionate real-world impact. The functionality to recover deleted files still exists, but the way users access it has changed, and that is enough to introduce risk.
For businesses, the key takeaway is that data protection is not just about systems and policies, but also about how people interact with them. When familiar behaviours are disrupted, even for valid technical reasons, the gap between expectation and reality is where problems tend to emerge.
Organisations that recognise this early and adapt their guidance, support, and controls accordingly will be far better placed to avoid unnecessary data loss and maintain confidence in how their information is managed.
What Does This Mean For Your Business?
For UK businesses, the bigger issue is not the technical change itself but how easily it can create a gap between how systems behave and what users expect to happen.
When a familiar behaviour changes without being widely understood, the risk increases because people continue to act on old assumptions, particularly in fast, everyday situations where files are deleted quickly and without much thought.
This is where data loss risk begins to build, not through system failure, but through misunderstanding, delay, and missed recovery opportunities.
The key response is to take control of that gap. Businesses that clearly communicate how file deletion now works, reinforce the correct recovery process, and ensure appropriate backup and retention measures are in place will be far better positioned to avoid unnecessary disruption.
This story serves as a reminder that cloud platforms continue to evolve in ways that can subtly change risk profiles, and organisations that actively monitor and adapt to those changes will be better placed to protect both their data and their day-to-day operations.
UK Plans New Social Media Restrictions For Under-16s
Social media restrictions for under-16s are moving closer to reality in the UK as ministers commit to action following a major consultation, signalling a significant change in how young people access digital platforms.
Why The UK Is Moving Towards Social Media Restrictions
The UK government has made it clear that some form of restriction on social media use for under-16s will be introduced, even if a full ban is not adopted, with ministers now focused on deciding how those measures should work in practice.
This change comes after growing concern about the impact of social media on children’s mental health, behaviour, and safety, alongside mounting political pressure from campaigners, parents, and members of Parliament. The Children’s Wellbeing and Schools Bill is central to this process, as it gives ministers the power to introduce restrictions through regulation rather than requiring entirely new legislation.
The consultation, which closes later this month, is designed to gather evidence on what combination of measures would be most effective, with ministers emphasising that the objective is not simply to act quickly but to ensure that any changes are workable and enforceable at scale, and that the approach should be “evidence-led, with input from independent experts”.
What Type Of Restrictions Are Being Considered?
Rather than focusing solely on an outright ban, the government is currently exploring a range of targeted interventions aimed at reducing harm while preserving some level of access.
One key area is the design of platforms themselves, with proposals to limit or remove features that encourage prolonged use, such as infinite scrolling, autoplay, and algorithm-driven content feeds. These features have come under increasing scrutiny for keeping users engaged for extended periods, often without clear stopping points.
Age verification is another major focus, with stronger enforcement expected to play a central role in any future framework, particularly given evidence that many children already bypass existing age limits by registering with false dates of birth.
The consultation is also examining the potential for time-based controls, including overnight curfews, as well as restrictions on access to AI chatbots and other emerging technologies that may expose children to inappropriate or harmful interactions, as part of a broader effort “to examine the most effective ways to ensure that children have ‘healthy online experiences’”.
Taken together, these measures point to a more granular approach, where specific features and behaviours are regulated rather than applying a single blanket rule across all platforms.
The Evidence Driving The Debate
The policy push is underpinned by a growing body of data and research highlighting both the scale of social media use among young people and the risks associated with it.
For example, recent figures show that social media use is nearly universal among teenagers, with around 95 per cent of 13 to 15-year-olds actively using platforms and the vast majority holding their own accounts. At the same time, a significant proportion of children report exposure to harmful or distressing content, including material linked to self-harm, bullying, and unrealistic body image expectations.
The Online Safety Act 2023 already requires platforms to take steps to protect children from harmful content, including enforcing age limits and removing illegal material. However, ongoing enforcement actions and investigations suggest that compliance has been uneven and that further intervention may be needed to achieve meaningful improvements.
Concerns have also been raised about the underlying design of platforms, particularly features that drive prolonged engagement, with policymakers pointing to risks from “design features that encourage them to spend more time on screens, while also serving up content that can harm their health and wellbeing”.
How Other Countries Are Approaching The Issue
Several countries have already introduced, or are actively considering, restrictions similar to those the UK is now exploring.
For example, Australia has taken the most direct approach, introducing a nationwide ban on social media access for under-16s, with platforms required to take reasonable steps to prevent children from creating or maintaining accounts. Early enforcement efforts led to millions of accounts being removed, demonstrating that large-scale intervention is technically possible, although questions remain about long-term effectiveness and circumvention.
Spain has signalled its intention to follow a similar path, while France has already introduced measures requiring parental consent for younger users and is exploring tighter controls. Across the European Union, regulators have also focused on platform design, with actions taken against companies over addictive features and insufficient child protection measures.
These international examples highlight how governments are increasingly willing to intervene directly in platform access, and how enforcement and user behaviour remain challenging, particularly where young people find alternative routes to access services.
What Challenges Still Need To Be Addressed
Implementing effective restrictions is likely to prove complex, particularly given the global nature of social media platforms and the ease with which users can bypass controls.
Age verification remains one of the most difficult issues, as systems must be robust enough to prevent misuse while also protecting user privacy and remaining practical for widespread adoption. Even with improved verification methods, there is a risk that children will migrate to less regulated platforms or use shared accounts to maintain access.
There are also broader questions about how restrictions might affect positive uses of social media, including communication, education, and community building, particularly for young people who rely on online spaces for support and connection.
These competing factors explain why the government has opted for a consultation-led approach, aiming to balance safety, practicality, and unintended consequences before finalising its strategy.
What Does This Mean For Your Business?
For UK businesses, the immediate impact will depend on how directly they interact with younger audiences, but the broader implications extend well beyond youth-focused platforms.
Changes to social media regulation are likely to influence how digital platforms operate more widely, particularly in areas such as content moderation, user verification, and the design of engagement features. Businesses that rely on social media for marketing, customer engagement, or recruitment may see shifts in platform behaviour, audience reach, and compliance requirements over time.
Stronger age verification and feature restrictions could also affect advertising strategies, especially where campaigns currently reach mixed-age audiences, requiring more careful targeting and clearer segmentation.
There is also a wider regulatory signal that digital products are increasingly being judged not just on functionality and growth, but on their impact on users, particularly vulnerable groups. This trend is already visible in areas such as data protection and online safety, and it is likely to extend further as governments respond to public concern about digital harms.
Organisations involved in technology, digital services, education, or safeguarding should be paying close attention, as the outcome of this consultation will help shape the next phase of UK digital regulation. Businesses that understand how these changes affect platform design, user behaviour, and compliance expectations will be better placed to adapt as new rules are introduced and enforced.
Lidl Expands Into Mobile Plans With App-Only Strategy
Lidl is expanding into mobile phone plans through a new global partnership, using its scale and loyalty app to offer low-cost, flexible connectivity without traditional contracts.
Why Lidl Is Moving Into Mobile
Lidl’s move into telecommunications is built on a strategic partnership with 1GLOBAL, which gives the retailer the technical platform and regulatory framework needed to operate as a Mobile Virtual Network Operator, or MVNO. This means Lidl can offer mobile services without building its own network, instead using existing infrastructure while focusing on pricing, customer access, and digital delivery.
The company is positioning this as a response to a clear customer need for “easily accessible, flexible, and affordable connectivity of the highest quality without long-term contract commitments”. That focus aligns closely with Lidl’s broader retail model, where simplicity, price transparency, and convenience are central to how it competes.
This is not Lidl’s first step into mobile, as it already operates Lidl Connect in several European markets, but the new partnership significantly expands its ambitions, both geographically and technically.
How Lidl’s New Mobile Offering Works
The most notable aspect of Lidl’s approach is how tightly the service is integrated into its existing ecosystem. For example, rather than just launching as a standalone telecom brand, the new plans will be delivered primarily through the Lidl Plus app, which already has tens of millions of users across Europe.
Within that environment, customers will be able to purchase and manage mobile plans digitally, often using eSIM technology, with no need for physical SIM cards or long-term commitments. Lidl describes this as part of a broader effort to make mobile services “simple, digital, and affordable” for a mass audience.
Julian Beer, Executive Vice President at Lidl International, framed the ambition clearly, stating: “We are democratizing mobile communications. Simple, affordable, and of the highest quality.”
The app-led model also allows Lidl to control the customer relationship directly, rather than relying on traditional retail channels or third-party distributors, which could help reduce costs while increasing customer loyalty.
A Different Approach To Telecom Competition
Lidl’s strategy stands out because it is not trying to compete as a conventional telecom provider. Instead, it is using its existing retail scale, customer base, and digital platform to enter the market from a different angle.
With more than 100 million customers and a presence in over 30 countries, Lidl is effectively turning its loyalty ecosystem into a distribution channel for telecom services. As the company notes, “we are creating an attractive platform for established telecommunications companies” by combining reach, data, and customer engagement.
This model also benefits network operators, which gain additional usage and customer access without having to manage the end-user relationship directly.
Hakan Koç, founder and CEO of 1GLOBAL, highlighted this broader transformation, saying: “We want to make mobile communications as intuitive, flexible, and digital as possible for millions of people.”
How This Compares To Existing UK Mobile Offers
The timing of Lidl’s expansion comes as existing UK mobile providers are already adjusting their pricing and plan structures.
For example, Asda Mobile has recently removed its cheapest 5GB plan priced at £4.50, while slightly reducing the price of its 10GB plan to £5.95 per month. It has introduced new mid-range options, including 50GB for £7.95 and 80GB for £9.50, while increasing the price of its 100GB plan from £10 to £12. These changes apply to 12-month and 24-month contracts, although the company has confirmed there will be no mid-contract price rises.
This highlights a key contrast. Traditional MVNOs like Asda Mobile continue to operate within a familiar structure of fixed plans, contract terms, and tiered pricing. Lidl, by comparison, is moving towards a more flexible, app-based model with short-term or no-contract options, which could appeal to customers who want greater control and fewer commitments.
What Could Hold Lidl Back?
Despite the scale and ambition behind the move, several challenges remain.
Customer trust will be a factor, particularly when it comes to relying on a supermarket brand for a critical service like mobile connectivity. Network quality will depend on local operator partnerships, meaning the experience may vary between regions.
There is also the question of how widely the service will be rolled out, and whether key markets like the UK will be included in the first phase. While Lidl’s reach is significant, telecom markets are heavily regulated and highly competitive, which could slow expansion.
The app-only model, while efficient, may also limit access for customers who prefer more traditional purchasing methods or who are less comfortable managing services digitally.
What Does This Mean For Your Business?
For UK businesses, the immediate impact may be limited, but the wider development matters more than the product itself. A retailer using digital platforms and an existing customer ecosystem to enter telecoms shows a clear change in how connectivity is being delivered and sold.
This development shows how industries are increasingly overlapping, with companies using data, apps, and customer relationships to expand into adjacent markets. Businesses that rely on mobile connectivity, whether for staff, operations, or customer engagement, may benefit from more flexible and potentially lower-cost options as competition increases.
There are also implications for customer expectations. As more services move towards app-based, contract-free models, users may begin to expect the same level of simplicity and control across other digital services.
At the same time, the entry of large retailers into telecoms adds pressure to existing providers, which could accelerate changes in pricing, service structure, and customer experience. Businesses that stay aware of these changes will be better placed to take advantage of new options as they emerge, while also understanding how evolving customer expectations could affect their own digital services and offerings.
Company Check : 1X California Factory To Produce 10,000 Home Robots
OpenAI-backed 1X Technologies has opened a California factory to build its NEO humanoid robot at scale, marking one of the clearest attempts yet to move home robots from futuristic demos into real consumer use.
Why 1X Is Scaling Home Robots Now
1X Technologies, a Norway-founded robotics company now based in California, has opened a 58,000 sq ft factory in Hayward with capacity to build up to 10,000 NEO robots a year, with plans to scale towards more than 100,000 units annually by the end of 2027. The company says demand has already been strong, stating that it “booked out our entire production capacity for the next year in just 5 days (10,000 NEOs).”
NEO is designed as a general-purpose home robot rather than a factory machine, with 1X positioning it as a household assistant that can learn tasks, move safely around people, and provide conversational support. Early access pricing has been reported at $20,000, with a subscription option around $499 per month, placing it firmly in early-adopter territory rather than the mainstream consumer market.
What Makes This Factory So Important
The significance of the Hayward factory lies in 1X’s attempt to control more of the robot’s production process in-house, rather than relying mainly on external suppliers. The company describes the site as “America’s first vertically integrated high-volume humanoid robot factory,” producing key components including motors, batteries, structures, transmission systems, sensors, and soft materials.
That matters because humanoid robots are still changing quickly. Manufacturing components internally should allow 1X to test, redesign, and improve parts faster as real-world feedback comes in from internal testing and early customers. As 1X puts it, “Most people think humanoids are a robotics problem. They’re wrong. It’s a manufacturing problem. Production makes prototypes look easy.”
Why Home Robots Are So Difficult To Build
Building a robot that can work in a private home is much harder than building one for a controlled factory floor. Homes are unpredictable, with different layouts, furniture, lighting, pets, children, clutter, and daily routines that do not follow a fixed industrial pattern.
1X appears to recognise that challenge, stating that “there is a lot that goes into creating the first ever humanoid consumer product experience” and that the product must be tested, improved, and packaged for customers who have “paid good money for a life-changing experience.” The company has also said, “We promised the first NEOs would ship in 2026, and we’re keeping that promise.”
The Competitive Landscape
The market around 1X is becoming crowded, with Tesla, Figure AI, Agility Robotics, Apptronik, Unitree, Agibot, UBTech, and others all developing humanoid robots for different use cases. Tesla’s Optimus is probably the most high-profile rival, but it is still primarily being tested inside Tesla’s own operations rather than sold broadly to consumers.
Agility Robotics’ Digit is already focused more clearly on logistics and warehouse work, while Figure AI has been targeting industrial and commercial deployments with partners such as BMW. Chinese companies including Unitree and UBTech are also moving quickly, often with lower-cost robots and strong manufacturing capacity, though many are aimed more at research, demonstration, or industrial use than general household assistance.
What makes 1X different is its consumer-first positioning. While many competitors are starting with factories, warehouses, or enterprise environments where tasks are more predictable, 1X is trying to put humanoid robots directly into homes, which could be more transformative but also much harder to make reliable.
What This Means For The Future Of Robotics
The move from prototypes to production is an important test for the whole humanoid robotics sector. Impressive videos can generate attention, but real adoption depends on whether robots can work safely, consistently, and usefully in ordinary environments.
The question is not whether NEO can perform selected tasks in controlled demonstrations. The real test is whether it can help enough in real homes to justify the cost, deal with unpredictable situations, and improve over time without frustrating users.
If 1X succeeds, home robots could begin to follow a path similar to early electric cars, starting as expensive, limited early-adopter products before becoming more capable and affordable as production improves. If it struggles, the market may move more slowly through enterprise settings before reaching the home.
What Does This Mean For Your Business?
For UK businesses, the immediate impact is not that humanoid robots will suddenly appear in every home or workplace, but that robotics is moving closer to practical deployment at scale. Organisations in care, facilities management, logistics, hospitality, retail, and property services should be watching this closely because many of the same capabilities being developed for homes could eventually apply to workplaces.
The wider business relevance sits in automation, workforce planning, and service delivery. Robots that can move safely around people, understand instructions, and handle varied physical tasks could eventually support cleaning, stock movement, basic maintenance, customer assistance, or care-related activities.
There are also important questions around safety, liability, privacy, cybersecurity, and staff acceptance. Any organisation considering robotics in future will need to understand not only what the machines can do, but how they collect data, how they are updated, who is responsible when something goes wrong, and how they fit into existing teams.
For now, 1X’s factory is less a guarantee that home robots are about to become mainstream and more a sign that the industry is entering a more serious phase. Businesses that start understanding the technology now will be better prepared if humanoid robots move from novelty to practical tool over the next few years.
Security Stop-Press : cPanel Bug Puts Hosted Websites At Risk
Hackers are exploiting a critical flaw in cPanel and WebHost Manager that can allow full server access without logging in.
Tracked as CVE-2026-41940, the issue lets attackers bypass authentication and reach admin panels. Canada’s Cyber Centre has warned that exploitation is “highly probable” and requires immediate action.
Because cPanel is widely used by hosting providers, attackers could gain control of websites, databases, and email accounts, potentially impacting multiple businesses on shared servers.
Patches have been released, but reports suggest that exploitation attempts began as early as February, before public disclosure.
To reduce risk, businesses should ensure systems are patched, check with hosting providers, review logs for unusual activity, and restrict access to admin interfaces.
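One of the mitigations above, reviewing logs for unusual activity, can be started with a short script. The sketch below flags requests to admin-style paths from IPs outside a trusted allowlist; the log layout, file paths, URL patterns, and allowlist are all assumptions to adapt to your own server, not cPanel-specific values:

```python
# Illustrative sketch: flag requests to panel/admin paths from IPs outside
# a trusted allowlist. The combined-log layout, URL patterns, and allowlist
# here are assumptions to adapt, not cPanel-specific values.
import re

TRUSTED_IPS = {"203.0.113.10"}  # example admin IP (RFC 5737 documentation range)
ADMIN_PATHS = re.compile(r"/(login|cpsess\w*)")  # example panel-style paths


def suspicious_lines(log_lines):
    """Yield log lines that hit admin paths from non-allowlisted IPs."""
    for line in log_lines:
        parts = line.split()
        if len(parts) < 7:
            continue  # skip lines that don't match the assumed layout
        ip, path = parts[0], parts[6]  # combined-log-style field positions assumed
        if ADMIN_PATHS.search(path) and ip not in TRUSTED_IPS:
            yield line


sample = [
    '203.0.113.10 - - [01/Mar/2026:10:00:00 +0000] "GET /login HTTP/1.1" 200 512',
    '198.51.100.7 - - [01/Mar/2026:10:00:05 +0000] "GET /login HTTP/1.1" 200 512',
]
for hit in suspicious_lines(sample):
    print(hit)  # only the non-allowlisted IP's request is flagged
```

A script like this is a starting point for triage, not a substitute for patching: the priority remains applying the released fixes and restricting admin interfaces at the network level.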