Tech News : Microsoft Re-Launching Controversial ‘Recall’ Feature
After postponing the release of its ‘Recall’ screenshot feature in May over privacy concerns, Microsoft now plans to re-launch an updated version in November on its new Copilot+ computers.
Recall – What Happened?
At its Microsoft Build 2024 developer conference back in May, Microsoft announced that it planned to introduce ‘Recall’, an AI-powered feature designed to take periodic screenshots (snapshots) of everything a user interacts with on their PC. The screenshots (taken every 5 seconds) were to be stored (encrypted) and analysed locally on the user’s PC using AI-powered optical character recognition (OCR).
Why Take Snapshots?
The screenshots (referred to as snapshots) were intended to be used to provide a timeline of everything a user’s done and seen, and to enable the use of voice commands to search through this timeline. Yusuf Mehdi, Microsoft’s executive vice president and consumer chief marketing officer, said that with Recall, Microsoft “set out to solve one of the most frustrating problems we encounter daily — finding something we know we have seen before on our PC”. Recall was, therefore, intended to be a productivity and user experience-enhancing feature.
Privacy Concerns
However, Microsoft very quickly faced a backlash due to fears around privacy and data security relating to the Recall feature. Recall was described as a “privacy nightmare” and attracted the attention of the UK Information Commissioner’s Office (ICO), plus critics pointed out that the tool (which continuously records user activity) could easily become a “honeypot” for hackers, especially if malware gained access to these snapshots.
Other concerns centered around:
– The default setting enabling Recall on Copilot+ PCs without explicit user consent.
– A lack of moderation in what Recall recorded, i.e. very sensitive information including snapshots of passwords, financial account numbers, medical or legal information (and more) would be recorded and, therefore, could potentially be accessed and taken.
– Worries about who could access these recordings, particularly if devices fell into the wrong hands or were compromised by malicious software.
– Anyone who knew a user’s password could access a detailed history of that user’s activity.
– Since gaining initial access to a device is one of the easier elements of an attack, this alone could be enough to access the screenshots and steal sensitive information or business trade secrets.
Listened
Microsoft now says that it has listened to feedback and, after originally planning to debut Recall with its new Copilot+ computers in June, it has spent time removing some of Recall’s more controversial features and now plans to re-launch Recall in November (on its new Copilot+ computers).
What’s New About It?
With the revamped Recall, users must actively choose to enable it, rather than having it automatically activated. This change should give users more control over whether their data is recorded. Also, Microsoft has introduced encryption measures, secured via the Trusted Platform Module (TPM), to protect the screenshots that Recall takes. The data is also stored within a Virtualisation-based Security (VBS) Enclave, ensuring it’s more difficult for hackers or malware to access.
Additional enhancements are also understood to include the ability to set preferences for what content Recall captures, how long the data is stored, and what types of sensitive information (such as credit card details) should be automatically excluded from being recorded. In addition, an icon in the system tray will now notify users when screenshots are being taken, providing transparency and the option to pause the feature whenever desired.
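Microsoft hasn’t published how the sensitive-content filter works, but the general idea of spotting card-number-like text before it’s recorded can be sketched with the standard Luhn checksum. Everything below (the function names, the regex, the length limits) is illustrative only, not Recall’s actual implementation:

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum, widely used to validate card-number-like digit strings."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:      # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def looks_like_card_number(text: str) -> bool:
    """Flag text containing a plausible 13-19 digit payment card number."""
    for match in re.finditer(r"\b(?:\d[ -]?){13,19}\b", text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            return True
    return False
```

A real filter would be far more sophisticated (and cover many more data types), but a check like this shows why credit card details are one of the easier categories to exclude automatically.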
What Does It Mean For Your Business?
As Microsoft prepares to relaunch Recall with a more privacy-conscious design, it shows the company’s commitment to addressing the concerns raised earlier this year. By shifting to an opt-in model and enhancing encryption, Microsoft aims to give users more control over their data, which is crucial in today’s security-focused landscape. The added features, such as notification alerts and more granular content preferences, demonstrate a thoughtful balance between innovation and user safety.
These changes are not just superficial adjustments; they reflect Microsoft’s awareness of the growing need for transparent data management, especially with AI-powered tools that handle sensitive information. By actively listening to (and involving) users in deciding how Recall operates on their devices, Microsoft will, no doubt, be hoping to regain trust and re-establish Recall as a valuable productivity tool rather than a security risk.
Ultimately, whether these revisions are enough to win over privacy advocates remains to be seen. However, the revamped version of Recall marks a step in the right direction, highlighting how user feedback can shape technology in ways that benefit both functionality and security. Microsoft’s ability to adapt will likely be key to the long-term success of Recall and its broader Copilot+ initiative.
An Apple Byte : Apple Launching UK Roadside Assistance (Via Satellite)
Apple is set to extend its satellite messaging service, introducing a Roadside Assistance feature in the UK through a partnership with Green Flag. This new service, launching with the iPhone 16, will enable drivers to get help in areas with poor or no cellular coverage, using satellite connectivity.
Previously available only in the US, the satellite Roadside Assistance messaging service will mean that iPhone users will still be able to communicate with breakdown services without a mobile or Wi-Fi signal. Scheduled for release with the iPhone 16 later this autumn, it promises greater safety and convenience for iPhone-using motorists in remote areas.
Green Flag, the UK roadside assistance provider, will support Apple in deploying this satellite-driven service. Aimed particularly at regions where mobile coverage is unreliable, the service will be accessible via a new interface on the latest iPhone model. However, while generally effective in open spaces, its performance may be reduced under cover or near large obstructions (satellites usually need a clear line of sight).
Apple’s Roadside Assistance will operate on a pay-per-use basis, thereby offering flexibility for UK drivers who prefer not to commit to a full-time subscription. This new service is expected to set a new industry standard in emergency communications via satellite and could also encourage the broader adoption of satellite assistance technologies, thereby helping Apple to diversify its product offerings and enhance its strategic positioning within the technology sector.
Security Stop Press : Beware ChromeLoader Exploit Malware Website Campaign
An HP Wolf Security report has highlighted how hackers are leveraging a ChromeLoader exploit and using code-signing certificates and malvertising techniques to distribute malware via fake companies and websites.
As part of what appears to be a large-scale cyberattack, cybercriminals are reportedly distributing the ChromeLoader malware (a malicious browser extension) signed with valid code-signing certificates (the digital certificates used to verify software authenticity and integrity), allowing them to bypass Windows security measures like AppLocker without triggering user warnings.
The report highlights how the attackers set up fake companies to obtain these valid certificates or steal them from legitimate sources. These fake companies then host websites that offer seemingly legitimate tools, such as PDF readers or converters, to lure in victims.
The campaign uses malvertising (malicious advertising) to direct potential victims to the well-designed but malware-ridden websites which often appear in search results for popular keywords like “PDF converters” and “manual readers.”
Once victims visit these infected sites, their browsers can be hijacked, allowing attackers to redirect search queries to malicious sites, increasing the scope of their attacks.
HP’s report suggests that the scripts used in this campaign were likely developed using generative AI tools, making it easier and faster for cybercriminals to launch such attacks.
The advice to avoid ChromeLoader attacks is to only download software from trusted sources, be cautious of online ads, keep security features enabled, use antivirus software, and regularly update your browser and system.
Sustainability-in-Tech : UK Startup Makes ‘Lab’ Leather
Cambridge-based startup ‘Pact’ has raised £9 million in (seed round) funding to expand its factory space and scale up production of its “world-first” biomaterial – a skin made from collagen that’s a convincing alternative to leather.
Oval
Oval, developed by Pact, is a pioneering biomaterial made from natural collagen, designed to be a sustainable and scalable alternative to traditional materials like leather. For example, with Oval Pact says it is “Capturing the strength, feel, stretch and durability of heritage materials through upcycled collagen.”
The collagen used in Oval is sourced from ethical and environmentally friendly suppliers, often from surplus or recycled materials such as those used in cosmetics.
Oval not only looks like leather, but it also behaves like leather, i.e. it responds to scratches, water, and sunlight in a very similar way.
What’s Collagen?
Natural collagen, the biomaterial that Oval is made from, is a protein found in the skin, bones, and connective tissues of animals, providing structural support and elasticity. It is often used in cosmetics for its ability to promote skin hydration, elasticity, and repair, making it popular in anti-aging products. Collagen’s biocompatibility and strength make it an ideal material for sustainable biomaterials like Oval, which mimics leather while reducing environmental impact. The collagen used to make Oval is recycled collagen from old cosmetics with some herbal extracts, oils, and minerals added. Pact says: “Our collagen is a natural byproduct used in high-end cosmetics, skincare and pharmaceuticals”.
Customisable
Oval is versatile and customisable, allowing designers to create a wide range of textures, patterns, and colours. The material is finished using techniques traditionally applied to leather, making it ideal for luxury fashion, footwear, interiors, and more.
Chemical-Free + Reduced CO2
Its production is chemical-free, requires less water, and has a significantly lower carbon footprint compared to traditional leather production. Pact estimates that incorporating Oval in place of leather and synthetic alternatives could prevent 4.8 million tonnes of CO2 emissions annually!
Patented
Pact says that in the production of Oval, a patented process is used to transform cosmetic-grade collagen into collagen skins. Pact says the skins are then “enriched with all-natural ingredients, then enhanced using time-honoured finishing techniques”. Pact sums up the key benefits of Oval, saying “Oval radically reduces environmental impact and inspires unlimited design possibilities”.
Who’s It For?
Pact CEO, Yudí Ding, highlights how the company has already partnered with luxury maisons and how the new biomaterial has been embraced by leading fashion houses and groups globally. Investors in this seed round included Hoxton Ventures, ReGen Ventures, Celsius Industries (formerly Untitled) and Polytechnique Ventures.
Pact has also developed “drop in” manufacturing technology, enabling clients to produce Oval directly in their own supply chains.
Funding To Scale-Up
The £9 million of funding raised in this seed round has enabled Pact to invest in a new 13,820 sq ft headquarters in Cambridge, which includes a laboratory and pilot production facility. This will put Pact in a better position to push into the commercialisation phase and expand and scale up production to meet demand (which is anticipated to be global).
What Does This Mean For Your Organisation?
The success of Pact and its innovative biomaterial, Oval, marks a significant shift towards sustainable alternatives in industries traditionally dependent on leather. As environmental concerns become paramount, Oval’s ability to mimic leather while drastically reducing water usage and CO2 emissions could position it as a game-changer across fashion, interiors, and even automotive design. By offering a material that combines durability, versatility, and sustainability, Pact is responding to the increasing demand for eco-friendly solutions without compromising on quality or creativity.
This advancement doesn’t just affect consumers and brands, but it also sends a clear message to competitors in the materials industry. As Pact scales up production and solidifies partnerships with luxury brands, traditional leather manufacturers and other synthetic alternatives may feel the pressure to innovate or risk becoming obsolete. Oval’s ability to slot seamlessly into existing supply chains, thanks to Pact’s “drop-in” manufacturing technology, may give it an edge that could force competitors to reassess their production models and environmental footprints.
As more companies adopt sustainable practices, Pact’s Oval appears to be setting a new benchmark that competitors will likely need to meet. This biomaterial’s potential to reduce millions of tonnes of CO2 emissions annually makes it not just an alternative but possibly a necessary evolution for the industry. Ultimately, Pact’s breakthrough may not only disrupt the materials market but also challenge the entire ecosystem to raise its sustainability standards and embrace innovation.
All that said, however, Pact’s Oval is still at the beginning of its journey and has yet to prove itself in the full commercialisation phase, although the signs so far appear to be good.
Tech Tip – Use “Ctrl + D” to Quickly Bookmark Pages in Web Browsers
Quickly bookmark important pages or documents in any browser using the Ctrl + D shortcut, making it easier to save and access key resources. Here’s how to use it to bookmark a page and choose the bookmark folder:
How to Bookmark a Page
– While on the webpage you want to bookmark, press Ctrl + D.
Choose the Bookmark Folder
– Choose a folder to save the bookmark or use the default option, and click Done.
This tip works across all major browsers, including Chrome, Edge, and Firefox.
Featured Article : Would You Be Filmed Working At Your Desk All Day?
Following a recent report in the Metro that BT is carrying out research into continuous authentication software, we look at some of the pros and cons and the issues around employees potentially being filmed all day at their desks … under the guise of cyber-security.
Why Use Continuous Authentication Technology?
Businesses use continuous authentication technology to enhance security, i.e. to add an extra layer of protection. As the name suggests, this type of software continuously verifies users throughout their session, rather than relying solely on traditional one-time authentication methods like passwords or PINs. This approach is designed to mitigate risks such as session hijacking, whereby unauthorised users gain access after the initial login, or insider threats where someone might misuse another’s logged-in session. Continuous authentication essentially helps detect abnormal behaviour in real-time, flagging up potential breaches or fraud by monitoring unique patterns such as typing style, mouse movements, or facial recognition. By integrating this technology, businesses may hope to reduce security vulnerabilities, safeguard sensitive data, and improve compliance with industry regulations, all while maintaining a seamless user experience, i.e. it’s happening automatically in the background.
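Commercial systems use proprietary models, but the core idea behind keystroke-dynamics scoring can be sketched very simply: compare a session’s inter-keystroke timings against the user’s enrolled profile and flag large deviations. All of the names, numbers and the threshold below are illustrative assumptions, not any vendor’s algorithm:

```python
from statistics import mean, stdev

def anomaly_score(observed: list[float], profile: list[float]) -> float:
    """Mean absolute z-score of observed inter-keystroke intervals (seconds)
    against the user's enrolled typing profile. Higher = less like the user."""
    mu, sigma = mean(profile), stdev(profile)
    return mean(abs(t - mu) / sigma for t in observed)

# Hypothetical enrolled profile: this user's typical key-to-key delays.
profile = [0.11, 0.13, 0.12, 0.10, 0.14, 0.12, 0.11, 0.13]

THRESHOLD = 3.0  # tuned per deployment; trades false accepts vs false rejects

session = [0.12, 0.11, 0.13, 0.12]   # consistent with the enrolled profile
hijack = [0.31, 0.28, 0.35, 0.30]    # a much slower typist on the same session
```

Here `anomaly_score(session, profile)` stays well under the threshold while `anomaly_score(hijack, profile)` blows past it, which is the point of the technique: a hijacked session looks statistically different even though the login was valid. Real products combine many more signals (mouse movement, touch gestures, device handling) in the same spirit.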
BT Trialling Continuous Authentication Technology
BT is reported to be trialling BehavioSec’s behavioural biometrics technology at its Adastral Park science campus near Ipswich. This software is used for continuous authentication, whereby it monitors users’ unique behaviour patterns, such as how they type, move the mouse, or interact with their devices, to confirm their identity. However, BehavioSec’s technology doesn’t usually require the use of a camera, i.e. the user doesn’t need to be filmed by a webcam all day. Instead, it can rely on analysis of a user’s behaviour patterns by looking at factors such as keystroke dynamics, mouse movements, touchscreen gestures, and device interaction patterns (e.g. how the user holds their phone, scrolls through pages, or interacts with specific applications). In the recent Metro story, however, the reporter witnessed a demonstration of the system that did use facial recognition and required continuous filming of the user with a webcam/front-facing camera to detect whether the user’s face was consistent with expected dimensions.
BT is exploring this technology as part of its broader efforts to improve cybersecurity, particularly in response to the growing threat of cyberattacks and data breaches. The trials of BehavioSec’s behavioural biometrics technology are part of BT’s research into how it can use innovative technology to better protect digital assets and infrastructure, especially in enterprise and government contexts. For example, back in 2022, BT said it would be taking security to a new level so that even if an attacker obtained a device, any ongoing work session would end, locking the device, because the attacker’s biometrics wouldn’t match the known biometrics of the device’s user.
Systems Using Cameras?
There are, however, many such continuous authentication systems now available that require a camera to be trained on a user’s face. A few prominent examples include:
– FaceTec’s ZoOm. This is a 3D facial recognition solution that uses the front-facing camera of devices (it can use a webcam) to authenticate users, e.g. by carrying out “Liveness Checks, Face Matches & Photo ID Scans”. It’s often used in applications requiring high security, such as financial services or identity verification systems, and biometric security for remote digital identity.
– FacePhi. This (Spanish) biometric solution for facial recognition is widely used in the banking, healthcare, and fintech sectors for secure access to mobile banking apps and fraud prevention. The software uses a camera to identify users and offers continuous authentication by tracking facial features during interactions.
– IDEMIA’s VisionPass. This system combines 3D facial recognition with AI and uses cameras to recognise faces and continuously verify identities, even in challenging conditions like low light or with face masks. It’s generally deployed in secure facilities, airports, and government buildings for access control and ongoing authentication.
– Trueface. This AI-powered facial recognition technology integrates with existing security systems, such as cameras in corporate offices, to provide continuous authentication. Trueface can recognise and track users in real-time, improving access security and is used in corporate offices, airports, and law enforcement for continuous identification and authentication.
Other popular systems that use similar methods include Clearview AI, Neurotechnology’s Face Verification System, AnyVision, and ZKTeco’s FaceKiosk.
It’s also worth noting here that the “big tech” companies’ versions, such as Apple’s Face ID, Google’s Face Unlock (on Pixel devices), and Microsoft’s Windows Hello, are also facial recognition-based authentication systems that are classed as continuous authentication technology. However, for the purposes of this overview, we’re focusing on the kinds of systems that businesses may use for their own employees.
Issues
The usage of facial recognition (e.g. by law enforcement) has had its share of criticism in recent years. However, the thought of businesses using a camera to continuously film an employee, even if it may be for security purposes, such as continuous authentication, raises several serious issues and concerns. For example:
– An invasion of privacy. With constant surveillance, employees may feel that their privacy is being violated. Cameras can capture not only work-related activities but also personal moments, which may lead to discomfort and a sense of being micromanaged. Cameras might inadvertently record personal or sensitive information, such as confidential discussions, which could be accessed or potentially misused.
– The effect on employee trust and morale. Continuous filming can create an atmosphere of distrust between employees and employers. Workers may feel they are being monitored for reasons beyond security, leading to an atmosphere of fear, plus a decrease in morale and engagement (and ‘quiet quitting’).
– Psychological stress. Constant camera surveillance can lead to stress or anxiety among employees, affecting their overall well-being and productivity, which could obviously be counterproductive for the company.
– Data security and misuse. For example, video recordings of employees can contain sensitive biometric data, which, if compromised through a data breach, could have serious consequences. Biometric data is immutable, i.e. once stolen, it cannot be changed (like a password). There is a risk of video footage being misused, either by internal parties or external hackers. The footage could be exploited for purposes other than security, such as inappropriate monitoring of behaviour or harassment.
– Ethical concerns. These could arise if employees are not fully aware of the extent and purpose of the surveillance, or if they feel coerced into accepting it as a condition of employment. Also, filming employees all day can be viewed as excessive (overreach), especially if less invasive alternatives exist. Monitoring behaviour to this degree may cross ethical boundaries of acceptable workplace practices.
– Legal implications. Many regions have strict privacy laws (e.g. GDPR in Europe, CCPA in California) that require companies to obtain explicit consent for continuous surveillance and ensure the proportionality and necessity of such measures. Non-compliance could lead to legal consequences, fines, or lawsuits for a business. In some countries (or US states, for example) there are labour laws that protect employees from invasive workplace monitoring. Continuous surveillance may violate these protections if it is deemed too intrusive.
– The potential for bias and discrimination. Among other things, this could include algorithmic bias. If the continuous authentication system relies on facial recognition, there is a risk of bias against certain groups, such as racial minorities or those with disabilities, due to known issues with facial recognition accuracy across diverse demographics. Also, employees may worry that the surveillance data could be used for purposes other than security, such as evaluating performance, which could lead to discrimination or unfair treatment.
– Technical reliability, e.g. false positives/negatives. Continuous authentication systems relying on cameras may fail, leading to false positives (unauthorised users being granted access) or false negatives (legitimate users being denied access). This can disrupt work and erode trust in the system.
While continuous authentication aims to enhance security, using cameras to film employees all day raises significant challenges. Companies need to carefully balance security needs with privacy rights, ethical considerations, and legal compliance to avoid potential negative consequences. For example, in 2020, H&M (the Swedish multinational clothing retailer) was fined €35.3 million by the Hamburg Data Protection Authority in Germany for violating GDPR through excessive and invasive surveillance of employees.
What Is ‘Emotional Analysis’ And Why Is It Causing Concern?
Some continuous authentication software can now use ‘emotional analysis’. This refers to the use of AI to detect and interpret human emotions through cues like facial expressions, voice tones, or body language. Its purpose is to monitor and assess workers’ emotional states, such as stress, engagement, or satisfaction. It could help a business by providing insights into employee well-being and productivity, identifying signs of burnout or disengagement, and enabling management to respond proactively to improve workplace morale, increase efficiency, and enhance overall performance through better support and tailored interventions.
However, its usage also raises significant concerns around privacy, accuracy, and bias. The technology is often inaccurate, particularly across different demographics, leading to misinterpretation of emotions. Its use in workplaces for employee monitoring can create a sense of invasion and stress, eroding trust and morale. There are also ethical and legal issues, with fears of misuse for micromanagement or even manipulation of behaviour, making its widespread deployment highly controversial.
Susannah Copson, legal and policy officer with civil liberties and privacy campaigning organisation Big Brother Watch has described ‘emotion recognition technology’ as “pseudoscientific AI surveillance” and has called for it to be banned.
What Do Rights Organisations Say?
Big Brother Watch is strongly opposed to the unchecked growth of workplace surveillance tools, calling them an invasion of privacy, harmful to employee well-being, and in need of stricter regulation to protect workers’ rights. Big Brother Watch recently held an event at the UK Labour Party conference to launch its report on workplace surveillance in the UK, highlighting its increasing use by employers and its negative effects on employees.
Big Brother Watch argues that workplace surveillance technologies, such as keystroke logging and AI-powered emotional analysis, invade employee privacy, erode trust, enable micromanagement, and harm mental health, potentially violating privacy laws like GDPR, while calling for stricter regulation to protect workers’ rights.
How Much Has Workplace Surveillance Increased?
A recent report by ExpressVPN, titled the “2023 State of Workplace Surveillance,” highlights a significant increase in workplace surveillance. Some key findings include:
– 78 per cent of employers are using some form of employee monitoring tools in 2023, up from 60 per cent before the COVID-19 pandemic.
– 57 per cent of employers implemented new surveillance tools specifically due to remote work conditions caused by the pandemic.
– 41 per cent of companies now use software to track keystrokes, screenshots, or record the activity of employees’ screens.
– 32 per cent of employers monitor employee emails and messages, while 25 per cent track employee location using GPS or IP data.
A Growing Market
This surge in monitoring reflects the growing reliance on digital surveillance tools to manage remote workforces. Regarding the market for identity and access management (IAM) and cybersecurity solutions, Gartner reported in its “Market Guide for User Authentication” that continuous authentication is gaining traction due to increasing concerns about cybersecurity and the limitations of traditional login methods.
A MarketsandMarkets report has also noted that the global user authentication market, which includes continuous authentication solutions, is projected to grow from $13.9 billion in 2022 to $25.2 billion by 2027. A 2022 Verizon Data Breach Investigations Report also noted that 61 per cent of breaches involve stolen credentials and pushed companies to adopt continuous authentication as a preventive measure.
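For context, the MarketsandMarkets figures imply a compound annual growth rate of roughly 12.6 per cent, a quick calculation from the numbers quoted above:

```python
# Implied compound annual growth rate (CAGR) from the quoted figures:
# $13.9bn (2022) growing to $25.2bn (2027), i.e. over 5 years.
start, end, years = 13.9, 25.2, 5
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 12.6% per year
```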
What Can Employees Do?
If employees are concerned about continuous camera monitoring such as that used with some continuous verification systems, the (realistic) options they have are to:
– Review company policies to understand the purpose and limits of the surveillance.
– Raise concerns with HR or management to request less invasive alternatives, like fingerprint or password-based methods.
– Seek legal advice if monitoring violates privacy laws, or report it to a regulatory body like the ICO (in the UK).
– Consult with a union to negotiate privacy protections, if applicable.
– Document their issues for potential disputes and familiarise themselves with their rights under local privacy and employment laws.
What Does This Mean For Your Business?
The rise of continuous authentication software, particularly that using facial recognition and behavioural biometrics, highlights the tension between advancing cybersecurity and respecting employee privacy.
While the primary aim of these systems may be to offer ongoing, seamless security by monitoring users throughout their work sessions, the methods employed, such as continuous video surveillance or behavioural tracking, have raised significant ethical and privacy concerns. The promise of enhanced protection against cyberattacks, session hijacking, and insider threats is compelling, especially in industries where data security is paramount. However, the potential downsides of this technology can’t be ignored.
One of the key concerns is the invasion of privacy. Employees may feel uncomfortable or even violated if they know that cameras or other tracking mechanisms are monitoring their every move. The potential for these systems to inadvertently capture non-work-related activities, or even sensitive personal interactions, adds to the unease. Continuous surveillance risks creating an atmosphere of distrust between employers and employees, fostering a sense of being constantly watched, which could have a detrimental effect on morale. In extreme cases, this might lead to disengagement, lower productivity, or even a rise in ‘quiet quitting,’ as employees withdraw emotionally from their work due to feeling over-monitored.
Also, there are concerns about the psychological impact of constant surveillance. The knowledge that a camera or biometric system is perpetually tracking your behaviour can lead to stress, anxiety, and a feeling of being under perpetual scrutiny. This could, paradoxically, undermine the productivity gains that continuous authentication aims to protect. Employees working under these conditions might find it difficult to focus or perform optimally, especially if they perceive the surveillance as intrusive or excessive.
In addition to these privacy and security concerns, there are ethical and legal considerations. In many jurisdictions, privacy laws require companies to obtain explicit consent for such monitoring and ensure that the measures are proportionate and necessary. Failure to comply with these regulations could lead to hefty fines or legal action (as seen in the case of H&M’s €35.3 million fine in Germany).
There are also the issues of bias and discrimination. Facial recognition technologies have been shown to be less accurate across diverse demographic groups, potentially leading to unfair treatment of certain employees. If continuous authentication systems generate false positives or negatives due to these biases, it could create additional hurdles for employees from minority groups, further entrenching workplace inequalities. There is also the risk that the data gathered could be used for purposes beyond security, such as monitoring productivity or evaluating performance, which could lead to unfair assessments or discrimination.
Despite these challenges, it is clear why businesses are keen to explore continuous authentication technology. The ever-present threat of cyberattacks, data breaches, and insider threats has made it essential for organisations to find new ways to secure their digital assets. Continuous authentication offers a promising solution by providing ongoing verification without disrupting the user experience. However, businesses must tread carefully, ensuring that these systems are deployed in ways that respect employee privacy, comply with legal requirements, and avoid creating a toxic work environment.
As continuous authentication (seemingly inevitably) becomes more widespread, it will be crucial for businesses to engage in transparent communication with employees about how these systems work, why they are being implemented, and what safeguards are in place to protect their privacy. Offering alternative, less invasive methods, such as fingerprint recognition or password-based systems, may help alleviate some concerns. Ultimately, the successful adoption of continuous authentication will depend on striking the right balance between robust security measures and the protection of employee rights and well-being.