Burger King Deploys AI Headsets to Monitor Staff ‘Friendliness’
Burger King is piloting OpenAI-powered headsets in 500 US restaurants that analyse drive-thru conversations, coach staff in real time and track hospitality signals such as whether employees say “please” and “thank you”.
What Is BK Assistant and How Does It Work?
The system, known as BK Assistant, sits inside employee headsets and a connected web and app platform. At its centre is a voice-enabled AI chatbot called “Patty”, built on OpenAI technology.
From the moment a customer pulls up at the drive-thru to the point they leave, the system analyses the interaction. It can prompt staff with recipe guidance, flag low stock levels such as a drink syrup running low, and alert managers if a customer reports an issue via a QR code.
It can also detect certain hospitality phrases. Burger King has confirmed that the system can identify words such as “welcome”, “please” and “thank you” as one signal among many to help managers understand service patterns.
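Detection of this kind can, in principle, be as simple as keyword spotting over a speech-to-text transcript, with counts aggregated at shift level rather than tied to a named employee. The Python sketch below is purely illustrative: the phrase list, transcript format and function names are assumptions, not details of Burger King's actual system.

```python
# Illustrative sketch only: keyword spotting over a drive-thru transcript,
# aggregated per shift rather than per employee. The phrase list and
# transcript format are assumptions, not Burger King's implementation.
from collections import Counter

HOSPITALITY_PHRASES = ("welcome", "please", "thank you")

def count_hospitality_signals(transcript_lines: list[str]) -> Counter:
    """Count how often each hospitality phrase appears in a transcript."""
    counts = Counter()
    for line in transcript_lines:
        lowered = line.lower()
        for phrase in HOSPITALITY_PHRASES:
            counts[phrase] += lowered.count(phrase)
    return counts

# Example: a few staff-side utterances from one shift.
shift_transcript = [
    "Welcome to Burger King, what can I get you?",
    "Would you like fries with that, please?",
    "Thank you, drive through to the window.",
]
print(count_hospitality_signals(shift_transcript))
# Counter({'welcome': 1, 'please': 1, 'thank you': 1})
```

Even a trivial counter like this shows why aggregation matters: reported at shift level it describes service patterns, while reported per employee it becomes individual scoring, which is exactly what Burger King says the system avoids.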
Designed To Streamline Operations
Restaurant Brands International, the Miami-based parent company of Burger King, has described the platform as being “designed to streamline restaurant operations” and allow managers and teams to “focus more on guest service and team leadership”.
The company has, however, been very keen to stress that the tool is not intended to record conversations for disciplinary monitoring or score individual workers. In statements to multiple outlets, Burger King has said: “It’s not about scoring individuals or enforcing scripts. It’s about reinforcing great hospitality and giving managers helpful, real-time insights so they can recognise their teams more effectively.”
The pilot is currently running in 500 US restaurants. The wider BK Assistant platform is expected to be available to all US locations by the end of 2026.
Why Now?
Fast food is a high-volume, low-margin business where seconds matter. Drive-thru performance, order accuracy and customer satisfaction scores directly influence revenue.
AI promises to reduce friction. Recipe reminders reduce training time. Automatic menu updates prevent customers ordering out-of-stock items. Real-time alerts about stock levels and cleanliness issues allow managers to act faster.
There is also a broader industry push towards automation. Labour costs remain one of the largest operational expenses in quick-service restaurants. At the same time, recruitment and retention challenges have persisted in many markets.
Against that backdrop, using AI as a coaching and operational support tool seems to be a commercially logical decision.
The friendliness monitoring element, however, is what has triggered the strongest reaction.
Support Tool or Surveillance?
Online backlash has been swift. Some critics have described the system as dystopian, arguing that analysing staff speech risks creating a culture of constant monitoring.
Burger King has attempted to position the system as supportive rather than punitive. “We believe hospitality is fundamentally human,” the company has said. “The role of this technology is to support our teams so they can stay present with guests.”
From a management perspective, aggregated data on service patterns could be useful. From an employee perspective, the idea that an AI system is listening for key phrases raises legitimate concerns about trust and autonomy.
AI systems are not infallible. Speech recognition technology can struggle with regional accents, background noise or overlapping conversations, particularly in a busy drive-thru environment. A missed “thank you” or a misheard phrase could distort the data being fed back to managers, creating the risk of misleading signals. Over time, that kind of inaccuracy could erode confidence in the system, both for staff expected to trust it and for managers relying on it to guide decisions.
There is also the wider debate about workplace surveillance. Customer service calls have long been recorded for quality purposes, but embedding AI analysis directly into frontline headsets seems to be a real step change in visibility.
So what is really going on? In reality, this is likely to be less about politeness policing and more about data. Fast food chains are increasingly treating operational behaviour as measurable input, and every interaction becomes a data point.
What It Means for Burger King and Its Competitors
For Burger King, the upside is operational consistency at scale. With thousands of restaurants, even marginal improvements in order accuracy or service speed can translate into significant revenue gains.
However, there’s also a reputational risk to consider here. If staff perceive the system as intrusive, morale could suffer. If customers view it as excessive monitoring, brand sentiment could be affected.
Competitors Doing It Too
Burger King is not the only fast-food company using AI. Across the sector, major brands are investing heavily in artificial intelligence as they look for gains in speed, consistency and tighter operational control.
Yum Brands, the parent company of KFC, Taco Bell and Pizza Hut, has announced a partnership with Nvidia to develop AI technologies across its restaurant estate, signalling a broader move towards data-driven kitchens and smarter front-of-house systems. McDonald’s has also experimented in this space. It previously tested automated AI order-taking at drive-thrus through a partnership with IBM before ending that trial in 2024, and has since turned to Google as it refines its AI strategy.
Quick-service restaurants are evolving into technology-led businesses, embedding AI into ordering systems, kitchen workflows and customer interactions in pursuit of efficiency and consistency at scale.
What Does This Mean For Your Business?
For UK SMEs and mid-sized organisations, this story is not really about burgers at all. It is about artificial intelligence moving out of the back office and into direct, frontline interaction with customers and staff.
Burger King is using AI to gather real-time operational data, coach teams and encourage consistent service standards. That same principle is now appearing across retail, logistics, healthcare and hospitality, where AI tools are increasingly shaping how people work rather than just analysing what has already happened.
That raises important governance questions. How exactly is the data being collected? How is it interpreted, and by whom? What visibility do managers have, and how clearly is the purpose explained to employees? These are not abstract compliance issues. They influence culture, morale and trust.
Used well, AI can remove friction, improve accuracy and support performance in ways that genuinely help staff do their jobs better. Used poorly, particularly in customer-facing roles, it can feel like constant surveillance, even if that was never the original intention.
For business owners, the lesson is not to avoid AI, but to introduce it carefully. For example, be transparent about what the system does and doesn’t do. Set boundaries and make sure the benefits are visible to staff as well as management.
Technology can analyse behaviour and surface patterns. The quality of service, however, still depends on people. That balance will define whether AI in the workplace feels empowering or intrusive.
Consumers Still Don’t Trust AI to Handle Customer Service
New research from Pegasystems and YouGov shows that most consumers in the UK and US remain wary of generative AI in customer service, preferring human interaction despite widespread corporate investment in chatbots and automated support.
What the Research Found
The study, published in February 2026 by Pegasystems Inc., a US-based enterprise AI software company, surveyed 4,748 adults across the UK and the US between 4 and 13 November 2025. The results show a widening disconnect between how confidently businesses are deploying generative AI in customer service and how comfortable consumers feel interacting with it.
Almost two-thirds of consumers (64 per cent) said they were either “not very confident” or “not at all confident” in the way businesses use generative AI when interacting with them. More than half, 53 per cent, lacked confidence that organisations use generative AI responsibly.
That scepticism appears to come from lived experience. For example, almost half (46 per cent) reported that they either “rarely” or “never” get successful outcomes when their customer service interaction is AI-powered. A similar proportion (48 per cent) said they do not trust businesses to handle their customer service entirely through AI.
People Prefer Human Support Over AI
What stands out most clearly is that people still prefer human support over AI. According to the research, 77 per cent say they “always” or “often” achieve better outcomes when dealing only with a person. Two-thirds (66 per cent) actively prefer human-led assistance. By contrast, just 2 per cent say they want to interact exclusively with generative AI chatbots.
Taken together, the figures suggest that while AI adoption has accelerated rapidly inside organisations, consumer confidence in those systems has not kept pace.
Why Consumers Don’t Trust AI
Simon Thorpe, Director at Pega, was clear about what is driving the unease. “AI can be transformational for customer service – but it has to live up to customer expectations,” he said in the company’s press release. “There’s a simple reason why we’re seeing a lack of consumer trust in the use of AI. There are just too many first-hand examples of businesses deploying these tools in ways that lead to dead ends and frustration.”
That frustration is now likely to be familiar to many customers. People report being stuck in automated loops, struggling to escalate to a human agent, or having to repeat information that has already been provided. Even when an issue is eventually resolved, the process can feel inefficient and impersonal.
Not Rejecting AI Outright
That said, the research suggests that consumers are not rejecting AI outright. Instead, they are reacting to how it has been introduced into customer service channels. As Thorpe added: “Businesses must build back consumer trust by moving past simple chatbots and deploying predictable AI agents that consistently get work done on behalf of customers. If businesses can use AI to make customer service faster and easier, they can drive massive new efficiencies while retaining customer trust.”
The distinction matters. The concern is less about AI existing and more about whether it delivers a reliable, transparent and genuinely helpful experience.
Consumers May Not Choose AI, But They Suspect It’s There Anyway
The research also reveals something more subtle. Although 48 per cent of respondents said they never actively choose to use generative AI in everyday tasks, many suspect they are already using it without realising it. Around 24 per cent think they probably interact with AI every day, even if they are not consciously aware of it.
That suggests a form of reluctant acceptance. People may not actively seek out AI-powered customer service, yet they understand that it is becoming embedded in daily life, from online banking and retail to travel and utilities.
AI is becoming part of everyday customer service, from chatbots and automated emails to voice systems and agent-assist tools. Yet many customers still question whether businesses are using it in ways that genuinely improve their experience. That contrast is becoming harder to ignore.
Pressure on Businesses to Deploy AI
Despite consumer scepticism, organisations face mounting internal and competitive pressure to adopt AI. Separate industry research from Gartner has found that more than nine in ten customer service leaders report being under pressure to implement AI within the year.
The commercial reasons are clear. AI promises lower operating costs, faster response times and improved self-service success. It can triage routine queries, surface relevant data for agents and operate around the clock.
For large enterprises, even marginal gains in efficiency can translate into significant savings. For smaller organisations, automation can help manage peaks in demand without expanding headcount.
However, the Pega findings suggest that cost efficiency alone will not secure customer loyalty. A separate study by Gladly and Wakefield Research has shown that even when AI or hybrid AI-to-human interactions resolve an issue, only a minority of customers say it increases their preference for the company. Customers, the report noted, “don’t resent AI… They resent wasted effort.”
That distinction, between resenting AI itself and resenting wasted effort, matters.
Implications
For consumers, the issue is not technology in itself. It is reliability. When AI works seamlessly, it fades into the background. When it misfires or blocks access to a person, frustration rises quickly.
For frontline staff, AI systems are reshaping workflows. In the best cases, they reduce repetitive administration and surface relevant information at speed. In weaker implementations, they add another layer of process that can constrain judgement rather than support it.
For senior leaders, AI in customer service now sits at the intersection of cost control, brand perception and regulatory scrutiny, and any decisions about deployment increasingly carry reputational weight.
Organisations are therefore navigating a narrow path. They must modernise service operations while protecting customer confidence and employee engagement. That balance is becoming a defining feature of digital strategy.
What Does This Mean For Your Business?
For UK SMEs and mid-sized organisations, the message from this research is clear. Customer service automation can’t be treated as a plug-and-play efficiency project.
Before expanding AI across service channels, it’s worth asking three commercial questions. Does it genuinely improve resolution times? Does it reduce customer effort? Does it enhance, rather than restrict, human support when it matters?
The data suggests that customers are not rejecting AI outright. They are simply reacting to poor experiences. That means implementation quality is now a competitive differentiator. A well-designed hybrid model, where AI handles routine interactions and escalates intelligently to trained staff, is likely to outperform either extreme.
There is also a governance dimension here. Transparent communication about how AI is used, what data is processed and when a human can intervene will increasingly influence trust. With regulatory scrutiny of automated decision-making growing across the UK and Europe, customer service AI is unlikely to remain outside compliance conversations for long.
For growing businesses, AI offers the opportunity to extend service hours, smooth demand spikes and provide operational insight that was previously unavailable. Yet the organisations that benefit most will be those that treat AI as an augmentation layer, not a replacement for judgement.
The commercial advantage will not come from deploying more chatbots. It will come from deploying better ones, supported by people, process and clear accountability.
Instagram To Alert Parents Over Repeated Self-Harm Searches
Instagram says it will begin notifying parents if their teen repeatedly searches for suicide or self-harm-related terms within a short period, adding to its existing content controls as scrutiny of teen digital wellbeing intensifies.
How The Alerts Will Work
The new feature applies to Teen Accounts enrolled in Instagram’s parental supervision tools. If a young user repeatedly attempts to search for phrases promoting suicide or self-harm, or terms such as “suicide” or “self-harm”, a notification will be sent to their parent or guardian.
Parents will receive the alert via email, text message or WhatsApp, depending on the contact information provided, alongside an in-app notification. The alert will explain that the teen has repeatedly attempted to search for such terms within a short time window and will provide access to expert resources designed to support sensitive conversations.
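Meta has not published the exact threshold, but the mechanism it describes, several flagged searches within a short period, maps onto a standard sliding-window pattern. The sketch below is a minimal illustration of that pattern; the window length, threshold and class names are assumed values, not Meta’s.

```python
# Minimal sliding-window sketch of a "repeated searches within a short
# period" trigger. WINDOW and THRESHOLD are illustrative assumptions,
# not Meta's actual values.
from collections import deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=15)   # assumed length of the time window
THRESHOLD = 3                    # assumed number of flagged attempts

class RepeatedSearchDetector:
    def __init__(self):
        self.attempts = deque()  # timestamps of recent flagged searches

    def record_search(self, timestamp: datetime) -> bool:
        """Record a flagged search; return True if a parent alert should fire."""
        self.attempts.append(timestamp)
        # Drop attempts that have aged out of the sliding window.
        while self.attempts and timestamp - self.attempts[0] > WINDOW:
            self.attempts.popleft()
        return len(self.attempts) >= THRESHOLD
```

A production system would layer de-duplication, limits on how often alerts themselves fire and the expert resources Meta describes on top of a trigger like this.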
Most Don’t Search For This
Meta has been keen to stress that the vast majority of teens do not search for suicide or self-harm content and that, when they do, Instagram already blocks those searches and redirects users to helplines and support resources. The new alert mechanism is intended to flag patterns of repeated attempts rather than single queries.
In its announcement, Meta said: “We chose a threshold that requires a few searches within a short period of time, while still erring on the side of caution.” The company acknowledged the risk of unnecessary alerts but argued that “empowering a parent to step in can be extremely important.”
When And Where?
The alerts will begin rolling out this week in the US, the UK, Australia and Canada, with wider availability planned later in the year. Meta has also confirmed that similar notifications are being developed for certain AI-related conversations, reflecting the growing role of AI chat interfaces in teen digital behaviour.
Why Now?
The timing reflects several pressures coming together at once. Meta and other social media companies are currently facing lawsuits in US courts alleging that their platforms have contributed to harm among young users. During recent testimony in federal and state proceedings, company executives were questioned over the pace of safety feature rollouts and the effectiveness of parental controls.
At the same time, internal research disclosed in separate proceedings suggested that parental supervision tools had limited impact on compulsive social media use.
Beyond the legal context, broader behavioural trends are also likely to be playing a part in this decision. In February, a Pew Research Center survey found that 64 per cent of US teens report using AI chatbots, compared with 51 per cent of parents who believe their teen uses them. While most teens use AI to search for information (57 per cent) or get help with schoolwork (54 per cent), 16 per cent say they have used chatbots for casual conversation and 12 per cent report using them for emotional support or advice.
These figures underline why Meta’s decision to extend parental alerts to AI interactions later this year may prove significant.
Mixed Views On AI From Teens
Interestingly, Pew also found that teens’ views on AI are mixed. For example, 36 per cent expect AI to have a positive impact on them personally over the next 20 years, while 26 per cent believe its broader impact on society will be negative. That ambivalence reflects a digital environment in which technology is both a support tool and a source of concern.
Balancing Intervention and Privacy
Introducing parental alerts for repeated search behaviour raises practical questions around privacy, proportionality and effectiveness.
Meta says it analysed Instagram search behaviour and consulted its Suicide and Self-Harm Advisory Group to determine an appropriate threshold. The aim, it says, is to avoid excessive notifications that could reduce impact over time.
The company also maintains strict policies against content that promotes or glorifies suicide or self-harm and states that it hides certain sensitive content from teens even when shared by accounts they follow.
The challenge, as with many digital safeguards, is calibration. Too little intervention risks missing warning signs. Too much risks undermining trust and intruding on normal adolescent privacy.
What Does This Mean For Your Business?
For organisations operating in digital platforms, education, youth services or AI development, this move illustrates how online safety, legal exposure and product design are increasingly intertwined.
Parental oversight features are no longer optional add-ons. They are becoming part of the baseline expectation for platforms used by minors. The extension of alerts into AI conversations also signals that companies view conversational systems as part of the same duty-of-care landscape as social feeds.
The Pew data adds another dimension. With 12 per cent of teens reporting use of AI for emotional support, and parents often underestimating that behaviour, organisations developing AI-enabled services will face growing scrutiny over how those systems respond to vulnerable users.
More broadly, the story reflects a shift from reactive moderation to proactive signal detection. Repeated search behaviour is being treated not just as content interaction but as a potential indicator of need.
For businesses, the implication is clear. Where products intersect with young users, mental health or AI-driven interaction, safety design must be demonstrable, measurable and defensible. The commercial risk of failing to anticipate that expectation is no longer theoretical.
Samsung Adds Built-In Privacy Display to Galaxy S26 Ultra
Samsung has unveiled a new display technology on its Galaxy S26 Ultra that allows users to activate a built-in privacy mode on a per-app basis, limiting what can be seen from side angles without the need for stick-on screen filters.
How the Privacy Display Works
The feature, branded “Privacy Display”, was introduced at Samsung’s Galaxy S26 launch event in San Francisco and will initially be available only on the Galaxy S26 Ultra, which goes on sale from 11 March in the UK (starting at £1,279).
At Pixel Level
Unlike traditional privacy films that sit over the screen and dim the display, Samsung’s approach is integrated at the pixel level. The company says the technology uses two types of pixels, described as narrow and wide, within what it calls a “Black Matrix” architecture. When privacy mode is enabled, the light path from each pixel is narrowed so that content remains visible when viewed directly but appears dark or obscured from side angles. When disabled, the display behaves like a standard screen, dispersing light in all directions.
Banking Apps Can Always Open In Privacy Mode
Samsung states that users can configure the feature so that specific apps, such as banking or messaging applications, always open in privacy mode. The setting can also apply to notifications, reducing the visibility of pop-ups from side views. An optional “Maximum Privacy Protection” mode further intensifies the effect by reducing brightness contrast to limit peripheral readability.
In its UK announcement, Samsung said the Galaxy S26 Ultra introduces “the world’s first built-in Privacy Display for mobile phones” and described it as reinforcing “Samsung’s commitment to privacy at a pixel level.”
Why This Matters
Shoulder surfing, the practice of observing someone’s screen in public spaces, has long been a concern for commuters and business users. Physical privacy filters have offered a partial solution but typically reduce brightness, distort colour or make it harder to share the screen deliberately.
Samsung’s integrated approach seeks to address those trade-offs. By embedding privacy control directly into the display hardware, the company aims to preserve viewing quality when privacy mode is off, while limiting exposure when activated.
The move also arrives at a time when smartphones are increasingly used for banking, two-factor authentication, work communications and AI-assisted tasks. The more sensitive activity a device handles, the greater the potential impact of casual visual exposure.
AI Central To Galaxy S26
At the same launch event, Samsung continued to position artificial intelligence as central to the Galaxy S26 line-up. TM Roh, Samsung’s President and Head of Mobile eXperience, said: “AI must become part of our infrastructure. You should be able to enjoy its benefits through the devices you use every day.”
However, it remains unclear whether AI features alone are driving large numbers of upgrades in an already mature smartphone market. While manufacturers continue to position AI as central to the next generation of devices, many users still prioritise practical factors such as battery life, camera performance and security. In that context, a built-in privacy display offers a more tangible and immediately understandable benefit for premium buyers.
Currently Limited To The Ultra Model
The Privacy Display is currently limited to the Ultra model, reinforcing its position as Samsung’s premium offering. The standard Galaxy S26 starts at £879, while the S26+ begins at £1,099.
Restricting the feature to the highest tier suggests Samsung sees it as part of a broader value proposition that includes upgraded AI performance, a customised chipset and enhanced thermal management. It may also have enterprise implications, particularly for organisations concerned about data exposure in public or shared environments.
That said, the feature’s effectiveness in real-world use will depend on user behaviour. Privacy mode must be activated, configured and understood. If users leave it disabled, the benefit disappears. There is also a balance between privacy intensity and usability, particularly in brighter environments.
Other Manufacturers Taking Similar Approaches
Samsung is not the first to address visual privacy, although its pixel-level implementation is new in mainstream smartphones. Laptop makers such as HP and Lenovo have for several years offered built-in privacy screen technologies, including HP’s Sure View and Lenovo’s PrivacyGuard, which narrow viewing angles at the hardware level.
In the mobile market, most privacy solutions to date have relied on stick-on filters or software-based controls rather than integrated display architecture. Samsung’s move suggests that hardware-level screen privacy may now be moving from enterprise laptops into premium smartphones, particularly as mobile devices are increasingly used for work and financial transactions.
What Does This Mean For Your Business?
For businesses, the introduction of hardware-level privacy controls highlights a change in how mobile security is being approached. Rather than relying solely on software encryption and access controls, manufacturers are now addressing physical visibility risks at the display level.
Organisations with mobile workforces, especially those handling financial, legal or personal data, may view such features as an additional layer of practical risk reduction. In regulated sectors, even incidental data exposure can have reputational or compliance implications.
However, hardware capability does not replace policy. Screen privacy settings must be configured, and staff still require awareness of secure working practices in public spaces.
The move by Samsung broadly reflects a growing expectation that privacy should be built into devices by design, not added later. As AI capabilities expand and phones handle increasingly sensitive information, the distinction between digital security and physical privacy appears to be narrowing.
Samsung’s Privacy Display may not, on its own, redefine the smartphone market. It does, however, show that privacy is becoming a hardware conversation as much as a software one, and that may influence future purchasing decisions across both consumer and enterprise segments.
Company Check : ServiceNow AI Resolves 90 Per Cent of IT Tickets
ServiceNow claims its new Autonomous Workforce AI is now resolving more than 90 per cent of targeted Level 1 IT help desk tickets inside its own organisation, marking a significant step in the shift from AI assistance to AI execution.
Autonomous Workforce and EmployeeWorks
The claim forms part of California-based enterprise software company ServiceNow’s early 2026 launch of Autonomous Workforce and EmployeeWorks, two products designed to move AI from answering questions to completing work.
ServiceNow says it has effectively acted as “customer zero”, deploying the technology inside its own IT service desk. In a launch post, the company stated: “When Moveworks joined ServiceNow in mid-December, our own IT helpdesk ticket volume doubled overnight. Two organisations, one service desk, twice the requests. But SLAs didn’t slip. Not one. Why? Because ServiceNow was customer zero with AI co-workers that absorbed the entire surge, handling 90% of L1 IT tickets without missing a beat.”
The initial focus is on Level 1 IT support, covering high-volume, repeatable issues such as password resets, account unlocks, software installation and VPN troubleshooting. ServiceNow describes its AI specialists as systems that “own a job, end to end – the same way a new team member would”, rather than simply recommending next steps.
On its platform site, the company says 90 per cent of IT support requests at ServiceNow are handled autonomously, that 85 per cent of IT support agents have been freed up for higher-value work, and that cases are handled 99 per cent faster than by human agents.
How It Works
ServiceNow’s core argument is that this is not a chatbot layered over unstructured knowledge. The Autonomous Workforce operates inside the ServiceNow platform itself, drawing on live configuration management data, workflows, policy engines, approval chains and historical incident patterns.
According to the company, AI specialists “run inside your governance model, learn continuously, and work around the clock.” They self-assign tickets within defined permissions, execute workflows and escalate where appropriate.
The message is straightforward. “Businesses don’t need more pilots or promises. They need AI that gets work done,” said Amit Zavery, President, Chief Product Officer and Chief Operating Officer at ServiceNow.
The emphasis is on measurable outcomes. Tickets are either resolved within policy boundaries or escalated with full context. The system is designed to operate inside existing role-based access controls rather than bypass them.
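ServiceNow has not published its internal logic, but the resolve-or-escalate pattern it describes can be illustrated simply: act only where the required action falls within defined permissions, otherwise hand off with full context attached. The sketch below is an assumption-based illustration, not ServiceNow’s API.

```python
# Simplified resolve-or-escalate sketch. Action names, ticket shape and
# the permission model are illustrative assumptions, not ServiceNow's API.
AUTOMATABLE_ACTIONS = {"password_reset", "account_unlock", "vpn_restart"}

def handle_ticket(ticket: dict, permissions: set[str]) -> dict:
    """Resolve a Level 1 ticket autonomously, or escalate with full context."""
    action = ticket["required_action"]
    if action in AUTOMATABLE_ACTIONS and action in permissions:
        # In a real system this would invoke an audited, role-scoped workflow.
        return {"status": "resolved", "action": action, "audit": [action]}
    # Outside policy boundaries: escalate rather than guess.
    return {"status": "escalated", "context": ticket,
            "reason": f"'{action}' outside automation policy"}

print(handle_ticket({"required_action": "password_reset"}, {"password_reset"}))
# {'status': 'resolved', 'action': 'password_reset', 'audit': ['password_reset']}
```

The design choice worth noting is that escalation is the default path: anything not explicitly permitted is handed to a human with the ticket context intact, which is the behaviour ServiceNow emphasises in its own description.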
Why This Matters
For years, AI in service management has largely been used for triage, recommendations and faster routing. Moving to autonomous, end-to-end execution at scale is a far bigger step.
If validated beyond ServiceNow’s own internal environment, this could represent a real shift in how enterprises think about AI in IT operations. The value isn’t simply in reducing headcount. It’s more to do with faster resolution times, fewer escalations and the ability to absorb growth without increasing staff numbers at the same pace.
For ServiceNow, the announcement also carries competitive weight. The IT service management market is increasingly contested, with competitors such as Salesforce targeting enterprise customers with AI-driven service offerings. Demonstrating internal success could strengthen ServiceNow’s position not just as a workflow platform but as an operational AI layer.
Benefits and Practical Constraints
For organisations already running ServiceNow, the appeal is clear. Repetitive Level 1 tickets consume time and money. If those can be resolved reliably without human intervention, IT teams can redirect skilled staff towards more complex incidents and strategic projects.
However, the model assumes structured data, well-defined workflows and disciplined governance. ServiceNow benefits from having two decades of structured operational intelligence inside its own platform. Many enterprises have more fragmented documentation and inconsistent data quality.
There are also governance considerations. Fully autonomous agents must know when to escalate. Thresholds, auditability and approval chains need to function under pressure. While ServiceNow emphasises built-in guardrails, customers will need to test those controls carefully in their own environments.
Pricing is another unknown. ServiceNow has not publicly detailed long-term cost structures for Autonomous Workforce. For customers, the commercial calculation will be straightforward. The AI must either cost less than the human effort it replaces or deliver measurable improvements in service performance.
What Does This Mean For Your Business?
For UK businesses, the headline figure of 90 per cent autonomous resolution should not be taken at face value without context. The more relevant question is whether your own IT environment is structured well enough to support that level of automation.
Autonomous IT support relies on clean configuration data, clearly defined approval hierarchies and consistent workflow design. Without those foundations in place, automation is more likely to expose gaps than eliminate friction.
It is also clear where the market is moving. Vendors are shifting from AI that advises to AI that executes. Organisations that treat AI as an operational layer, governed, monitored and measured in the same way as human teams, are more likely to unlock sustainable efficiency gains.
The opportunity is not only about reducing cost. It is about resilience. The ability to absorb spikes in demand, maintain service levels during periods of change and redeploy skilled staff towards higher-value work carries long-term strategic value.
Autonomy, however, alters the risk profile. Governance, oversight and escalation design move from technical details to core management disciplines. Businesses that invest in those capabilities will be better placed to introduce autonomous systems with confidence.
ServiceNow’s announcement raises expectations across the market. Whether a 90 per cent benchmark becomes common will depend less on vendor ambition and more on how prepared organisations are to support autonomous execution in practice. For most SMEs, reaching that level in the near term is unlikely, as it typically requires the structured data, mature workflows and governance discipline more often found in larger enterprises.
Security Stop-Press : Cyber Risk Rises After Iran Strikes
Cyber security firms have warned that the risk of retaliatory cyber activity has increased following US and Israeli strikes on Iran, with UK organisations urged to heighten vigilance.
Sophos has rated the current threat level as “Elevated”, with the highest risk in the coming days and weeks. Historically, Iran-linked actors have responded to geopolitical escalation with ransomware, wiper malware, DDoS attacks and “hack-and-leak” campaigns. CrowdStrike has already reported reconnaissance and DDoS activity consistent with Iranian-aligned groups, which can precede more disruptive operations.
For UK businesses, the danger is likely to be opportunistic targeting of exposed systems rather than direct state-level attacks. Enforcing multi-factor authentication, patching internet-facing services, reviewing remote access controls and validating secure backups are practical steps organisations should prioritise while tensions remain high.