Ukraine Says Robots Seized Enemy Territory On Their Own

Ukraine says it has carried out the first combat operation in history where enemy territory was captured entirely using robots and drones, signalling a major turning point in how future wars may be fought.

How Ukraine Says Robots Captured Enemy Positions

The claim was made by Ukrainian President Volodymyr Zelensky when he announced that Ukrainian forces had seized an enemy position using only unmanned systems, without infantry entering the battlefield.

According to Zelensky, drones and robotic ground systems identified targets, suppressed enemy fire, and secured the position without Ukrainian casualties. Ukraine’s military has not released detailed operational information, and the full claim has not been independently verified. However, the announcement has attracted global attention because it points towards a battlefield where machines increasingly replace soldiers in frontline combat roles.

The operation reportedly involved a combination of aerial drones and unmanned ground vehicles working together as coordinated systems rather than isolated devices. Analysts say this type of “multi-swarm” warfare allows militaries to overwhelm positions while reducing risk to personnel.

In a statement published alongside footage of the operation, Zelensky said: “For the first time in the history of this war, an enemy position was taken exclusively by unmanned platforms – ground systems and drones.”

Why Ukraine Has Become A Testing Ground For Military Robotics

The war in Ukraine has accelerated military technology development at a pace rarely seen in modern conflicts. Systems that would normally take years to test and deploy are now being modified, upgraded, and returned to combat within weeks.

UFORCE, the Ukrainian-British defence technology company linked to the operation, was formed through the merger of nine Ukrainian defence companies and has now achieved a valuation exceeding $1 billion, making it Ukraine’s first defence technology unicorn. The company develops air, land, and sea drones, alongside battlefield software designed to coordinate unmanned systems during combat.

Its maritime drones have reportedly damaged or destroyed multiple Russian naval assets in the Black Sea, while its ground systems are increasingly being used for reconnaissance, logistics, mine clearance, casualty evacuation, and direct attacks.

The company says it has now conducted more than 150,000 combat missions since Russia’s full-scale invasion in 2022, reflecting how rapidly unmanned systems have become central to modern warfare.

Ukraine’s wider drone production has also expanded dramatically, increasing from a few thousand units in 2022 to several million by the end of 2025, turning the country into one of the world’s largest real-world testing grounds for autonomous military systems.

How The Wider Defence Industry Is Responding

Ukraine is not alone in pushing towards more autonomous warfare systems. Defence technology companies across the United States, Europe, China, and Israel are investing heavily in AI-enabled drones and robotic systems.

US company Anduril Industries recently tested an autonomous fighter jet and is building a major manufacturing facility in Ohio designed to scale production of military drones and autonomous systems. Germany’s Helsing is combining military AI with battlefield analytics software, while Chinese companies are rapidly expanding AI-enabled military technologies with strong state support.

The defence sector itself is also changing. Traditional contractors such as BAE Systems and Lockheed Martin increasingly face competition from technology-focused startups that develop software-defined systems far more quickly than conventional military procurement programmes allow.

UFORCE has openly framed this as part of a broader industrial transformation. The company states that “the age of unmanned warfare is no longer a conference-circuit prediction” but has instead become an operational and commercial reality.

How Battery Technology Also Fits Into This Story

The growing role of battlefield robots also highlights another practical challenge: how these machines are powered in demanding real-world conditions.

This is where developments outside defence can quickly become relevant. For example, Cambridge battery company Nyobolt has developed ultra-fast charging batteries designed for autonomous machines, warehouse robots, physical AI systems, and AI data centres. The company says its technology can charge from zero to 80 per cent in under five minutes and is built for repeated, high-intensity charging cycles.
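As a rough illustration of what that charging claim implies (a back-of-envelope calculation, not Nyobolt’s published specification), adding 80 per cent of capacity in five minutes corresponds to an average charge rate of roughly 9.6C, i.e. nearly ten times the battery’s capacity per hour:

```python
# Back-of-envelope: what "0 to 80 per cent in under five minutes"
# implies as an average charge rate, expressed as a C-rate
# (multiples of battery capacity per hour). Illustrative only.

def average_c_rate(charged_fraction: float, minutes: float) -> float:
    """Average charge rate needed to add `charged_fraction` of
    capacity in `minutes`, expressed as a C-rate."""
    hours = minutes / 60.0
    return charged_fraction / hours

rate = average_c_rate(0.8, 5.0)
print(f"Average charge rate: {rate:.1f}C")  # 0.8 / (5/60) = 9.6C
```

For comparison, a conventional electric-vehicle fast charge typically runs at well under 3C, which is why sustained rates in this range are considered demanding for both cells and charging hardware.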

Nyobolt’s work is not about the battlefield directly, but it shows how the wider robotics ecosystem is developing around the same core problem: autonomous machines need reliable power, rapid charging, and long operating life if they are to work continuously. In warehouses, that means robots spending more time moving goods and less time charging. In military settings, the same principle could shape how future unmanned systems are designed, deployed, and sustained.

This matters because the future of autonomous robotics will not depend on AI alone. Batteries, sensors, communications, materials, and manufacturing capacity will all play a part in determining which systems can operate reliably at scale.

The Ethical Questions Around Autonomous Warfare

The growing use of AI and robotic systems in combat is also intensifying concerns about accountability, ethics, and human oversight.

At present, most battlefield robots still require human operators to approve attacks or direct operations. However, many systems already use software-assisted targeting, autonomous navigation, and machine-learning tools to accelerate combat decisions.

Human rights organisations and international bodies have warned that increasing autonomy risks reducing human accountability in life-and-death situations. Concerns include how responsibility is assigned if autonomous systems malfunction or cause civilian casualties.

At the same time, defence companies argue that automation can reduce human error, improve reaction times, and protect soldiers from increasingly dangerous battlefield conditions.

The United Nations has discussed possible international controls on autonomous weapons, but no binding global framework currently exists despite growing calls for regulation.

What Does This Mean For Your Business?

For most UK businesses, robotic warfare may appear distant from everyday operations, but the technologies emerging from Ukraine are likely to influence far more than defence.

Many of the systems now being refined on the battlefield rely on AI, machine vision, autonomous navigation, secure communications, sensor fusion, and real-time data processing. It is worth noting here that these same technologies are also increasingly used in civilian sectors including logistics, manufacturing, transport, infrastructure monitoring, and cybersecurity.

The conflict is also accelerating investment into robotics and AI across Europe and the United States, creating commercial opportunities for companies involved in software engineering, semiconductors, communications systems, drones, sensors, and advanced manufacturing.

Also, the rapid militarisation of AI is likely to increase regulatory scrutiny around autonomous systems more broadly, particularly where safety, accountability, and decision-making are involved. Businesses developing AI-enabled products may therefore face growing expectations around transparency, oversight, and ethical controls.

Russia’s war against Ukraine is no longer only reshaping modern warfare. It has also become one of the world’s fastest-moving testing grounds for autonomous technology, with the systems emerging from the conflict likely to influence both defence and civilian industries for years to come.

AI Agents Are Starting To Rewrite The Software Industry

Enterprise spending on AI-native software is now growing far faster than traditional cloud software, signalling a major change in how businesses buy, use, and value technology.

Why The Traditional SaaS Model Is Under Pressure

For more than two decades, most enterprise software has operated on a relatively simple model. Businesses bought software licences based on the number of employees using a platform, often referred to as “per-seat” pricing.

This approach helped drive the growth of companies such as Salesforce, Workday, ServiceNow, Slack, Zoom, and countless other Software-as-a-Service (SaaS) providers. Revenue grew as customers added more staff and purchased more licences.

However, the rapid rise of AI agents and AI-native platforms is starting to disrupt that model.

Instead of simply giving employees tools to work with, AI-native systems increasingly aim to complete tasks themselves. For example, AI agents can now respond to customer enquiries, generate marketing campaigns, summarise meetings, analyse contracts, process onboarding requests, monitor systems, and automate internal workflows with limited human involvement.

This changes the economics of enterprise software because companies may no longer need as many human users interacting directly with traditional platforms.

The Spending Gap Is Growing Quickly

One clear sign of this transition comes from procurement platform Tropic, which analysed more than $18 billion in managed software spending. Its latest figures show AI-native enterprise spending grew by approximately 94 per cent year-on-year among mid-market and enterprise organisations, while traditional SaaS spending grew by around eight per cent.

It’s important to note that these figures reflect Tropic’s customer dataset rather than the entire global software market. However, analysts increasingly believe the underlying trend is real and accelerating.

Also, research from Deloitte suggests software companies are now under growing pressure to become “AI-first” businesses, with agentic AI expected to transform software operations, pricing models, and customer expectations across the industry.

Meanwhile, Gartner predicts that by 2030, at least 40 per cent of enterprise SaaS spending could move towards usage-based, agent-based, or outcome-based pricing models instead of traditional per-seat licensing.

What “AI-Native” Actually Means

Much of the current discussion centres around the difference between traditional SaaS, hybrid AI software, and fully AI-native systems.

Traditional SaaS platforms mainly rely on human users manually operating software interfaces. Hybrid systems add AI features into existing platforms, such as AI assistants inside Microsoft 365 or Salesforce.

AI-native platforms are different because the AI itself becomes the main worker inside the system.

For example, some newer customer service platforms now allow businesses to deploy autonomous AI agents capable of handling large volumes of enquiries across WhatsApp, email, web chat, and social media with minimal human input. Other AI-native systems can build workflows, generate reports, write software code, or analyse data through natural language instructions rather than manual configuration.

This helps explain why investors and software vendors are increasingly focusing on “agentic AI”, where software performs work autonomously rather than simply assisting humans.

Why Software Companies Are Rushing To Adapt

The pressure on traditional software firms is now becoming increasingly visible.

Many major software providers are rapidly embedding AI agents into their products, partly because investors fear that platforms failing to adopt AI quickly enough could lose market share to newer AI-native competitors.

Salesforce, Microsoft, Google, ServiceNow, Slack, Anthropic, OpenAI, and many others are now heavily promoting AI agents and autonomous workflow systems as core parts of their future strategies.

If one AI agent can perform work that previously required several employees using multiple software licences, the traditional per-user revenue model that has underpinned much of the software industry for decades becomes harder to sustain.

This has also created growing interest in alternative pricing structures based on usage, AI actions, outcomes, or completed tasks rather than simply employee headcount.
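The economics behind that shift can be sketched with a simple comparison. The prices and volumes below are hypothetical, chosen only to show how the two models scale differently: per-seat cost grows with headcount, while usage-based cost grows with the work the AI actually performs.

```python
# Illustrative comparison of per-seat vs usage-based software pricing.
# All prices and volumes here are hypothetical.

def per_seat_cost(seats: int, price_per_seat: float) -> float:
    """Classic SaaS: monthly cost scales with employee headcount."""
    return seats * price_per_seat

def usage_cost(tasks_completed: int, price_per_task: float) -> float:
    """Agentic pricing: monthly cost scales with tasks the AI completes."""
    return tasks_completed * price_per_task

# A 50-seat team at £30 per seat per month, vs an AI agent
# completing 10,000 tasks at £0.12 each:
print(per_seat_cost(50, 30.0))              # 1500.0
print(round(usage_cost(10_000, 0.12), 2))   # 1200.0
```

The crossover point matters: if one agent absorbs work previously spread across many licensed seats, vendor revenue under per-seat pricing falls even as the volume of work done rises, which is exactly the pressure described above.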

At the same time, many businesses are discovering that AI systems introduce very different cost structures from traditional SaaS.

Unlike standard software subscriptions, AI systems often consume large amounts of compute power, tokens, API calls, and cloud infrastructure. Research cited by Tropic suggests many organisations are now seeing AI-related software price increases far above normal annual SaaS uplifts.
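Those consumption-driven costs can be estimated in advance, though the inputs are volatile. The sketch below uses hypothetical per-token prices (real rates vary widely by provider and model) to show how token-metered spend compounds with request volume:

```python
# Rough sketch of how token-metered AI costs accumulate.
# The per-1k-token price is hypothetical; real provider rates vary.

def monthly_token_cost(requests_per_day: int, tokens_per_request: int,
                       price_per_1k_tokens: float, days: int = 30) -> float:
    """Estimate monthly spend for a token-billed AI workload."""
    total_tokens = requests_per_day * tokens_per_request * days
    return (total_tokens / 1000) * price_per_1k_tokens

# 2,000 requests a day averaging 1,500 tokens, at £0.002 per 1k tokens:
print(round(monthly_token_cost(2000, 1500, 0.002), 2))  # 180.0
```

Doubling either the request volume or the average response length doubles the bill, which is why AI-related costs are proving harder to forecast than flat per-seat subscriptions.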

What Does This Mean For Your Business?

For UK businesses, the most important point is that AI is increasingly moving beyond being a standalone productivity tool and is starting to reshape the software industry itself.

Businesses evaluating software suppliers may increasingly need to ask not just what a platform does, but how much human work it can realistically automate, what the long-term pricing model looks like, and how AI-generated decisions are monitored and controlled.

The trend also means software procurement is becoming more complicated. Traditional, predictable per-user pricing is gradually being replaced by models based on AI usage, actions, compute consumption, or business outcomes, which may make long-term costs harder to forecast.

At the same time, organisations adopting AI-native systems may gain significant efficiency advantages if these tools genuinely reduce manual workload, improve customer response times, or automate repetitive operational tasks.

However, many AI agents still remain imperfect, requiring human oversight, careful governance, and strong security controls. Businesses should therefore be cautious about assuming that AI-native automatically means lower risk or lower cost.

What is becoming increasingly clear, however, is that the software industry is entering a major transition period. The companies that succeed may not necessarily be those with the biggest software platforms, but those that can most effectively combine AI automation, workflow integration, trust, and measurable business outcomes into products organisations are willing to rely on every day.

Amazon Launches UK Drone Deliveries

Amazon has begun making drone deliveries in the UK for the first time, marking a major step towards autonomous AI-driven logistics becoming part of normal daily commerce.

Why Amazon Has Started UK Drone Deliveries Now

Amazon Prime Air has officially launched limited drone deliveries in Darlington, County Durham, making the UK the first country outside the United States where the company has rolled out the service commercially.

The launch follows years of testing, regulatory delays, safety reviews, and technical development. Amazon first trialled drone deliveries near Cambridge back in 2016, when one early test delivery reportedly took just 13 minutes.

The company is now using its newer MK30 drone platform, which has been designed to operate more quietly, fly further, and cope with a wider range of weather conditions than previous models.

For now, deliveries are restricted to a 7.5-mile radius around Amazon’s Darlington fulfilment centre. Packages must weigh less than 2.2kg and fit inside a relatively small parcel size.

Eligible customers can receive items such as batteries, cables, office supplies, beauty products and household essentials in under two hours.
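The stated limits can be expressed as a simple eligibility check. The thresholds (7.5-mile radius, under 2.2kg) come from the figures above; the function itself is illustrative, not Amazon’s actual order-routing logic:

```python
# Sketch of the drone-delivery limits described above.
# Thresholds are from the article; the check itself is hypothetical.

MAX_RADIUS_MILES = 7.5
MAX_WEIGHT_KG = 2.2

def drone_eligible(distance_miles: float, weight_kg: float) -> bool:
    """Return True if an order fits the stated drone-delivery limits."""
    return distance_miles <= MAX_RADIUS_MILES and weight_kg < MAX_WEIGHT_KG

print(drone_eligible(3.0, 0.5))   # True: nearby and lightweight
print(drone_eligible(9.0, 0.5))   # False: outside the radius
print(drone_eligible(3.0, 3.0))   # False: too heavy
```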

Amazon says the long-term goal is to make deliveries significantly faster. In some parts of the US, where the system is already operational in five states, the average delivery time is reportedly around 36 minutes.

How The Drone System Actually Works

The MK30 drones operate largely autonomously using onboard cameras, sensors, GPS, and machine learning systems designed to identify obstacles and avoid collisions.

Amazon says the drones can detect objects including washing lines, trampolines, trees, animals, people and other aircraft while descending for deliveries.

Packages are lowered into a customer’s garden or driveway from a height of around 10 to 12 feet, rather than requiring the drone to land fully.

The flights are taking place under Beyond Visual Line of Sight, or BVLOS, rules approved by the UK Civil Aviation Authority. That matters because it allows drones to operate autonomously beyond what a human pilot can physically see.

Even so, the drones are still monitored remotely from a control centre, with operators able to coordinate with air traffic control if needed.

Amazon has also secured temporary protected airspace around the Darlington test area while the trial continues.

Why Darlington Was Chosen

Darlington was selected partly because it is believed to offer a useful mix of residential areas, rural land, roads and nearby airspace within a relatively compact area.

That allows Amazon to test how the drones cope with real-world conditions without immediately dealing with the extreme complexity of major cities.

This is important because dense urban environments remain one of the biggest technical challenges for drone delivery systems.

Practical Limitations

It should be noted here that drone deliveries also face practical limitations in dense urban environments, where high-rise buildings, congested airspace, and limited landing areas make autonomous delivery far more difficult than in lower-density suburban or rural locations.

Issues such as access to flats and apartments, rooftop delivery infrastructure, airspace congestion, safety management and public acceptance remain unresolved in many city environments.

The current Darlington operation is also still relatively small in scale, with Amazon currently carrying out a maximum of around 10 flights per hour.

Safety Questions Remain Around Drone Parcel Deliveries

Despite Amazon’s confidence in the technology, safety concerns remain one of the biggest barriers to wider public acceptance and regulatory expansion.

Amazon’s rollout comes after several incidents involving its MK30 drones in the United States.

One drone reportedly clipped a building in Texas earlier this year after temporarily losing GPS positioning. Other incidents involving collisions during testing in Arizona and Oregon also triggered investigations and delays.

Amazon says no injuries occurred and describes the incidents as part of the normal process of refining a new aviation system.

The company also argues that the drones operate to aerospace-level safety standards and include multiple backup systems.

Public Reaction

Public reaction in Darlington seems to have been mixed. Some residents have reportedly embraced the convenience and novelty of near-instant deliveries, while others have raised concerns around noise, safety and whether drones are really necessary for ordinary household deliveries.

Many AI-powered autonomous systems still face a basic problem, namely that people do not automatically trust them simply because the technology works.

A similar challenge is now emerging elsewhere in the tech industry. Meta, for example, is increasingly using AI systems to estimate users’ ages and help enforce safety rules on platforms like Instagram. In both cases, companies are asking the public to trust autonomous systems to make decisions that were previously handled directly by humans.

Why This Matters Beyond Parcel Deliveries

The full significance of Amazon’s rollout is not really about faster deliveries of batteries or office supplies.

The bigger story is really that autonomous AI systems are steadily moving out of controlled test environments and into ordinary public infrastructure.

Drone delivery combines several technologies that businesses are likely to encounter more frequently over the next decade, including machine learning, autonomous navigation, remote monitoring, automated compliance systems and AI-assisted decision-making.

The UK is already experimenting with similar technology elsewhere. For example, the NHS has been trialling drones for transporting blood supplies in London, while Royal Mail has used drones to deliver parcels to remote communities in Orkney. Many of these early deployments focus on environments where conventional transport is slow, expensive or difficult.

The commercial logic for drone parcel deliveries is also becoming clearer. For example, labour shortages, rising delivery costs, pressure for faster fulfilment and growing demand for same-day delivery are all pushing logistics companies towards greater automation.

What Does This Mean For Your Business?

For most UK businesses, drone deliveries are unlikely to become an immediate operational reality. The technology still faces significant regulatory, technical, and public acceptance barriers, especially in towns and cities.

However, AI-driven autonomous systems are increasingly becoming part of everyday business operations, with AI now making more decisions in areas such as logistics, security, customer verification, fraud detection and operational management.

That creates opportunities for faster services and lower operating costs, but it also increases the importance of governance, oversight, cybersecurity and trust.

Amazon’s drone rollout is, therefore, less about flying parcels and more about what happens when AI systems begin interacting directly with the physical world at scale.

For UK businesses, the key lesson here may simply be that autonomous systems are no longer experimental concepts sitting in research labs. They are beginning to appear in everyday operations, regulation, infrastructure and customer services, often much sooner than many organisations expected.

Children Fool Online Age Checks With Fake Moustaches

Children are reportedly bypassing online age-verification systems using methods as simple as drawing fake facial hair, raising fresh questions about whether current age-assurance technology is robust enough to protect young users online.

How Children Are Bypassing Age Checks

The issue was highlighted in a new report from UK online safety organisation Internet Matters, which surveyed more than 1,200 UK children and parents about online safety and age verification under the Online Safety Act.

The findings suggest that many children already understand how to bypass checks designed to block access to adult content, restricted social media features, and age-limited online platforms. The report found that 46 per cent of children believed age checks were easy to bypass, while only 17 per cent described them as difficult.

One of the more unusual techniques involved children drawing fake moustaches or facial hair using makeup pencils in order to fool facial age-estimation systems. Internet Matters stated that “children demonstrated a clear awareness of how to bypass age checks” and noted that drawing on facial hair was “reported as working in multiple instances”.

The report also found that around one-third of children admitted bypassing age checks entirely, including by entering fake birthdays, using someone else’s account, uploading photos of adults, or using VPNs to avoid restrictions.

Parents were also found to play a role in some cases. Internet Matters reported that 26 per cent of parents had either helped their child bypass age checks or knowingly allowed it.

Why Age Verification Is Expanding

The rapid growth of age-verification systems is being driven largely by new online safety laws introduced across the UK, Europe, Australia, and parts of the United States.

In the UK, the Online Safety Act requires platforms to take stronger steps to protect children from harmful content, including pornography, violence, self-harm material, and certain addictive platform features. The law also requires pornography services to implement what Ofcom describes as “highly effective age assurance”.

As a result, many websites and apps now use facial age estimation, government ID uploads, third-party verification systems, or behavioural analysis to estimate a user’s age.

Internet Matters found that 53 per cent of children had recently been asked to verify their age online, with checks commonly appearing on platforms including TikTok, YouTube, Roblox, Instagram, Reddit, Discord, and Twitch.

The report also noted that many children actually support stronger safety protections online. One child quoted in the research said: “I think it’s good because it keeps us from viewing adult content which is not going to be good for our mental health.”

How Meta Is Using AI To Estimate Age

The wider technology industry is already moving beyond simple “enter your birthday” systems and towards AI-driven age estimation.

Meta recently confirmed it is using AI systems to analyse photos, videos, captions, interactions, and behavioural signals to determine whether users may actually be underage, even if they claim to be adults.

According to Meta, its systems now use “visual analysis” to estimate age using factors such as height, bone structure, and broader visual cues. The company stated: “Our AI looks at general themes and visual cues, for example height or bone structure, to estimate someone’s general age.”

Meta stressed that this “is not facial recognition” and says the technology is designed to place suspected teenagers into stricter “Teen Account” protections automatically, or remove users believed to be under 13 until they can verify their age.

The company is also expanding these systems across Instagram, Facebook, Messenger, Reels, Live streams, and Groups as governments place increasing pressure on platforms to improve child safety.

Why The Technology Still Struggles

Despite increasingly sophisticated systems, the latest findings show that age assurance remains far from foolproof.

Facial age-estimation technology relies heavily on probability rather than certainty, meaning lighting, makeup, camera quality, facial expressions, accessories, and image manipulation can all affect results. Internet Matters also found that some children had successfully used video game characters, AI-generated faces, or edited images to bypass checks.
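The underlying weakness can be sketched in a few lines. The estimator below is entirely hypothetical (no real vendor’s model works this simply), but it shows why a decision built on a probabilistic estimate plus a threshold can flip when the input is manipulated, even though the user’s real age is unchanged:

```python
# Minimal sketch of threshold decisions over a probabilistic age
# estimate. The estimator and thresholds are hypothetical, purely
# to illustrate why such checks can be fooled.

def apply_age_gate(estimated_age: float, confidence: float,
                   min_age: int = 18, min_confidence: float = 0.8) -> str:
    """Decide access from a probabilistic estimate, not a known fact."""
    if confidence < min_confidence:
        return "escalate"   # low confidence: fall back to ID checks
    return "allow" if estimated_age >= min_age else "block"

# A drawn-on moustache that nudges the estimated age past the
# threshold flips the outcome for the same real person:
print(apply_age_gate(15.0, 0.9))  # block
print(apply_age_gate(19.0, 0.9))  # allow
print(apply_age_gate(19.0, 0.5))  # escalate
```

This is why the debate centres on fallback mechanisms (ID verification, parental confirmation) rather than on making a single estimator perfect.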

The report also highlighted wider concerns around privacy and cybersecurity. Some parents and children expressed discomfort about uploading passports, ID documents, or facial scans online, particularly if third-party verification companies are involved.

Others worried that large-scale age verification could create attractive targets for cybercriminals if sensitive personal data were breached or leaked.

At the same time, supporters argue that platforms cannot realistically deliver age-appropriate experiences without some form of reliable age assurance.

What Does This Mean For Your Business?

For UK businesses, the story highlights the growing difficulty of verifying identity and age online, particularly when AI systems are being asked to make judgement calls based on appearance, behaviour, and probability rather than certainty.

Organisations developing online platforms, customer portals, AI tools, or digital services are likely to face increasing regulatory pressure around age assurance, child safety, privacy, and identity verification as online safety laws continue to expand globally.

The findings also underline a wider cybersecurity and governance challenge. Systems that rely entirely on automated trust signals can often be manipulated in unexpected ways, particularly when users actively look for workarounds.

At the same time, the growing use of facial analysis, behavioural monitoring, and AI-driven verification is likely to increase scrutiny around privacy, biometric data handling, transparency, and UK GDPR compliance.

The main lesson here for businesses is that safety technology alone rarely solves behavioural problems completely. Effective online protection increasingly depends on combining technical controls with education, parental involvement, platform accountability, and realistic expectations about how people actually behave online.

Company Check: Chrome AI Model Download Raises User Control Questions

Reports that Google Chrome may download a multi-gigabyte AI model onto some desktop computers without many users realising it have sparked debate about transparency, storage use, privacy, and how AI features are increasingly being embedded into everyday software.

How The Controversy Started

The issue emerged after privacy researcher Alexander Hanff published a detailed blog post claiming that Chrome had silently downloaded a file called weights.bin onto his system as part of Google’s Gemini Nano on-device AI system.

According to Hanff, the file appeared inside a folder named OptGuideOnDeviceModel and occupied around 4GB of storage space. He claimed Chrome downloaded the model automatically in the background and that manually deleting the file caused it to reappear later after Chrome re-downloaded it.

The story quickly attracted wider attention, leading to other users reporting that they had discovered large unexplained files linked to Chrome installations.

Importantly, there is currently no evidence that the file is malicious software or spyware. The debate instead centres on whether users were given enough visibility and control over what was being installed and why.

What The File Actually Does

The file is understood to contain Google’s Gemini Nano model, a smaller local version of its Gemini AI system designed to run directly on devices rather than entirely in the cloud.

Google has increasingly been building AI capabilities into Chrome, including scam detection tools, writing assistance, summarisation features, developer APIs, and other AI-assisted functions. Running some of these tools locally can reduce latency and limit the amount of information sent back to remote servers.

In a statement from Google, reported by Android Authority, the company said: “We’ve offered Gemini Nano for Chrome since 2024 as a lightweight, on-device model. It powers important security capabilities like scam detection and developer APIs without sending your data to the cloud.”

Google also stated that the model is designed to uninstall automatically if a device is low on resources, and that it has started rolling out settings allowing users to disable and remove the model more easily.

Why Some Users Are Concerned

Much of the concern is not about AI itself, but about how these features are being deployed.

Many users appear to have been unaware that Chrome could download several gigabytes of AI model data in the background, particularly on systems where storage space, bandwidth, or battery life may already be constrained. Some users also questioned whether these features should be enabled automatically rather than introduced through a clearer opt-in process.

Hanff’s blog post went much further, arguing that large-scale AI downloads could carry environmental implications when multiplied across potentially hundreds of millions of devices worldwide. His article also raised legal and regulatory questions under European privacy law, although those claims have not been tested in court and Google has not publicly responded directly to the legal allegations.

The broader issue reflects growing public unease around how AI is increasingly becoming embedded inside familiar products, often with little visibility into what is running locally, what data may be processed, and how much system resource is being consumed.

Why Google Is Pushing AI Into Chrome

It should be noted here that Google is certainly not alone in embedding local AI models into consumer software.

For example, Microsoft has added AI assistants and local AI features into Windows and Office. Apple is expanding on-device AI processing across macOS and iOS. Also, Meta is building AI tools directly into Facebook, Instagram, and WhatsApp. Browser makers and operating system vendors increasingly view AI as a core platform feature rather than a standalone application.

It’s also worth noting here that local AI processing can offer some genuine advantages. For example, keeping certain AI functions on-device rather than constantly sending data to cloud servers can improve response times, reduce some privacy risks, and allow features to continue working offline.

That said, this has created a new challenge for software vendors because AI models are often large, resource-intensive, and not always obvious to ordinary users.

The debate around Chrome highlights how software expectations are changing. Browsers are no longer simply lightweight web access tools. They are increasingly becoming AI-enabled operating environments running sophisticated local models behind the scenes.

What Does This Mean For Your Business?

For UK businesses, the issue is less about the AI model itself and more about whether organisations have enough visibility and control over the growing number of AI features now being built into everyday software.

Organisations should review which AI features are enabled across browsers and workplace devices, particularly in managed IT environments where storage, performance, bandwidth usage, and data handling policies matter. IT teams may also want to assess whether local AI models are necessary on all devices or whether some features should be disabled through enterprise policy controls.

The story also highlights a wider challenge facing businesses as AI becomes embedded into mainstream software products. Features that were once optional add-ons are increasingly arriving automatically through standard updates, making it harder for organisations to fully understand what software is doing behind the scenes.

Businesses that maintain clear software governance, strong endpoint management, and active oversight of AI-related features will be better placed to balance the potential benefits of AI against the operational, security, privacy, and compliance risks that increasingly come with it.

Security Stop-Press: Staff Increasingly Relaxed About Workplace Fraud

New research from Cifas suggests some UK employees are becoming increasingly comfortable with workplace fraud and insider threats.

The survey found that 24 per cent believed it was acceptable to secretly work for a competitor, while 13 per cent admitted selling, or knowing someone who had sold, company login details.

Cifas warned the findings point to “shifting norms, blurred boundaries, and rising risks to organisational integrity”, with insider threats becoming a growing concern for employers.

For businesses, the report reinforces the need for stronger access controls, staff training, and better monitoring of insider risks, as cybercriminals increasingly target employees as a route into company systems and sensitive data.

Each week we bring you the latest tech news and tips that may relate to your business, rewritten in a jargon-free style.
