Featured Article : ChatGPT Gets Image Upgrade and Fewer Restrictions

ChatGPT’s image generation tools have just received a major upgrade, and while everyone is busy turning politicians into dreamy ‘Studio Ghibli’ characters, OpenAI’s (quiet) policy changes may prove to be the bigger story.

A New Visual Brain for ChatGPT

At the centre of this update is GPT-4o, OpenAI’s new omnimodal model. Unlike previous tools like DALL·E (which bolted image generation onto ChatGPT from the outside), GPT-4o builds it into the core of the chatbot. The result appears to be faster, sharper, and more context-aware image outputs.

This means ChatGPT can now:

– Generate photorealistic images directly from prompts.

– Follow detailed instructions more precisely (including rendering readable text in images).

– Edit existing pictures, including transforming or “inpainting” people and objects.

The upgrade is currently available for Pro users paying $200/month, with OpenAI promising access will “soon” be extended to Plus and free-tier users as well as API developers.

Users Experimenting

It’s been reported that users have been experimenting with the upgraded version, including uploading selfies and asking ChatGPT to transform them into Pixar-style avatars or place them in scenic landscapes. Others have reportedly been feeding the tool prompts like “Donald Trump in a Ghibli forest” (a dreamy, nature-filled setting inspired by Studio Ghibli films) or “South Park characters debating in Parliament” and getting eerily convincing results.

Why Studio Ghibli Is Suddenly Everywhere

It seems it didn’t take long for social media to explode with whimsical, pastel-hued images that seemed plucked straight from ‘My Neighbour Totoro’ or ‘Spirited Away’, i.e. two of Studio Ghibli’s most iconic animated films. The likely reason is that the new GPT-4o model was trained on a wide range of styles, including those reminiscent of iconic animation.

While OpenAI insists it avoids mimicking the work of any living artist (it actively blocks prompts that explicitly request such imitations), it seems clear to many that the model can now reproduce stylistic “vibes” with uncanny accuracy. This explains how the internet managed to flood X and Instagram with Ghibli-inspired memes within days.

However, this artistic mimicry has raised some eyebrows. A resurfaced 2016 video of Studio Ghibli co-founder Hayao Miyazaki calling AI-generated art “an insult to life itself” has been doing the rounds again, reigniting the debate around AI and artistic originality.

Soften the Rules, Sharpen the Debate

Perhaps the most quietly controversial part of this launch is what OpenAI removed. GPT-4o comes with a notably relaxed set of safeguards around image generation. While safety features still exist, especially for minors and violent or abusive content, the rules have actually changed quite significantly. For example, ChatGPT can now:

– Generate images of public figures like Elon Musk or Donald Trump.

– Depict racial features and body characteristics on request.

– Show hateful symbols (like swastikas) if done in educational or neutral contexts.

– Mimic the aesthetic of well-known studios (e.g. Pixar, Ghibli), though not named living artists.

Joanne Jang (OpenAI’s model behaviour expert) has explained the move as a shift from blanket refusals to more nuanced moderation, saying, “We’re focusing on preventing real-world harm,” and “not just avoiding discomfort.”

For example, ChatGPT used to reject prompts like “make this person heavier” or “add Asian features,” assuming them to be inherently offensive. Now, such requests are allowed if they are presented in a neutral or user-specific context.

This reflects OpenAI’s broader philosophy that censorship by default may suppress creativity or unfairly judge user intent. As Jang wrote in a recent blog post, “Ships are safest in the harbour,” adding “but that’s not what ships — or models — are for.”

Safety Isn’t Gone, It’s Just Different

That’s not to say the floodgates are wide open. Despite the apparent loosening of rules, the new image generator still uses a layered safety stack that includes:

– Prompt blocking (for inappropriate text before image generation).

– Output blocking (for images that breach policy after they’re made).

– A sophisticated moderation system, including child safety classifiers.

– Refusal triggers for prompts involving living artists or sexualised content.
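The layered approach described above can be imagined as a simple gating pipeline, where each stage can refuse independently. The sketch below is purely illustrative, not OpenAI's actual implementation; the classifier functions, labels, and blocked-term list are hypothetical placeholders.

```python
# Illustrative sketch of a layered image-safety stack (hypothetical,
# not OpenAI's real system): prompt blocking runs before generation,
# output blocking runs after, and each stage can refuse on its own.

BLOCKED_PROMPT_TERMS = {"sexualised", "in the style of a living artist"}  # placeholder list

def prompt_blocked(prompt: str) -> bool:
    """Stage 1: refuse inappropriate text before any image is generated."""
    return any(term in prompt.lower() for term in BLOCKED_PROMPT_TERMS)

def output_blocked(image_labels: set) -> bool:
    """Stage 2: refuse images that breach policy after they're made.
    `image_labels` stands in for an output classifier's findings."""
    return bool(image_labels & {"minor_realistic", "sexual_content"})

def moderate(prompt: str, image_labels: set) -> str:
    """Run the full stack and report where (if anywhere) the request fails."""
    if prompt_blocked(prompt):
        return "refused_at_prompt"
    if output_blocked(image_labels):
        return "refused_at_output"
    return "allowed"
```

For example, a benign prompt with no flagged output labels would pass both stages and return "allowed", while a flagged prompt never reaches generation at all.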

Also, unlike earlier tools, GPT-4o seems especially cautious around children. It won’t allow editing uploaded images of realistic children and applies stronger classifiers to detect potential misuse involving minors.

Performance metrics from OpenAI’s system card also show the updated safety stack performs better than previous versions, especially in areas like gender and racial diversity. It’s been reported that in one test, 4o image generation produced diverse outputs 100 per cent of the time for group prompts compared to 80–89 per cent for DALL·E 3.

The Benefits for Users

The new capabilities have clear commercial potential. Designers, marketers, developers and content creators can now produce custom visuals, mockups, product renders, and marketing illustrations with minimal friction. For example:

– A property developer could quickly visualise housing concepts in different styles.

– An education provider could create bespoke, text-rich diagrams for course materials.

– A social media agency could mock up viral meme formats in seconds.

With enhanced control over composition, text, and detail, plus the ability to edit and iterate on existing images, GPT-4o appears to be taking AI image generation a step closer to mainstream creative workflows. With API access rolling out, this could also give rise to entirely new applications built on top of GPT-4o, from instant avatar builders to interior design preview tools.

Risks, Especially Around Trust and IP

Despite the excitement, this change is far from risk-free. For example, allowing depictions of public figures or sensitive racial and political symbols opens the door to misinformation, reputational damage, and potential misuse.

Even if OpenAI prohibits images that “praise extremist agendas,” critics worry that fringe users could find ways to skirt those limits or that mainstream users might be unaware of the implications of what they’re creating.

There’s also the ever-present issue of copyright. Training on “publicly available” data and corporate partnerships (e.g. with Shutterstock) may cover some ground but as Studio Ghibli-style memes go viral, the question of fair use resurfaces.

For businesses, this raises two key concerns:

1. Reputational risk. Could AI-generated visuals be misattributed, manipulated, or used maliciously?

2. Legal exposure. Could brand-generated content be seen as infringing on artistic or personal likeness rights?

As with previous AI developments, what’s technically possible may soon outpace what’s legally clear or culturally acceptable.

What It Means for the Wider AI Landscape

OpenAI’s move comes just weeks after Google faced backlash over Gemini’s historical inaccuracies and image bias, and amid growing political scrutiny. In the US, Republican lawmakers are probing tech firms over alleged censorship, a backdrop that likely informed OpenAI’s more libertarian-leaning policy update.

By relaxing its image generation rules now, OpenAI seems to be signalling that it trusts both its technology and its users enough to let go of some of the training wheels, and is (presumably) willing to weather the inevitable criticism if it means retaining or gaining ground against rising competitors like Meta AI.

What Does This Mean For Your Business?

OpenAI’s latest update appears to have placed ChatGPT on a new creative footing – one that blends impressive technical progress with a deliberately looser grip on content control. In doing so, the company looks to be steering away from the more cautious posture that has defined much of the AI sector to date (with the notable exception of Grok). Whether that’s a bold move or a risky one depends very much on how the public, regulators, and commercial users respond in the months ahead.

For UK businesses in particular, the ability to generate high-quality, editable, and highly specific imagery using a chatbot could significantly reduce production times for everything from ad campaigns to training materials. The tools now on offer may make it far easier for SMEs and creative agencies to iterate visually without relying on third-party design services, a potential leveller in an increasingly competitive digital landscape. For marketing teams, the prospect of generating branded content, explainer graphics, or social media visuals with a single prompt is clearly appealing.

However, those same businesses will need to tread carefully. As copyright debates heat up and content provenance tools remain in their early stages, there’s a real risk that missteps (however unintentional) could carry legal or reputational consequences. The temptation to experiment with viral visual styles or public figures is likely to be strong, but so will the scrutiny. Companies looking to incorporate these tools into their workflows will likely need internal guidance, or even new policies, around AI-assisted visual content.

Meanwhile, for artists, regulators, and platform providers, the questions are only getting thornier. What counts as fair use in an age of style mimicry? Who decides whether a request is educational, offensive, or somewhere in between, and how do companies like OpenAI draw policy lines that are both ethically sound and commercially sustainable? The fact that ChatGPT now permits the creation of imagery that was off-limits just weeks ago, including depictions of politicians, sensitive racial traits, and even controversial symbols, appears to reflect a broader change in how AI firms are interpreting their responsibilities.

In truth, it may be the AI market itself that forces the next evolution. With rivals like Google and Meta pursuing their own image-generation models and competing for developer mindshare, the pressure is on to deliver not just safety, but usability. OpenAI’s gamble appears to be that with the right blend of user freedom and behind-the-scenes safeguards, it can satisfy both the creative crowd and the cautious boardroom.

Tech Insight : Your Guide to Choosing the Right AI Model in 2025

In this week’s tech insight, here’s a handy guide to the major (and now plentiful) generative AI models available, including what they do best, and how to access them.

Which One Is Right For You?

The AI boom is showing no signs of slowing. Whether it’s for writing, coding, design, research, or customer support, generative AI tools have become an everyday business asset, or at the very least, something worth exploring.

However, with so many models now flooding the market, it’s becoming harder to tell which ones genuinely deliver value and which are simply overhyped. Each promises cutting-edge performance, but the real differences often only become clear once you’ve tested them in practice.

With this in mind, here’s our plain-English guide to the standout generative AI models available to UK businesses in 2025. Here you can discover (if you haven’t already) what they’re particularly good at, where to find them, the pricing, and any recent updates (and controversies).

OpenAI: GPT-4o, GPT-4.5 ‘Orion’, Sora, Operator & More

OpenAI is where many people were first introduced to generative AI, and it remains the best-known provider, offering multiple models for different use cases. These include:

GPT-4o (ChatGPT): The default for most business users. Handles writing, analysis, basic reasoning, and now image generation too. Available on chat.openai.com.

Free Tier: GPT-3.5 only.

ChatGPT Plus: £20/month for GPT-4o and image tools.

GPT-4.5 ‘Orion’: A more advanced version with stronger ‘world knowledge’. Currently only available with OpenAI’s £160/month Pro subscription.

Sora: A new text-to-video model capable of creating entire scenes from prompts. Still experimental and only available on paid plans.

Operator: An AI ‘agent’ designed to take actions on your behalf, like ordering stock or booking meetings. Available on Pro (£160/month), but early testers report unpredictable behaviour.

Deep Research: Designed for serious research with citations. Also Pro-only. Hallucinations are still an issue.

o3-mini & 4o-mini: Cheaper, faster reasoning models optimised for maths and code. Available for free or low cost on ChatGPT.

Pros: Mature, fast, widely integrated. Huge plugin and extension ecosystem.

Cons: Some of the most powerful tools are behind expensive paywalls. Occasional hallucinations and inconsistencies.

Google: Gemini 2.5, Deep Research & AI Premium Tools

Google’s AI suite, now rebranded under the Gemini umbrella, focuses mainly on knowledge tasks, coding, and long-context reasoning.

Gemini 2.5 Pro Experimental: Excels at code generation and reasoning. Slightly underperforms Claude 3.7 on some benchmarks.

Gemini Deep Research: Summarises large volumes of search data with citations.

Both require a Google One AI Premium subscription (£19.99/month), which also grants access to Gemini in Docs, Gmail, and other Google apps. See gemini.google.com.

Pros: Integrated across Google’s ecosystem. Long 2 million-token context.

Cons: Still looks like it’s catching up to OpenAI on certain creative benchmarks. Perhaps not as strong at natural conversation.

Anthropic: Claude 3.7 and Computer Use

Anthropic’s Claude models have quietly become the insider’s favourite. Models here include:

Claude Sonnet 3.7: A hybrid reasoning model. Can produce fast responses or take longer to ‘think’, depending on the task. Available free at claude.ai or via API.

Pro Plan: $20/month (about £17) gives faster access and priority use.

Computer Use: A more experimental agent designed to operate your machine. Still in beta. Billed by token usage.

Pros: Strong at coding, clear writing, and safe outputs. More control over behaviour.

Cons: Doesn’t generate images. Agents still under development.

xAI: Grok 3 and the Acquisition of X

Elon Musk’s xAI is pitching itself as the politically neutral, open challenger to OpenAI. However, it also appears to be playing a longer game. Models include:

Grok 3: Strong on maths, science and factual knowledge. Integrated into Musk’s X platform (formerly Twitter). Requires an X Premium+ subscription at $50/month (about £39).

Aurora: xAI’s image generator, capable of photorealistic visuals.

Also, in recent news (just this month) Musk’s xAI acquired his X (Twitter) platform in an all-stock deal. Musk claims it’s about combining data, distribution and compute. However, some commentators have suggested that the real goal may be access to X’s vast post database to supercharge AI training. With X’s 600 million users, xAI now has both data and a delivery vehicle.

According to some other commentators, there may also be a financial angle to the deal related to possible troubles at Tesla. For example, with Tesla facing increased scrutiny over mounting debt and loan repayments, and with sales apparently affected by a backlash over Musk’s involvement with President Trump’s administration (and DOGE), Musk’s empire may need fresh capital and investor confidence. Folding X into xAI, now valued at $80 billion, may allow for more aggressive fundraising and help shield the broader group from Tesla’s recent turbulence. It may, therefore, be as much a strategic hedge as a technical merger.

Pros: Fast-growing, quite good at logic tasks. Trained on unique datasets.

Cons: Limited availability outside X. Some controversy around political alignment and data use.

Meta: Llama 3.3 70B

Meta’s Llama series is aimed at developers and businesses looking for open source AI models.

Llama 3.3 (70B): Free and open source. Great for running on your own servers or fine-tuning in-house. Ideal for companies concerned with privacy.

Access it via Meta’s GitHub or model hubs like Hugging Face.

Pros: Free, transparent, customisable.

Cons: Needs technical setup. No hosted version available from Meta.

Cohere: Aya Vision & Command R+

Canadian firm Cohere focuses on language models optimised for enterprise and multilingual use.

Aya Vision: Multimodal. Great for image captioning and image Q&A, especially in non-English languages. Available for free via WhatsApp.

Command R+: Excels at RAG (retrieval-augmented generation) — good for firms needing AI to cite reliable sources. More info at cohere.com.

Pros: Multilingual strength. Strong on RAG.

Cons: Not as widely used yet. Hallucination issues persist in complex queries.
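For readers less familiar with the RAG pattern that Command R+ is optimised for, the core idea is to retrieve relevant source documents first and then ask the model to answer using (and citing) only those sources. The sketch below is a toy illustration with a keyword-overlap retriever; a production system would use embedding-based vector search and a real model call.

```python
# Minimal retrieval-augmented generation (RAG) sketch. The retriever is a
# toy keyword-overlap scorer standing in for a real vector search; the
# document store and ids are invented for illustration.

DOCS = {
    "doc1": "Cohere is a Canadian AI firm focused on enterprise language models.",
    "doc2": "RAG grounds model answers in retrieved source documents.",
}

def retrieve(query: str, k: int = 1) -> list:
    """Score docs by words shared with the query; return the top-k doc ids."""
    q_words = set(query.lower().split())
    scored = sorted(
        DOCS,
        key=lambda d: len(q_words & set(DOCS[d].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Assemble the augmented prompt a RAG system would send to the model,
    instructing it to answer only from the retrieved sources, with citations."""
    hits = retrieve(query)
    context = "\n".join(f"[{d}] {DOCS[d]}" for d in hits)
    return f"Answer using only these sources, citing their ids:\n{context}\n\nQ: {query}"
```

The value for firms needing reliable citations is that the model's answer can be checked against the retrieved passages rather than taken on trust.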

Stability AI: Stable Virtual Camera

Stability AI, known for Stable Diffusion, has pushed into 3D image generation.

Stable Virtual Camera: Turns 2D images into simulated 3D scenes and angles. Available on Hugging Face.

Pros: Innovative visuals. Good for creative use cases.

Cons: May struggle with complex or moving subjects. Research-use only for now.

Other Contenders Worth Watching

DeepSeek R1 (China): Impressive code and maths abilities, but data privacy risks due to Chinese government links.

Mistral Le Chat (France): Fast response AI with journalism tie-ins. Solid but prone to errors.

Alibaba Qwen: High benchmark scores in coding, but trust and censorship remain concerns.

Again, these models can be accessed via huggingface.co or the developers’ own sites.

What Does This Mean For Your Business?

The sheer volume of AI models / generative AI platforms now available, each claiming unique strengths, can make decision-making difficult. However, the good news is that these tools are maturing fast, with clearer use cases, pricing tiers, and performance benchmarks emerging.

Not surprisingly, OpenAI remains a dominant force, especially for content generation and general-purpose reasoning. However, it’s no longer the only serious player. Google’s Gemini, Anthropic’s Claude, and xAI’s Grok all offer increasingly credible alternatives, some with more transparency, others with deeper integration or specialisation in logic-heavy tasks. Open-source options like Meta’s Llama or Cohere’s RAG-optimised models give businesses more flexibility, particularly where privacy, cost, or fine-tuning are concerns.

The broader AI arms race is also having knock-on effects across sectors. For example, Musk’s consolidation of X and xAI could indicate a push towards tighter control of data, distribution, and development. While that may bring faster innovation, it also raises questions about data ownership, platform dependency, and regulatory oversight, all of which UK stakeholders, from policymakers to investors, will need to monitor closely.

The message for businesses, therefore, seems to be don’t get distracted by the hype. Instead, focus on what you actually need AI to do. Whether that’s speeding up internal workflows, improving customer service, or enhancing research and development, the right model is out there, but it may take some experimenting to find it. With the pace of change showing no sign of slowing, those who take the time to understand the landscape now will be better positioned to benefit from it in the months ahead.

Tech News : Google Unveils AI-Powered Holiday Planning Features

Google is rolling out a suite of new features (many powered by generative AI) across its core platforms to help users plan their summer holidays with greater ease and personalisation.

Tools For Inspiration and Planning

It appears that the update, spanning Search, Maps, Lens, and Gemini, is designed to keep Google front and centre as more travellers begin to explore AI tools like ChatGPT for trip inspiration and planning.

With updates that offer real-time itinerary suggestions, hotel price tracking, custom trip planners and even the ability to turn screenshots into mapped-out adventures, Google is doubling down on AI’s potential to redefine how we prepare for getaways.

Smarter Search With AI Overviews for Trip Planning

At the heart of the rollout is the expansion of AI Overviews in Google Search, which is now capable of generating detailed travel itineraries based on simple queries.

For example, a search like “create an itinerary for Costa Rica with a focus on nature” will generate a multi-day schedule, featuring activity suggestions, restaurant recommendations, and key places to visit. Users will also be able to see user-contributed photos and reviews, all displayed on an expandable map.

These AI Overviews, which are powered by a customised Gemini model, are currently available only to U.S.-based users in English, on both mobile and desktop (with no need to sign up for Search Labs). Once an itinerary is generated, users can export it to Docs, Gmail, or save it as a custom list in Google Maps for easy access on the go.

Increased User Engagement

This feature builds on Google’s AI Search experiments in Search Labs, which the company says have already generated billions of AI Overviews. According to Google, these overviews have increased user engagement with Search and boosted traffic to a wider range of websites. As Google says on its blog: “People like that they can get both a quick overview of a topic and links to learn more,” and that “We’ve found that with AI Overviews, people use Search more, and are more satisfied with their results.”

Hotel Price Tracking Goes Global

Another standout update from Google is the launch of hotel price tracking, which appears to be a logical expansion of the popular Google Flights alerts.

For example, users browsing google.com/hotels can now toggle on a price tracking option for specific dates and destinations. Once activated, Google will send an email alert if hotel prices drop significantly, based on your chosen filters (such as star rating, amenities or beach proximity).

The feature is now live globally across mobile and desktop browsers, thereby offering a new way for those planning holidays and getaways to (hopefully) save money during the booking process.
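The alerting logic behind such a feature can be imagined as a simple threshold check against the price recorded when tracking was switched on. This is a hypothetical sketch, not Google's actual system, and the 10 per cent "significant drop" threshold is an assumption for illustration.

```python
# Hypothetical sketch of hotel price-drop alerting (not Google's system):
# compare the latest price with the price at the time tracking was enabled
# and flag drops of at least 10% (an assumed threshold) for an email alert.

def significant_drop(tracked_price: float, current_price: float,
                     threshold: float = 0.10) -> bool:
    """Return True if the price has fallen by at least `threshold` (a fraction)."""
    if tracked_price <= 0:
        return False
    return (tracked_price - current_price) / tracked_price >= threshold

def check_alerts(watches: dict, latest: dict) -> list:
    """Return the hotel names whose latest price should trigger an alert email.
    `watches` maps hotel name -> price when tracking started; `latest` maps
    hotel name -> most recently seen price."""
    return [name for name, base in watches.items()
            if name in latest and significant_drop(base, latest[name])]
```

In practice the interesting engineering is in the filters (star rating, amenities, beach proximity) and in deciding what counts as "significant", but the core check is this simple.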

Maps Gets Smarter with Screenshot Integration

Google Maps is also getting a clever new upgrade. A common bugbear for travellers using Google Maps up until now has been the difficulty of managing the dozens of screenshots taken while researching holidays, from must-visit restaurants to hotel options and quirky local attractions. However, this new update means Maps can turn that visual clutter into something genuinely useful.

For example, using Gemini’s image recognition capabilities, Google Maps will identify landmarks, businesses and attractions in users’ screenshots (provided the user grants the app access to their photos). From there, users can review and save these locations to a travel list, which will appear as pins on the user’s map. This feature is rolling out this week on iOS in English in the U.S., with Android support “coming soon,” according to Google.

The Gemini Gems Personal Trip Planner Is Now Free for All

Gemini, Google’s AI assistant, is being positioned as a travel buddy in its own right. A new feature called Gems allows users to create personalised AI agents, or “Gems”, that can help with specific tasks, such as trip planning, itinerary curation, or packing lists.

For example, a user could create a Gem that specialises in budget-friendly European trips, or one that only recommends dog-friendly hotels. Users can set up these Gems via the “Gems manager” on desktop, and they’re now available free of charge.

Also, if users aren’t sure where to start, Gemini can help in drafting the setup using a “magic wand” tool that expands on a basic idea. As Google says about the feature on its blog: “Now you have a travel guide at your fingertips to help you pick a destination, find restaurants in a new city or even suggest what to pack”.

Lens As A Pocket Tour Guide

Also in the new mix is an update to Google Lens, the company’s AI-powered visual search tool. While it’s long been able to translate signs and menus, Lens now supports AI Overviews too.

For example, if users point their phone’s camera at a building, landmark or mysterious object and ask questions like “what is this?”, the tool will deliver AI-generated insights, along with links to relevant online resources.

This may be particularly useful for cultural tourism, e.g. spotting unusual architectural features in city streets or identifying historical plaques while wandering through old European towns.

Lens’ new AI Overviews are already available in English and will soon expand to languages including Japanese, Korean, Portuguese, and Spanish, in most countries where AI Overviews are active.

Why Now? Google’s Bid to Stay Ahead

It seems it’s no coincidence that Google is launching these travel-focused AI features just ahead of the summer rush, and at a time when travellers are increasingly turning to generative AI platforms like ChatGPT for itinerary building, travel hacks and destination research.

By integrating Gemini deeply into Search, Maps and Lens, Google appears to be aiming to maintain its dominance as the go-to tool for everyday planning. There’s also likely a broader strategic play: in a world where large language models can provide well-rounded answers to complex queries, Google needs to prove it can do more than just return links.

Google’s generative AI push, particularly through the custom Gemini model embedded in Search, may therefore be an attempt to leapfrog competitors by offering deeper context, personalisation and functionality, all while keeping users within Google’s ecosystem.

What It Means for Users and Google’s Competitors

From the user’s perspective, the appeal is clear, i.e. less time spent bouncing between tabs and apps, and more coherent, streamlined planning experiences. For travellers in particular, it removes much of the friction that can turn holiday prep into a chore.

However, the rollout is heavily U.S.-centric for now, with many features restricted to English queries and limited platforms. Google has promised broader global access “soon,” but it remains to be seen how quickly that happens, and how seamlessly these tools will translate to other markets and languages.

For competitors like OpenAI, Expedia and other travel-focused apps, Google’s integration of AI into Search and Maps could be seen as setting a new bar. In other words, the battle is no longer just about who can answer your questions, but who can help you do something with those answers, from booking flights to building shareable, actionable itineraries.

What Does This Mean For Your Business?

By weaving AI more tightly into the platforms people already use (i.e. Search, Maps, Lens and Gemini), Google is effectively trying to become not just the place you go to find information, but the place you go to do something with it. For travellers, the convenience is likely to be attractive, but for Google, the strategic value runs deeper. This isn’t just about helping you book a hotel or plan a day out; it’s about keeping you inside Google’s ecosystem from inspiration to itinerary, thereby strengthening its role as your go-to AI assistant.

That said, the rollout raises some questions around accessibility and reach. For example, at present, many of the most powerful new features are only available in the U.S. (and just in English), meaning UK users will need to wait a little longer to take full advantage. There’s also the wider concern of how this affects travel publishers, bloggers and third-party platforms that have long relied on Google Search traffic. While Google insists AI Overviews are increasing clicks to a broader range of sites, the company’s tighter grip on the journey from question to action inevitably puts pressure on other players in the space.

For UK businesses, particularly those in travel, hospitality and tourism, the implications are twofold. On the one hand, the shift opens up new opportunities to reach audiences earlier in their decision-making journey, especially if their content is well-optimised and engaging enough to surface within AI Overviews. On the other, it places more power in Google’s hands, with brands having to work harder to stand out in a results page that may be increasingly curated by AI rather than traditional search rankings.

It seems, therefore, that Google’s summer AI upgrades represent more than a seasonal refresh, i.e. they’re a sign of the company’s wider ambition to redefine search, streamline planning, and keep pace with the generative AI boom. For users, the trade-off is between ease and control. For businesses, it’s a question of visibility and adaptability. Either way, the travel planning landscape just got a lot more intelligent, and a lot more competitive.

Tech News : Croydon First Place To Get Permanent Facial Recognition Cameras

It’s been reported that Croydon is set to become the first place in the UK (and possibly the democratic world) to host permanent live facial recognition (LFR) cameras on its streets.

Two Fixed Units This Summer

The Metropolitan Police has confirmed the installation of two fixed units in the town centre this summer, the first fixed deployment of the technology in the UK.

What Is Live Facial Recognition and How Does It Work?

Live facial recognition (LFR) technology uses cameras to scan the faces of people passing through a defined area in real time. The images are instantly compared against a police watchlist, which may include suspects, wanted criminals, vulnerable individuals, and even victims of crime.

If a match is found, an alert is sent to nearby officers who are on standby and ready to make an arrest. If there is no match, or the alert turns out to be a false positive, the captured image is deleted.

The Met insists the system is accurate, quoting a false match rate of less than one percent during its mobile van trials across London. However, as this tech becomes fixed and potentially more widespread, questions are being raised about its reliability, legality, and ethical implications.
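A "less than one per cent" false match rate sounds small, but at the scale of a busy high street it can still imply a steady stream of wrongful alerts. The following is a rough, illustrative base-rate calculation using assumed footfall figures, not Met data.

```python
# Rough base-rate illustration (assumed numbers, not Met figures):
# even a small false match rate produces many wrongful alerts once
# tens of thousands of faces are scanned.

def expected_false_matches(faces_scanned: int, false_match_rate: float) -> float:
    """Expected number of wrongful alerts for a given volume of scans."""
    return faces_scanned * false_match_rate

# Example: 20,000 passers-by in a day scanned at a 1% false match rate
# (the upper bound quoted by the Met) gives 200 expected wrongful alerts.
daily_alerts = expected_false_matches(20_000, 0.01)
```

This base-rate effect is one reason critics argue that a headline accuracy figure says little on its own: what matters is how many people are scanned, and what happens to those who are wrongly flagged.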

Why Croydon, and Why Now?

Croydon has long struggled with violent crime. For example, the borough recorded more than 10,000 violent offences in a single year, making it one of London’s most crime-plagued areas. High-profile tragedies like the fatal stabbing of schoolgirl Elianne Andam outside the Whitgift Centre last year have amplified public concern.

It seems that this may well be the reason why the Met has chosen Croydon as the launch site for permanent LFR deployment. It’s been reported that the fixed cameras will be installed on North End and London Road (both busy pedestrianised streets) and mounted on lampposts and buildings. Crucially, the Met says the cameras will only be switched on when officers are present and ready to respond.

According to Superintendent Mitch Carr, the move will make LFR a “business as usual” policing tool, rather than relying on the availability of roving LFR vans. “It will give us much more flexibility around the days and times we can run the operations,” he told community leaders earlier this month.

What Are the Claimed Benefits?

The Met claims that the technology is already proving its worth. For example, last year, mobile facial recognition units reportedly led to over 500 arrests across London, including the identification of suspects wanted for stalking, domestic abuse and rape. In Croydon alone, about 200 arrests were linked to LFR use, including at least two alleged rapists.

Supporters argue that fixed cameras will enhance public safety and act as a powerful deterrent to criminals. Croydon South MP Chris Philp, who also serves as the Conservative Shadow Home Secretary, called the move a “logical next step”. In a recent interview with The Times he was quoted as saying: “Those few people opposing this technology need to explain why they don’t want wanted criminals to be arrested.” It’s been reported that for some residents, the technology is a welcome intervention.

But What Are the Critics Saying?

Despite the police’s reassurances, the move has ignited fierce opposition from privacy campaigners and civil liberties groups.

Big Brother Watch, a leading advocacy group, is particularly scathing. Its interim director, Rebecca Vincent, called the Croydon deployment “an alarming expansion of the surveillance state” and part of a “steady slide into a dystopian nightmare”.

She added: “It also underscores the urgent need for legislative safeguards on LFR, which to date has not been addressed in any parliamentary legislation.”

One of the key criticisms is the absence of clear regulation. There is no UK law specifically governing the use of LFR technology, meaning that police forces are left to write their own policies on how it should be used.

Big Brother Watch also points to real-world cases where the camera systems have misidentified people. One such case involved Shaun Thompson, an anti-knife crime campaigner who was wrongly flagged as a suspect at London Bridge station. He was detained for nearly 30 minutes despite presenting multiple forms of ID proving he was not the wanted individual.

Madeleine Stone, Senior Advocacy Officer at the group, said: “Everyone gets something wrong sometimes, but what happens when the algorithm gets it wrong? Who is responsible then?”

Even more concerning, critics say, is the makeup of the ‘police watchlists’. These reportedly include not only suspects but also victims and vulnerable individuals, thereby blurring the line between surveillance and profiling.

Legality and Oversight Still in Question

The introduction of permanent facial recognition cameras comes at a time when the legal framework around the technology remains unclear. For example, a House of Lords committee recently expressed “deep concern” over its unregulated use and campaigners have called for Parliament to intervene.

In contrast, the government seems to be doubling down. The current Labour administration recently launched a £20 million fund to expand LFR use across UK police forces.

Despite this political momentum, critics remain unconvinced. Big Brother Watch recently filed legal action in response to what it calls an “unprecedented expansion” of facial recognition surveillance in both public and private sectors.

The group has warned that the Cardiff trial during the Six Nations tournament shows that mass surveillance doesn’t always yield results. In that instance, temporary LFR cameras deployed throughout the city centre scanned over 160,000 people yet failed to identify a single wanted suspect, and no arrests were made.
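Those figures also show why the quoted accuracy matters so much at scale. As a back-of-envelope illustration (the rates below are assumptions for the sake of arithmetic, not the Met’s own figures, which are stated only as “less than one percent”), even small false match rates produce a meaningful number of wrong alerts in a crowd the size of the Cardiff one:

```python
# Illustrative arithmetic: expected false alerts when scanning a large crowd.
# The false match rates used here are assumptions, not official figures.

def expected_false_matches(people_scanned: int, false_match_rate: float) -> int:
    """Expected number of incorrect alerts, rounded to the nearest whole person."""
    return round(people_scanned * false_match_rate)

scanned = 160_000  # roughly the number scanned during the Cardiff trial

for rate in (0.01, 0.001, 0.0001):  # 1%, 0.1%, 0.01%
    print(f"rate {rate:.2%}: ~{expected_false_matches(scanned, rate)} false alerts")
```

At a 0.1% false match rate, that is around 160 people wrongly flagged in a single event, which is the context in which cases like Shaun Thompson’s become more than anecdotes.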

Is the Technology Effective or Just Theatrical?

The core question for many people remains whether permanent facial recognition cameras will genuinely help tackle crime or whether the move is more about public reassurance and political point-scoring, i.e. more of a theatrical gesture than a real, practical solution.

Press reports about the subject (e.g. in the Metro) highlight how some Croydon residents clearly welcome the technology, particularly after years of rising violent crime and high-profile incidents in the town centre. With concerns about gang activity, knife crime and anti-social behaviour dominating local conversation, it’s no surprise that many see any effort to increase safety as a step in the right direction.

However, doubts persist. In a borough where offenders often wear masks, balaclavas or hoodies to obscure their identities, it’s unclear how effective facial recognition will actually be in real-world conditions. Some commentators have also noted that the fixed locations of the new cameras may work against them: once people know where the cameras are, avoiding them could be as simple as taking a different route.

What This Could Mean For Your Business

The decision to install permanent facial recognition cameras in Croydon isn’t just a local policing initiative – it’s the first real test of whether this kind of surveillance can be embedded into everyday British life. With no specific laws governing its use, and police forces writing their own rules, the move exposes a major gap in oversight that lawmakers have so far failed to address.

If the technology proves effective, it could pave the way for wider adoption in other towns and cities, which would bring facial recognition into regular public and commercial spaces. For UK businesses, particularly those in high-footfall retail or transport hubs, that might mean closer partnerships with police or even the rollout of their own systems. However, this could raise fresh challenges around data protection, customer consent, and reputational risk, especially as public awareness of privacy rights continues to grow.

For residents and civil liberties groups, the concern is not just how the technology works, but also how it’s controlled, and who gets to decide where the limits lie. As Croydon essentially becomes the UK’s surveillance testbed, its experience will likely shape future policy, public trust, and the broader role of biometric surveillance in Britain’s urban life. Whether it’s a breakthrough or a step too far, the rest of the country is now watching closely.

Company Check : NHS Supplier Fined £3m Over 2022 Ransomware Failures

A software provider to the NHS has been fined £3.07 million after serious security lapses allowed hackers to steal sensitive personal data in a 2022 ransomware attack.

A Breach With Real-World Impact

The penalty, issued by the Information Commissioner’s Office (ICO), follows a detailed investigation into Advanced Computer Software Group Ltd. In August 2022, the company’s health and care subsidiary was targeted by cybercriminals linked to the LockBit ransomware group. The attackers exploited a customer account that lacked multi-factor authentication (MFA), gaining access to systems used across NHS services.

In total, the personal data of 79,404 individuals was compromised. This included extremely sensitive information such as care plans and (in 890 cases) detailed instructions for entering the homes of vulnerable patients receiving in-home care.

Examples of the attack’s serious effects include:

– The NHS 111 helpline was forced to revert to manual operations.

– Health professionals across the country were locked out of patient records for extended periods.

– Routine services were thrown into disarray, with some systems offline for weeks.

ICO Says Security “Fell Seriously Short”

The ICO concluded that Advanced Computer Software Group Ltd had failed to implement basic cybersecurity hygiene expected of an organisation handling high-risk data. While some systems were protected by MFA, coverage was patchy, leaving major entry points exposed. Investigators also found gaps in vulnerability scanning and weaknesses in the company’s patch management processes.
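The gap the ICO describes (MFA enabled on some accounts but not others) is exactly the kind of exposure a routine audit can surface before an attacker does. A minimal sketch of such a check, using entirely hypothetical account records and field names:

```python
# Flag externally reachable accounts that lack multi-factor authentication.
# The account records and field names below are hypothetical examples.

accounts = [
    {"user": "clinician-01", "mfa_enabled": True,  "external_access": True},
    {"user": "support-07",   "mfa_enabled": False, "external_access": True},
    {"user": "batch-job",    "mfa_enabled": False, "external_access": False},
]

def unprotected_entry_points(accounts):
    """Accounts reachable from outside the network with no MFA: the highest-risk gap."""
    return [a["user"] for a in accounts
            if a["external_access"] and not a["mfa_enabled"]]

print(unprotected_entry_points(accounts))  # in the Advanced case, one such account was enough
```

The point of the sketch is the shape of the check, not the code: any account that is both externally reachable and MFA-free is an open entry point, however complete coverage looks elsewhere.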

Information Commissioner John Edwards said Advanced’s security “fell seriously short of what we would expect from an organisation processing such a large volume of sensitive information.” He added: “People should never have to think twice about whether their medical records are in safe hands.”

Fine Halved From £6m to £3m

The ICO originally proposed a fine of £6.09 million but ultimately reduced the figure by half. The discount followed a voluntary settlement in which Advanced accepted the findings, agreed not to appeal, and worked closely with the National Cyber Security Centre (NCSC), the National Crime Agency (NCA), and NHS partners in the wake of the breach.

The regulator also acknowledged the company’s efforts to limit the damage and mitigate risks to affected individuals, which contributed to the final penalty being set at £3,076,320.

A Data Processor Under Pressure

As a data processor acting on behalf of healthcare providers, Advanced Computer Software Group Ltd was responsible for protecting information it handled but did not own. That legal duty, the ICO stressed, does not allow for shortcuts. The ICO highlighted how it was not enough to have security measures “in progress” but that they needed to be fully implemented, especially given the volume and sensitivity of the data involved.

This attack, enabled by a single unsecured login, revealed how thinly spread protections can lead to catastrophic consequences when threat actors find a gap.

More Than Just a Cyber Incident

It seems that the fallout in this case extended far beyond IT systems. For example, the data accessed by attackers contained private information used daily by carers, clinicians, and emergency staff. In some cases, the stolen data may have revealed access instructions to individuals’ homes, which is an unprecedented breach of trust and safety for those affected.

For many observers, this incident demonstrated how a breakdown in basic cyber hygiene can translate directly into disruption on the front lines of public health services.

One of the Largest Fines in Years

Advanced’s fine is the highest handed down by the ICO since TikTok was penalised in April 2023 and ranks among the regulator’s top six ever. It places the company alongside British Airways, Marriott, and Interserve in a growing list of high-profile data security failures.

What sets this case apart is the nature of the data compromised, i.e. health and care information linked to some of the most vulnerable people in society. It also highlights how private contractors embedded in public services now face the same scrutiny and accountability as frontline NHS bodies.

What Does This Mean For Your Business?

The clear message from the ICO, illustrated by this case, is that partial protections are not enough. If you’re handling sensitive data, especially as a supplier to critical sectors, every point of access must be secured, monitored, and updated. Incomplete MFA rollout, unpatched vulnerabilities, and weak incident response planning all count as regulatory failures.

This case also highlights how regulators are now expecting more from third-party vendors, and public sector clients are unlikely to forgive repeat offenders. For procurement teams, cyber due diligence is no longer optional. It must include not only accreditations and policies, but proof that systems are fully hardened and actively monitored.

That said, Advanced’s experience shows that cooperation can reduce fines, but it doesn’t undo the reputational and operational damage. For suppliers across healthcare, education, and government services, the priority now is clear: secure the basics or risk losing everything.

Security Stop Press : ‘Have I Been Pwned’ Mailing List Stolen in Phishing Attack

Troy Hunt (creator of ‘Have I Been Pwned’) has confirmed his blog’s mailing list was compromised after he fell for a phishing attack mimicking Mailchimp.

Hunt says that while jet-lagged in London, he received a convincing phishing email prompting him to log into a fake Mailchimp site, mailchimp-sso.com. He entered his login details and a one-time password, only realising the mistake moments later. Despite resetting his password swiftly, the attacker had already exported his mailing list from a New York IP address.
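The lure worked because mailchimp-sso.com looks plausibly official while being a completely different domain. One crude but effective defence is to check the exact domain of any login page against a short allowlist before entering credentials (password managers and passkeys effectively do this automatically, which is part of why they resist this class of phish). A minimal sketch of the check, with an illustrative allowlist:

```python
from urllib.parse import urlparse

# Exact-match allowlist of domains we trust with Mailchimp credentials.
# A lookalike such as "mailchimp-sso.com" fails this check even though
# it contains the word "mailchimp". The list itself is illustrative.
TRUSTED = {"mailchimp.com", "login.mailchimp.com"}

def is_trusted_login(url: str) -> bool:
    """True only if the URL's hostname is a trusted domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return host in TRUSTED or any(host.endswith("." + d) for d in TRUSTED)

print(is_trusted_login("https://login.mailchimp.com/"))    # True
print(is_trusted_login("https://mailchimp-sso.com/login")) # False
```

The key detail is exact matching on the hostname: substring checks like “does the URL contain mailchimp” are precisely what lookalike domains are designed to defeat.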

Around 16,000 email addresses were exposed, including over 7,500 belonging to users who had unsubscribed, a detail Hunt criticised, questioning why Mailchimp retains unsubscribed data. The stolen data also included IP addresses and rough location metadata.

Hunt admitted the phishing email was well-crafted, creating just enough urgency without sounding alarmist. “We all have moments of weakness and if the phish times just perfectly with that, well, here we are,” he wrote. Ironically, the incident happened the day after he’d been discussing passkey adoption with the UK’s National Cyber Security Centre.

He has since notified affected users and loaded the breach into Have I Been Pwned, reinforcing his long-held message about transparency and rapid disclosure in data breaches.

For businesses, this incident is a reminder that even experts are vulnerable. Clear phishing awareness training, secure password management, and adoption of phishing-resistant technologies like passkeys are now essential steps in protecting sensitive data.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in a techy-free style.