Are AI Chatbots Crossing A Dangerous Line?

A growing number of real-world cases and controlled tests are raising concerns that generative AI chatbots may, in certain conditions, contribute to harmful behaviour by reinforcing dangerous thinking and helping users turn intent into action.

What Has Been Reported?

Recent incidents across Canada, the United States and Europe have brought this issue into sharper focus. In one case in Canada, court filings indicate that a teenager who later carried out a fatal attack had previously used an AI chatbot to discuss feelings of isolation and violent thoughts, with conversations reportedly progressing towards how such an attack might be carried out.

In the United States, a separate case involved a man who developed an extended relationship with an AI chatbot, which he believed to be sentient. Legal filings suggest that these interactions escalated into instructions linked to a potential large-scale violent incident, which he prepared for before it failed to materialise.

In Europe, a teenager is reported to have used an AI chatbot over several months to help develop a manifesto and plan an attack on classmates, which was later carried out.

These cases differ in detail, but they show a consistent pattern. Conversations often begin with expressions of distress, isolation or anger. Over time, repeated interaction appears to reinforce those thoughts, sometimes progressing into more structured or actionable ideas.

Alongside these incidents, controlled research has tested how leading AI chatbots respond to prompts involving violence. In several cases, systems were able to produce guidance on weapons, tactics or targeting when prompts were reworded, layered or extended across longer conversations.

A report from the Centre for Long-Term Resilience noted that “AI systems can unintentionally provide a form of conversational scaffolding that helps users organise and refine harmful intent over time”, highlighting the risk posed by sustained interaction rather than single responses.

Companies including OpenAI and Google state that their systems are designed to refuse harmful requests and direct users towards support where appropriate. They have also acknowledged that safety systems can become less reliable during longer or more complex interactions.

How Chatbots Can Influence Behaviour

Unlike traditional online content, AI chatbots are interactive and responsive. They adapt to user input, maintain context and generate answers that feel personalised.

This creates a different type of risk. Rather than simply presenting information, chatbots can reinforce ideas through ongoing conversation. If a user expresses extreme or distorted views, the system may attempt to be helpful or empathetic. In most cases, this is appropriate. In some cases, it may unintentionally validate harmful thinking.

Over time, this interaction can shape how a user interprets their situation. A conversation that begins as general discussion can become more focused and more detailed, particularly when the system continues to respond without clear challenge or interruption.

This aligns with wider research into how AI affects human thinking. Studies into what has been described as “AI brain fry” suggest that prolonged interaction with AI systems can affect judgement, increase cognitive load and reduce the ability to critically assess information. While this research focuses on workplace use, it highlights how extended engagement can influence decision-making.

In more extreme scenarios, the combination of reinforcement and reduced critical distance may increase the risk of poor or harmful decisions.

Limits Of Current Safeguards

AI providers have introduced safeguards including refusal systems, content filters and escalation processes designed to identify high-risk conversations.

However, evidence suggests that these controls are not always consistent. In some tests, chatbots have provided restricted information when prompts are carefully framed or developed over multiple exchanges.

One reason for this is the way these systems are designed. They are built to be helpful, to continue conversations and to interpret user intent. When intent develops gradually or is presented indirectly, it can be difficult for the system to determine when to refuse or intervene.

Persistence is also a factor. Users can rephrase questions, introduce fictional scenarios or build context step by step. As conversations become longer, earlier safeguards may weaken.

OpenAI has acknowledged this limitation, noting that safety measures tend to perform more reliably in shorter exchanges and can degrade during extended interactions.

Why This Is Gaining Attention

The concern is not that AI chatbots are independently causing violent acts. The issue is that, in certain circumstances, they may reduce the friction between harmful thoughts and real-world behaviour.

This can happen through reinforcement, where ideas are echoed rather than challenged, and through translation, where vague or emotional thinking is turned into more structured plans.

The combination of speed, accessibility and detailed output means that users can move from general intent to specific action more quickly than before.

In response, AI providers are beginning to strengthen their approaches. This includes earlier escalation of concerning conversations, tighter controls on banned users returning to platforms, and closer coordination with authorities where risks are identified.

These steps suggest growing recognition that current safeguards need to evolve as the technology becomes more widely used.

What Does This Mean For Your Business?

For UK organisations, this is not just a consumer or public safety issue. Generative AI tools are already embedded in many workplaces, often with limited governance around how they are used.

One key consideration is how employees interact with these systems. AI can support research, communication and problem-solving, but it can also influence how information is interpreted, particularly during extended or complex use.

There is also a broader governance challenge. Many organisations focus on data security and accuracy when adopting AI. Behavioural influence and decision-making risk are less frequently addressed, yet they are becoming increasingly relevant.

Clear policies are an important starting point. Employees should understand when AI tools are appropriate, where human judgement is required and when outputs should be verified.

Training is equally important. As highlighted by research into AI-related cognitive strain, the way tools are used can have a direct impact on decision quality. Encouraging structured use, limiting over-reliance and maintaining critical thinking are essential.

Monitoring and escalation processes should also be considered. Organisations need to be able to identify when AI use is producing unexpected or concerning outcomes and respond accordingly.

There is also a duty of care element. As AI tools become more integrated into everyday work, organisations may need to consider how they support employees who are using these systems extensively or in sensitive contexts.

This issue reinforces a wider point. AI is not only a productivity tool. It also shapes how people think, decide and act. Businesses that recognise this and put balanced controls in place will be better placed to manage risk while still benefiting from what the technology can offer.

Google Maps Introduces ‘Ask Maps’

Google has launched a major update to Maps, introducing a new AI feature called Ask Maps alongside a redesigned 3D navigation experience powered by its Gemini models.

From Search To Conversation

For years, Google Maps has been built around search, users typed in a place or category and selected from a list of results. Ask Maps changes that model by allowing users to ask complex, real-world questions in natural language.

For example, instead of searching for a specific location, users can now ask contextual queries such as where to charge a phone without waiting, or where to find a suitable venue based on time, preferences, and availability. Google describes this as “a new conversational experience that answers complex, real-world questions a map could never answer before.”

This is part of a broader shift in how digital tools are evolving. Maps is no longer just a navigation platform, it is becoming a decision-making layer that interprets intent and delivers tailored outcomes.

How Ask Maps Works In Practice

The system combines Gemini’s AI capabilities with Google Maps’ extensive dataset, which includes information on hundreds of millions of locations and contributions from a global user community.

Ask Maps draws on this data to generate responses that are both relevant and personalised. According to Google, it is “uniquely helpful — tapping into Maps’ fresh information about the world to show you everything you need to know before you go.”

Personalisation plays a central role. The feature uses signals such as previous searches and saved places to refine results automatically. This means users may receive tailored recommendations without needing to specify preferences each time.

Once a decision is made, the system is designed to move seamlessly into action. Users can navigate, save locations, or share plans directly from the same interface, reducing the need to switch between apps or repeat searches.

Immersive Navigation Rebuilds The Driving Experience

Alongside Ask Maps, Google has introduced Immersive Navigation, a significant redesign of its core navigation experience. This replaces traditional flat maps with a dynamic 3D view that reflects real-world surroundings, including buildings, terrain, and road features.

The update also changes how directions are delivered. Instead of relying primarily on distances, Maps now uses more natural, landmark-based guidance. As Google explains, the goal is to make driving feel more intuitive, with directions that resemble how a person would guide someone in real life.

The company describes this as “our biggest transformation of the navigation experience in over a decade.” The system is supported by real-time data processing, drawing on imagery and live updates to reflect current road conditions and provide more accurate guidance.

Why Now?

This update arrives at a time of increasing competition in both mapping and AI-driven search. Apple has been expanding its own Maps capabilities, while AI-native platforms are beginning to integrate location-aware responses into their services.

For Google, Maps is not just a utility, it is a key part of its broader search and advertising ecosystem. Many local business discoveries begin within Maps, making it a critical interface for capturing user intent.

By integrating Gemini directly into Maps, Google is positioning the platform as a central point for real-world queries, rather than allowing that interaction to shift towards standalone AI tools.

At the same time, this reflects a wider trend whereby AI is increasingly being embedded into everyday products, transforming them from passive tools into active assistants that anticipate needs and guide decisions.

The Open Question Around User Behaviour

While the technology is significant, adoption is less certain. Google has introduced conversational features in other products before, and user behaviour has not always changed as quickly as expected.

There is still a question around whether people will naturally begin asking their maps complex questions, or whether they will continue to rely on familiar search habits.

However, the infrastructure is now in place. If users do adopt this behaviour, it could fundamentally change how people interact with location-based services.

What Does This Mean For Your Business?

This update signals a meaningful change in how customers may discover and choose businesses. Instead of appearing in a list of search results, businesses may increasingly be selected by AI systems interpreting user intent and context.

That has implications for visibility. Traditional local SEO, which focuses on keywords, categories, and rankings, may become less influential as AI-driven systems prioritise relevance, reputation, and contextual fit. Factors such as reviews, completeness of business profiles, and alignment with user preferences are likely to carry more weight.

There is also a change in how decisions are made. Ask Maps is designed to reduce friction by moving users from question to action in a single flow. This means fewer steps between discovery and conversion, which could benefit businesses that are well positioned within the ecosystem, but reduce opportunities for others to compete once a recommendation is made.

For organisations, this highlights the importance of maintaining accurate, detailed, and up-to-date information across platforms like Google Maps. It also reinforces the value of customer feedback and engagement, as these signals increasingly influence how AI systems rank and recommend options.

More broadly, this development reflects the growing role of AI as an intermediary between businesses and customers. Companies that understand how these systems interpret data, and adapt their digital presence accordingly, are likely to be better positioned as this model evolves.

Google Maps is no longer just helping people get from one place to another. It is beginning to shape how decisions are made along the way, and that has clear implications for how businesses are discovered, compared, and chosen.

Scam Surge Disproportionately Hits London

Londoners are being disproportionately targeted by online fraudsters, with police warning that technology is allowing scams to scale rapidly while making criminals harder to detect.

Why London Is Being Targeted

Evidence presented to the London Assembly highlights the scale of the issue. Fraud now accounts for around 41 per cent of all crime across England and Wales, and London is bearing a significant share of that impact.

At a recent (this month) London Assembly Police and Crime Committee meeting at City Hall, City of London Police indicated that around 40 per cent of fraud victims are based in the capital, with the Metropolitan Police adding that London accounts for a significant share of specific scams, including around 60 per cent of courier fraud cases.

There are several reasons for this concentration. London combines high population density, strong digital engagement, and a large volume of financial activity. This creates a large and varied pool of potential targets, from individuals to businesses.

Criminals are not targeting London randomly, but are simply prioritising it because the potential return is higher.

How Technology Is Changing Fraud

Police have made clear that the core driver behind this trend is the way technology is being used to scale fraud operations.

Oliver Little from the City of London Police told the committee: “We’ve seen an acceleration in people using technology to enable fraud – it allows [them] to target a much wider number of people, and then it’s a numbers game.”

This reflects a shift in how fraud operates. Rather than relying on highly targeted, manual scams, criminals can now reach thousands of potential victims simultaneously through text messages, emails, and social platforms.

Technology also creates distance between the criminal and the victim. As Little explained, it “puts more barriers between us and them and obfuscates who they really are.”

This makes investigation and enforcement more difficult, particularly when activity crosses multiple jurisdictions.

The Role Of AI In Modern Scams

Artificial intelligence is beginning to play a role in this evolution, although police have been careful to describe its current use accurately.

Little highlighted how familiar scams are already being enhanced: “[With] the ‘Hi Mum’ scams over text message, there’s the potential to use technology to turn that into a realistic voice, so people will be more easily manipulated.”

This type of scam typically involves a message claiming to be from a family member who has lost their phone and needs urgent financial help. AI-generated voice cloning could make these messages significantly more convincing.

At present, AI is not running fraud operations end-to-end. It is being used to improve specific stages, such as message generation, impersonation, and targeting.

The direction of travel is clear, even if full automation has not yet been reached.

Simple Scams Still Deliver Results

Despite the focus on advanced techniques, police and support organisations have stressed that many successful scams remain relatively basic.

Fraudsters are combining simple approaches with large-scale distribution. The effectiveness comes from volume rather than sophistication.

This is reinforced by the observation that criminals are increasing the “surface area” of their attacks. More messages, more channels, and more variations mean a higher chance that someone will respond.

In practical terms, even well-known scams continue to succeed because they are constantly adapted and reissued at scale.

An Ongoing Arms Race

The Police have acknowledged that tackling fraud is becoming increasingly challenging.

Little described the situation as an evolving contest, noting that it is “always shifting and changing” and reflects a wider “fraud arms race”.

The difficulty lies in the combination of speed, scale, and anonymity. Criminals can test and refine tactics quickly, while enforcement responses often take longer to implement.

There is also a growing gap between what technology enables and what the public understands. Many victims are not aware of how modern scams are constructed or delivered.

What Does This Mean For Your Business?

For UK businesses, this is not just a consumer issue. The same techniques are used to target organisations, often with higher financial stakes.

Fraud attempts are no longer occasional or targeted events. They are continuous, automated, and designed to reach as many people as possible. Every business should assume it is being targeted, whether or not incidents have been detected.

At the same time, scams are becoming far more convincing. Messages, emails, and even voices can appear realistic enough to bypass instinctive scepticism. Staff can no longer rely on spotting obvious warning signs, which means verification processes need to be clearly defined and consistently followed, particularly for payments, account changes, and sensitive requests.

Speed is also being used as a tactic. Many scams are designed to create urgency and reduce the time available for checks. Clear internal procedures that slow decisions down at critical moments can make a significant difference, even when a request appears legitimate.

Training plays a central role in reducing risk. Employees need to understand not just what scams look like, but how they work. Awareness of common tactics such as impersonation, payment diversion, and social engineering helps staff recognise situations that require extra caution.

There is also a broader operational point. Fraud is no longer a peripheral risk. It is one of the most common forms of crime affecting UK organisations, and it needs to be treated accordingly. This means building it into day-to-day processes, rather than addressing it only when something goes wrong.

The overall message from police is clear. Fraud is growing because it is scalable, adaptable, and effective. Businesses that respond with structured controls, consistent processes, and informed staff will be far better placed to reduce their exposure.

Lloyds App Glitch Exposes Customer Data

A short-lived IT fault at Lloyds Banking Group has raised serious questions about how modern banking systems handle and protect customer data.

What Happened?

Last Thursday morning, customers using apps from Lloyds Bank, Halifax and Bank of Scotland reported seeing transactions that did not belong to them. In some cases, users could view multiple accounts, including payment histories, salary details and references linked to National Insurance numbers.

The issue appeared between roughly 07:00 and 09:00 GMT and was resolved within a short period. Despite this, the nature of the error caused immediate concern among customers, many of whom initially believed their accounts had been compromised.

Lloyds Banking Group acknowledged the issue publicly, stating: “We’re sorry that some customers experienced an issue viewing transactions in the app for a short time this morning. The issue was quickly resolved and we’re looking into what happened.”

The bank has since confirmed that it has begun an internal review to understand the root cause and prevent a recurrence.

Why This Incident Is Different

Banking app outages are not uncommon. In recent years, several UK banks have experienced disruptions that prevented customers from accessing accounts or making payments, particularly around high-demand periods such as payday.

However, this incident is different. Customers were not locked out of their accounts. Instead, they were shown data belonging to other individuals.

That distinction matters. A service outage affects availability. This type of incident affects confidentiality, which carries greater regulatory and reputational risk.

Even if no accounts were directly accessed or altered, the exposure of transaction data, names and reference information represents a potential data protection issue. The Information Commissioner’s Office has confirmed it is making enquiries.

How Could This Happen?

While Lloyds has not yet disclosed the technical cause, incidents of this kind are often linked to how modern digital banking platforms manage and retrieve data.

Most large banks now operate on complex architectures made up of multiple systems working together. These include mobile apps, backend databases, authentication layers and application programming interfaces that allow systems to communicate.

When a customer logs in, the system must ensure that only the correct data is retrieved and displayed. If there is a failure in how sessions are managed or how data is matched to user accounts, it can result in information being shown incorrectly.

These types of faults are rare, but they can occur as systems become more distributed and reliant on real-time data processing.

Professor Markos Zachariadis of the University of Manchester described the incident as “unusual”, noting that increasing data complexity can increase the risk of such issues emerging.

Regulatory Response And Expectations

UK regulators have already taken an interest. For example, the Financial Conduct Authority has confirmed it is in contact with Lloyds Banking Group to understand what happened and how the situation is being resolved.

An FCA spokesperson said: “We’re in contact with Lloyds Banking Group to understand what’s happened and how it’s being resolved. We expect firms to protect customer data and be able to respond to and quickly recover from disruptions.”

The Information Commissioner’s Office has also stated it is aware of the incident and will be making enquiries.

These responses reflect two key expectations placed on financial institutions. Customer data must be protected at all times, with safeguards that prevent exposure even when systems fail. Organisations are also expected to detect issues quickly, respond effectively and restore normal service without delay.

The fact that the issue was resolved within hours may limit operational impact. It does not remove the need for scrutiny.

Why Trust Is At Stake

Retail banking depends heavily on trust. Customers expect not only that their money is safe, but that their personal and financial information is handled correctly.

Incidents like this can undermine that confidence, even when there is no evidence of malicious access or financial loss.

Several customers reported feeling alarmed after seeing unfamiliar transactions, with some believing their accounts had been hacked. This reaction highlights how quickly uncertainty can escalate when financial data appears inconsistent or exposed.

For banks, the challenge is not only to fix the issue but to demonstrate clearly how it happened and what has been done to prevent it happening again.

What Does This Mean For Your Business?

This incident is a useful reminder that data exposure risks are not limited to cyber attacks. They can also arise from internal system failures, particularly in complex digital environments.

Most organisations now rely on interconnected systems to manage customer, financial or operational data. This creates similar risks, even outside the banking sector.

One practical takeaway here is the importance of data segregation. Systems must be designed so that user data is strictly isolated and cannot be mixed, even if something goes wrong at an application level.

Another is the need for strong testing and monitoring. Issues like this often emerge under real-world conditions rather than in controlled environments. Continuous monitoring can help identify anomalies quickly before they affect large numbers of users.

Incident response also matters. Lloyds identified and resolved the issue within a short timeframe. That speed is critical, but it needs to be supported by clear communication and follow-up action.

There is also a broader point around user trust. When customers or clients see unexpected data, their first assumption is often that they have been compromised. Businesses should have clear processes for reassuring users and guiding them on what to do next.

This also highlights the importance of treating data integrity as a core business risk. It is not only an IT concern. It affects compliance, reputation and customer confidence.

As systems become more complex and data flows increase, the likelihood of this type of issue does not disappear. Organisations that build in strong controls, visibility and response processes will be better placed to manage it when it does occur.

Company Check : Adobe Agrees $150 Million Settlement Over Subscription Practices

Adobe has agreed to a $150 million settlement with US regulators over allegations that it hid key subscription terms and made cancellations unnecessarily difficult, bringing renewed attention to how subscription-based services are designed and enforced.

What The Case Was About

The case, brought by the US Department of Justice (DOJ) and the Federal Trade Commission (FTC), focused on Adobe’s “annual paid monthly” subscription plan, widely used across products such as Photoshop and Acrobat.

Regulators alleged that Adobe failed to clearly disclose early termination fees, which could reach hundreds of dollars, and placed this information in fine print or behind hyperlinks that many users would not reasonably see.

The complaint also claimed that cancelling a subscription was overly complex. Customers attempting to cancel online were reportedly required to navigate multiple pages, while those cancelling by phone faced delays, repeated explanations, and resistance.

In simple terms, the government argued that customers were not being given a clear, informed choice at sign-up, and were then discouraged from leaving.

Understanding ROSCA In Plain English

At the centre of the case is the Restore Online Shoppers’ Confidence Act (ROSCA), a US law introduced in 2010.

ROSCA requires businesses offering online subscriptions to do three basic things, which are to clearly explain all important terms before charging customers, obtain explicit informed consent before billing, and provide a straightforward way to cancel.

The law was designed to prevent so-called “dark patterns”, where companies use design techniques to push users into decisions they might not otherwise make.

In this case, regulators argued that Adobe’s processes fell short on all three counts.

What The Settlement Includes

The proposed settlement, which still requires court approval, includes both financial and operational measures.

Adobe will pay a $75 million civil penalty and provide $75 million worth of free services to affected customers. It must also introduce clearer disclosures around early termination fees, improve cancellation processes, and provide reminders before free trials convert into paid subscriptions where fees may apply.

The agreement also resolves claims against two senior Adobe executives named in the original complaint.

Regulator Response

US officials framed the case as part of a broader effort to tackle deceptive subscription practices across digital services.

“American consumers deserve the right to make informed choices when deciding where to spend their hard-earned money,” said Assistant Attorney General Brett A. Shumate, head of the Justice Department’s Civil Division. “The Justice Department will strongly oppose any attempt to harm Americans with deceptive and unfair business practices.”

“Consumers should not have to navigate a digital maze to cancel a subscription,” said U.S. Attorney Craig H. Missakian for the Northern District of California. “We will continue to hold responsible any company that uses deceptive business practices to harm the consumer.”

These statements underline a clear regulatory direction that subscription models must be built around transparency and user control.

Adobe’s Response

Adobe has denied wrongdoing while agreeing to settle the case.

In a statement, the company said it had already streamlined its subscription sign-up and cancellation processes and made them more transparent in recent years, adding that it was “pleased to resolve this matter”.

Adobe has also maintained that its subscription services are designed to be flexible and cost-effective, allowing users to choose plans that suit their needs, timeline and budget.

The decision to settle appears to be a practical step to close the case rather than an admission of liability.

Why This Matters Now

This case comes at a time when subscription-based models dominate much of the software industry.

Adobe itself generates the vast majority of its revenue from subscriptions, a model it helped to popularise. At the same time, regulators are increasing scrutiny of how these models are implemented, particularly where customer choice may be constrained.

There is also wider pressure on Adobe, including growing competition from AI-driven tools and uncertainty following the announced departure of its long-standing CEO.

The result is a situation where both regulatory and market pressures are converging on how digital services are delivered.

What Does This Mean For Your Business?

For UK businesses, even though ROSCA is a US law, the underlying expectations closely mirror UK consumer protection principles and CMA guidance around fairness and transparency.

Clear, upfront disclosure is becoming non-negotiable. Key terms such as pricing, renewal conditions, and cancellation fees need to be visible at the point of purchase, not hidden in links or lengthy terms and conditions that users are unlikely to read in full.

Cancellation processes are also under growing scrutiny. If a customer can sign up quickly online, they should be able to cancel just as easily. Any unnecessary friction, delays or forced interactions may now be viewed as a compliance risk rather than a commercial tactic.

There is also a broader design implication. Subscription journeys, user interfaces, and account settings are no longer just product decisions. They are part of regulatory compliance, with enforcement bodies increasingly examining how digital experiences influence user behaviour.

Customer expectations are changing as well. Users are more aware of their rights and less tolerant of being locked into services they no longer want, which means poor subscription design can quickly become a reputational issue.

For MSPs, SaaS providers and any business using recurring billing, this case is a clear signal. Transparent pricing, simple processes and easy exits are becoming the standard. Businesses that align with these expectations are more likely to build trust and retain customers, while those that do not risk both regulatory action and customer dissatisfaction.

Security Stop-Press : Automating Penetration Testing With AI Agents

Escape has raised $18 million to develop AI agents that automate penetration testing and help security teams keep pace with faster cyber threats.

Its platform simulates attacker behaviour in live systems, identifying vulnerabilities, proving how they can be exploited, and recommending fixes, replacing periodic manual testing with continuous coverage.

The move follows research that found over 2,000 high-impact vulnerabilities across 5,600 AI-built applications, including exposed secrets and personal data in live environments.

For businesses, the risk is clear. Occasional testing is no longer enough, and organisations should adopt continuous security monitoring and ensure vulnerabilities are identified and fixed quickly before they are exploited.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in an techy free style. 

Archives