Medical Chatbot Hacked Into Giving Dangerous Advice
Security researchers have demonstrated that a healthcare AI chatbot used in a US medical pilot can be manipulated into producing dangerous advice and misleading clinical notes, raising new questions about how safely AI can operate inside real healthcare systems.
What Happened?
Doctronic is a US telehealth platform built around an AI medical assistant (a medical chatbot) designed to help patients understand symptoms, manage conditions and connect with licensed doctors. The system is intended to act as a first point of contact in a digital care pathway, gathering patient information, offering guidance and preparing summaries for clinicians.
The idea of Doctronic is that patients can consult the AI about symptoms, medications or health concerns, and the system prepares structured information that helps doctors review cases more quickly.
Can Be Manipulated
However, the platform has recently attracted attention after being examined by Mindgard, an AI security company that specialises in testing the safety of AI systems.
In its research, Mindgard showed that the chatbot could be manipulated into spreading vaccine conspiracy theories, recommending methamphetamine as a treatment for social withdrawal, generating altered clinical guidance and even advising users how to cook methamphetamine.
According to the researchers, the issue stems from weaknesses in the chatbot’s internal instructions. As Mindgard explained: “System prompts are the ‘keys to the kingdom’ when it comes to chatbots.”
The issue is particularly sensitive because Doctronic is currently being used in a pilot programme in the US state of Utah. The project operates within a regulatory “sandbox”, which allows new technologies to be tested under controlled conditions. As part of the trial, the system can assist with managing patient queries and renewing certain existing prescriptions before cases are reviewed by a human clinician.
Why The Exploit Matters
The issue is more serious than a typical chatbot error or AI hallucination because Doctronic sits inside a healthcare workflow. The system generates structured medical summaries and guidance that clinicians may review as part of patient care. If that output is manipulated or incorrect, it could appear credible enough to influence how a case is interpreted.
The researchers warned that this creates a new type of risk. As they put it, “the most dangerous advice can come from the most well-intended of chatbots.”
How The Prompt Injection Works
According to Mindgard, the weakness it discovered involved a type of attack known as prompt injection.
Large language models (LLMs) operate based on internal instructions known as system prompts. These hidden instructions guide how the AI behaves, what rules it follows and what information it should refuse to provide.
Mindgard said it was able to trick the chatbot into revealing those internal instructions by manipulating how the conversation was framed. By convincing the system that the session had not yet begun, the researchers prompted it to recite its own internal instructions.
Once those instructions were exposed, the chatbot became easier to influence. The researchers then introduced fabricated regulatory bulletins and policy updates, which the system treated as legitimate information.
This allowed them to push the AI towards unsafe advice, including altered medication guidelines and fabricated medical guidance.
Why SOAP Note Persistence Raises The Stakes
The most concerning aspect of the experiment involved clinical documentation.
When users request a consultation with a human clinician, the system generates a structured medical summary known as a SOAP note. These documents summarise the patient’s situation and provide context before the appointment begins.
Mindgard found that manipulated information introduced during a compromised session could appear in these summaries and be passed on to clinicians.
In its report, the company warned that this could “actively undermine the human professionals who might trust its authoritative-looking output.”
While the document itself is not a prescription, it becomes part of the clinical context surrounding the patient. In busy healthcare environments, that context can influence how clinicians interpret a case.
In other words, manipulated AI output could enter a legitimate medical workflow.
What Utah Says About The Limits Of The Pilot
Officials involved in the Utah pilot have, however, been keen to point out that the programme includes safeguards.
The trial is limited to renewing certain existing medications and does not allow prescriptions for controlled substances. Additional checks are also applied before any prescription renewal is approved.
Doctronic has said it has reviewed the research findings and continues to strengthen its safeguards against adversarial prompts and manipulation attempts.
Those limitations reduce the immediate risk in this particular pilot. However, the research highlights the types of challenges developers may face as AI systems move deeper into healthcare processes.
The Wider Evidence On Medical Chatbot Risk
This incident also aligns with concerns raised by other recent academic research.
A major study led by the University of Oxford earlier this year examined how people interact with AI systems when seeking medical advice. The study compared people using AI chatbots with those using traditional sources of information.
Researchers found that participants using AI tools were no better at identifying appropriate courses of action than those relying on other methods such as online searches. In some cases, users struggled to interpret the mixture of correct and incorrect advice produced by the models.
The study concluded that strong performance on medical knowledge tests does not necessarily translate into safe real-world interactions with patients.
Crucially, the researchers argued that systems intended for healthcare use must be evaluated in real-world conditions with human users before being widely deployed.
What Does This Mean For Your Business?
For healthcare providers and regulators, the findings reinforce a familiar lesson from other safety-critical industries. Introducing AI into a workflow does not simply add automation. It changes how information flows and how people trust that information.
Healthcare systems already rely on structured documentation and clinical summaries. If AI systems begin generating those summaries, their reliability becomes a core safety question rather than a technical curiosity.
For organisations developing AI tools in high-trust environments such as healthcare, finance or legal services, the message is that technical accuracy alone is not enough. Systems must also be resilient to manipulation, misuse and subtle changes in context.
The Doctronic case illustrates that prompt security, audit trails and robust human oversight are not optional features but fundamental safeguards when AI systems begin influencing decisions that affect real people.
Although AI may eventually become a valuable support tool in healthcare, the evidence emerging so far suggests that the journey from promising technology to safe clinical practice is likely to be longer and more complex than first thought.
Why OpenAI Has Agreed To Deploy AI Inside Pentagon Systems
OpenAI has reached an agreement with the Pentagon to deploy its AI models inside classified US government systems, highlighting how rapidly artificial intelligence is becoming part of national security infrastructure.
Department of Defense?
Before getting any further into this new story, it should be noted that, in its public statements, OpenAI refers to the US defence department as the “Department of War”, a name used by the current administration. However, many Americans continue to use the name the Department of Defense, noting that the department is formally established in US law under that title and that any official renaming would require approval from Congress.
What The System Is
Getting back to the main story here, OpenAI is the US technology company behind widely used AI models such as ChatGPT and GPT-4-class systems. These models are designed to analyse text, generate reports, summarise information and assist with complex analytical tasks.
Organisations already use similar AI systems for activities such as software development, intelligence analysis, planning and research. Governments have increasingly been exploring how these capabilities could support national security and defence operations.
What Happened With OpenAI and the Pentagon?
OpenAI confirmed in a recent announcement on its website that it has reached an agreement allowing its AI models to be deployed in classified government environments operated by the Pentagon.
According to OpenAI, these systems will operate inside secure networks used for sensitive national security work, and the deployment will run through a cloud-based architecture rather than installing the models directly on military hardware.
The company says this approach allows the government to use advanced AI capabilities while OpenAI retains control of its safety systems.
In its online announcement, OpenAI said collaboration between governments and AI developers will increasingly be required as the technology becomes more powerful. As the company wrote: “We believe strongly in democracy. Given the importance of this technology, we believe that the only good path forward requires deep collaboration between AI efforts and the democratic process.”
Why OpenAI Says The Deal Is Necessary
OpenAI argues that modern defence organisations are likely to require increasingly capable AI systems.
Military planners already use AI in areas such as intelligence analysis, operational modelling, logistics planning and cyber defence because these systems can process large volumes of information and identify patterns that may be difficult for human analysts to detect quickly.
OpenAI said it believes providing these capabilities with clear safeguards is preferable to governments relying on less controlled deployments.
As the company explained: “We think the US military absolutely needs strong AI models to support their mission especially in the face of growing threats from potential adversaries who are increasingly integrating AI technologies into their systems.”
How The Deployment Will Work
OpenAI says its models will not be embedded directly into weapons systems or military hardware.
Instead, the deployment will operate through cloud-based APIs managed by OpenAI. This architecture allows the company to maintain its safety controls, monitoring systems and model updates.
OpenAI also says cleared company personnel will remain involved in the deployment.
As the company stated: “We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections.”
Areas Where It Cannot Be Used
Crucially, for many people, the agreement also establishes three areas where OpenAI technology cannot be used. These include mass domestic surveillance, directing autonomous weapons systems and making high-stakes automated decisions.
Why The Deal Attracted Attention
The agreement emerged during a period of tension between the Pentagon and parts of the AI industry.
Anthropic, one of OpenAI’s main competitors, had been negotiating with the Pentagon but refused to remove safeguards limiting the use of its models for domestic surveillance or fully autonomous weapons systems.
After negotiations broke down, the Pentagon designated Anthropic a “supply chain risk”, preventing US defence contractors from continuing to use its technology.
OpenAI’s agreement followed shortly afterwards, prompting debate about how AI companies should engage with military organisations.
The Wider Context Of Military AI
The OpenAI agreement is part of a broader expansion of AI inside US defence systems.
Elon Musk’s AI company xAI has also reached an agreement allowing its Grok model to be used in classified military networks. As US news website Axios reported: “Elon Musk’s artificial intelligence company xAI has signed an agreement to allow the military to use its model, Grok, in classified systems.”
Axios also highlighted the strategic significance of the move, noting that “up to now, Anthropic’s Claude has been the only model available in the systems on which the military’s most sensitive intelligence work, weapons development and battlefield operations take place.”
These developments seem to suggest that the Pentagon is actively expanding its access to multiple frontier AI systems.
What Does This Mean For Your Business?
For technology companies and organisations using AI systems, the agreement shows how advanced AI models are increasingly being treated as strategic infrastructure rather than purely commercial tools.
Governments are beginning to integrate AI capabilities into systems used for intelligence analysis, defence planning and national security operations. That process is likely to deepen as AI systems become more capable.
For AI developers, this creates a growing responsibility around governance, safeguards and oversight. Decisions about how models are deployed now involve legal, ethical and political considerations as well as technical ones.
For businesses more broadly, the story highlights a wider trend. As AI systems become more powerful and widely adopted, questions about acceptable use, risk management and operational oversight are moving from theoretical discussions into real-world policy decisions.
In short, the OpenAI agreement seems to show that the future of advanced AI will be shaped not only by technological innovation but also by how governments, companies and regulators decide these systems should be used.
Why Meta Will Allow Rival AI Chatbots On WhatsApp In Europe
Meta has agreed to allow rival AI chatbots to operate on WhatsApp in Europe for the next 12 months, but providers will have to pay a per-message fee to access the platform.
Regulatory Pressure
The decision follows regulatory pressure from the European Commission, which has been investigating whether Meta unlawfully restricted competition by blocking third-party AI assistants from WhatsApp.
What Triggered The Dispute
The dispute began in October 2025 when Meta updated the terms governing its WhatsApp Business platform. The change effectively prevented developers of general-purpose AI assistants from offering their chatbots through the WhatsApp Business API.
As a result, from 15 January 2026, the only AI assistant available directly on WhatsApp was Meta’s own product, Meta AI. Rival services such as ChatGPT, Claude and other conversational AI systems could not be integrated through the platform in the same way.
Several AI companies complained to regulators that the move disrupted their ability to reach users and could limit competition in the fast-growing AI assistant market.
Regulators Step In
The European Commission responded by opening a formal investigation into the policy. In February it issued a Statement of Objections outlining its preliminary view that the restriction could breach EU competition rules.
The Commission said Meta is likely to hold a dominant position in the European market for consumer communication apps through WhatsApp. Blocking rival AI assistants from accessing the platform could therefore limit competition in a rapidly developing technology sector.
Regulators also warned that WhatsApp acts as an important gateway for companies trying to reach consumers with digital services. Excluding third-party AI assistants could create barriers for smaller competitors seeking to enter or expand in the market.
In light of those concerns, the Commission said it was considering imposing interim measures to prevent serious and irreparable harm to competition while the investigation continues
Meta Changes Course
Shortly after the Commission signalled it might intervene, Meta announced a policy change.
The company said it will allow general-purpose AI chatbot providers to operate on WhatsApp through the Business API in Europe for the next 12 months. Meta said the move should remove the need for immediate regulatory intervention while the investigation runs its course.
In a statement, Meta said: “For the next 12 months, we’ll support general-purpose AI chatbots using the WhatsApp Business API in Europe in response to the European Commission’s regulatory process.”
The company added that the move should allow regulators time to complete their investigation without disruption, saying: “We believe that this removes the need for any immediate intervention as it gives the European Commission the time it needs to conclude its investigation.”
However, the concession comes with an important condition.
AI providers will be charged for each message their chatbot sends through the WhatsApp Business platform.
How The Pricing Works
Meta has introduced a new pricing category specifically for developers of AI assistants.
Under the policy, third-party AI providers must pay a fee for every non-template message sent to users through WhatsApp. A non-template message is a standard conversational reply rather than a pre-written automated notification.
According to Meta’s developer documentation, the price will vary depending on the country but typically ranges from about €0.0490 to €0.1323 per message.
That structure could become expensive for AI providers because chatbot interactions often involve multiple messages in a single conversation. A typical exchange with an AI assistant may include dozens of prompts and replies.
Meta has clarified that the new pricing model only applies to developers offering general-purpose AI assistants through the platform.
Businesses using WhatsApp for customer service automation, such as retailers deploying support chatbots, will not be affected by the change.
Where The Policy Applies
The charging model applies in markets where Meta is legally required to permit AI assistants to operate on the WhatsApp platform.
The system is already in effect in Italy and is being extended across a wide range of European countries including France, Germany, Spain, Ireland, the Netherlands, Sweden and Portugal.
Meta says these changes are designed to comply with regulatory requirements while ensuring the company can manage the technical demands created by AI chatbots operating on messaging infrastructure.
The Wider Competition Debate
The dispute reflects a growing global debate about competition in the artificial intelligence market.
Messaging platforms such as WhatsApp represent one of the most direct ways for AI assistants to reach large numbers of users. With more than two billion users worldwide, WhatsApp is a particularly valuable distribution channel.
Regulators are increasingly concerned that technology platforms that control these channels could favour their own AI services over those developed by competitors.
Meta has previously argued that the AI market remains highly competitive and that consumers can access AI assistants through many other routes, including dedicated apps, websites and operating systems.
What Does This Mean For Your Business?
For businesses developing or deploying AI assistants, the decision highlights how platform access and regulation are becoming key factors in the AI economy.
Messaging apps such as WhatsApp provide direct access to very large audiences and are increasingly seen as a natural place for AI assistants to interact with users. However, access to those platforms may depend on pricing models, regulatory decisions and the policies of the platform owner.
The introduction of per-message charges for AI providers also shows how quickly the economics of AI services can change when they rely on third-party infrastructure. Organisations planning to deliver AI services through messaging platforms will need to consider not only development costs but also ongoing usage fees and platform dependency risks.
Businesses building AI services should therefore monitor platform rules, competition investigations and integration costs carefully. As AI assistants become more embedded in messaging environments, the ability to access those ecosystems on fair and predictable terms may prove just as important as the technology itself.
Microsoft Tests Copilot Update That Opens Web Links Inside The App
Microsoft is testing a new Copilot feature in Windows that opens web links directly inside the Copilot app rather than launching the user’s browser, allowing the assistant to display web content alongside AI conversations.
A New Way To Browse With Copilot
The change is part of an update to the Copilot app for Windows that is currently rolling out to users in the Windows Insider programme.
Under the update, when a user clicks a web link during a Copilot conversation, the page opens in a side pane next to the chat window instead of launching a separate browser window. The aim is to allow users to view web content while continuing their conversation with the AI assistant without losing context.
Microsoft said the feature is designed to make it easier to move between information sources and AI assistance during everyday tasks. In a blog post announcing the change, the company explained that when a link is opened, “Copilot opens the content in a side pane next to your conversation instead of a separate browser window, so you don’t lose context.”
Context Across Multiple Tabs
The feature also allows Copilot to work across several web pages opened during a conversation.
With user permission, the assistant can access the context of the tabs opened within that session. This allows Copilot to summarise information across pages, answer questions about multiple sources and help draft text based on what the user is reading.
Microsoft said this capability is intended to support tasks such as research, writing and document preparation, where users often need to combine information from several web pages.
Tabs opened during a Copilot conversation are saved alongside the chat history, allowing users to return to them later when reopening that conversation.
Microsoft explained that the feature allows users to “ask clarifying questions, summarise information across tabs, or ask Copilot’s help in drafting exactly the right words needed for the task.”
Optional Synchronisation Features
The update also introduces optional synchronisation features designed to make the Copilot interface behave more like a browsing environment.
If users choose to enable it, passwords and form data can be synchronised so that websites accessed through the Copilot side pane work more smoothly during tasks such as logging in or completing forms.
Microsoft says this functionality is optional and requires user permission. However, the possibility of synchronising sensitive information inside the Copilot interface may raise questions for some users following recent debates around AI assistants and personal data handling.
How The Technology Works
Technically, the browsing capability appears to rely on Microsoft’s WebView2 framework, which allows developers to embed a Chromium-based browser engine directly inside Windows applications.
This approach enables the Copilot app to display full web pages without launching a separate browser program.
Embedding browsing functionality directly inside an AI assistant also allows Copilot to analyse the information displayed on those pages and respond to questions about it within the same interface.
From Microsoft’s perspective, this integration helps turn Copilot into a more complete productivity environment where web research, reading and writing tasks can all happen in one place.
Concerns From Browser Vendors
The update has also raised questions among some browser vendors and technology observers.
Traditionally, clicking a web link in Windows opens the user’s default browser, along with their preferred settings, extensions and security configurations.
Opening links directly inside the Copilot app could bypass that behaviour by keeping users inside Microsoft’s own application environment. Critics argue that such changes could potentially affect competition among browser providers, although the feature is still in preview and may evolve before a full release.
At the moment, Microsoft has not provided detailed clarification about how the feature will interact with users’ default browser settings once the update becomes widely available.
Insider Testing Phase
The feature is currently limited to Windows Insider builds and is being rolled out gradually across Insider channels.
According to Microsoft, the update is part of a broader effort to improve the Copilot app by making it faster, more reliable and more closely aligned with the latest Copilot features available on the web.
The update also brings some capabilities from Copilot.com into the Windows app, including features such as Podcasts and Study and Learn mode, while other elements may be temporarily removed while the company refines the experience.
As with many Insider previews, the company says the design may change before the updated Copilot app becomes generally available to all Windows users.
What Does This Mean For Your Business?
For organisations using Windows and AI tools such as Copilot, the update highlights how rapidly AI assistants are evolving from simple chat interfaces into integrated productivity environments.
Embedding web browsing directly inside an AI assistant could streamline tasks such as research, writing, analysis and document preparation, particularly when employees need to combine information from multiple online sources.
However, it also introduces new questions about browser behaviour, data access and security policies. Businesses may need to review how AI tools interact with web content, authentication systems and sensitive information, especially if features such as password synchronisation are enabled.
As AI assistants become more tightly integrated with everyday computing environments, organisations will increasingly need to balance productivity benefits with governance, security and compliance considerations.
Company Check : Why Meta Is Being Sued Over AI Smart Glasses
Meta is facing a class action lawsuit in the United States over allegations that its AI-powered smart glasses collected and reviewed sensitive footage in ways users did not reasonably expect, raising new questions about privacy, transparency and the human labour behind modern AI systems.
Meta’s Ray-Ban Smart Glasses
The product at the centre of the controversy is Meta’s Ray-Ban smart glasses, developed in partnership with eyewear manufacturer EssilorLuxottica.
The glasses look similar to ordinary frames but include built-in cameras, microphones and an AI assistant that can take photos, record video, answer questions and analyse what the wearer is looking at. Users activate the assistant with a voice command such as “Hey Meta”.
The system works by sending captured data such as images, voice queries and video to Meta’s cloud infrastructure, where AI models interpret the information and generate responses.
These smart glasses are part of a growing category of wearable AI products designed to act as hands-free digital assistants integrated into everyday life.
What (Allegedly) Happened?
The legal case was filed in the US by two consumers who claim Meta misled customers about how the glasses handle personal data.
The lawsuit argues that Meta marketed the devices using statements such as “designed for privacy, controlled by you” and “built for your privacy”. According to the complaint, these claims gave users the impression that recordings captured through the glasses would remain private.
However, the case alleges that footage collected through the devices could be reviewed by human contractors involved in training Meta’s AI systems.
The complaint also names Luxottica of America, Meta’s manufacturing partner, and claims the companies violated consumer protection laws through misleading marketing.
The Investigation That Triggered The Case
The lawsuit follows an investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten.
Journalists interviewed workers at a Nairobi-based outsourcing company contracted to review data captured through the glasses. These workers act as data annotators, labelling images, video and transcripts so that AI systems can better understand real-world environments.
According to the investigation, the review queue sometimes included extremely private material captured by the glasses.
Workers said they encountered footage showing people undressing, using the toilet or engaging in intimate moments, alongside everyday scenes from homes and workplaces.
One worker described the scale of the material by saying: “We see everything – from living rooms to naked bodies.”
The Role Of Human Review
Human review is a common part of how AI systems are trained and improved.
When users interact with an AI assistant, some of those interactions may be reviewed by humans to check that the system is producing accurate results and responding appropriately.
Meta’s own AI terms state: “In some cases Meta will review your interactions with AIs… and this review may be automated or manual (human).”
According to the company, this process helps improve how the glasses interpret images, recognise objects and answer questions about the environment.
However, critics argue that users may not fully realise that recordings captured through wearable devices could enter a review pipeline involving human contractors.
Why Regulators Are Now Involved
The revelations have drawn the attention of regulators here in the UK.
The UK’s Information Commissioner’s Office confirmed it is contacting Meta after the claims emerged. The regulator described the allegations as “concerning” and said organisations developing products that process personal data must clearly explain how that data is used.
A spokesperson said devices that collect personal data should “put users in control and provide appropriate transparency”, particularly where the data may be used to train artificial intelligence systems.
The issue also raises questions about international data transfers. The workers reviewing the footage were employed by a subcontractor in Kenya, meaning data could potentially be processed outside the jurisdictions where the glasses are sold.
What Meta Says
Meta says that media captured by the glasses normally stays on the user’s device unless it is shared with Meta services.
The company also says it uses filtering techniques, including face blurring, to reduce the risk of identifying individuals in reviewed material.
In a statement, the company said contractors may sometimes review content shared with Meta AI in order to improve the experience provided by the glasses.
Meta has also pointed to its privacy policies and terms of service, which describe the possibility of automated or human review of interactions with its AI systems.
Why The Issue Matters
The controversy highlights a broader challenge facing many AI-powered products.
While these systems are marketed as automated technology, they often depend on large networks of human workers who label and review data in order to train AI models.
This hidden workforce is essential to machine learning systems, yet the role they play is often invisible to consumers.
The Meta case also raises questions about how transparent companies should be when marketing devices that capture audio, video and environmental data throughout daily life.
As wearable AI becomes more common, the line between personal devices and surveillance technology may become harder to define.
What Does This Mean For Your Business?
For organisations adopting AI-enabled devices or platforms, the case highlights the importance of understanding how data is collected, processed and reviewed behind the scenes.
AI tools frequently rely on human review processes, particularly during training and quality assurance. Businesses deploying such technologies must consider whether users, customers or employees fully understand how their data might be used.
The case also demonstrates how privacy expectations can quickly become legal disputes when marketing claims appear to conflict with how systems actually operate.
For technology companies, the issue reinforces the need for clear communication about data practices. For organisations adopting AI tools, it underlines the importance of governance, transparency and careful evaluation of how AI systems handle sensitive information.
Security Stop-Press : Data Brokers Selling AI Chat Transcripts
Researchers warn that private conversations with AI chatbots may be ending up in commercial databases sold by data brokers.
It’s been reported that AI visibility researcher Lee S Dryburgh found that some browser extensions marketed as free VPNs or ad blockers can intercept traffic to services such as ChatGPT, Gemini, Claude and DeepSeek, capturing both prompts and responses before they reach the chatbot provider.
Dryburgh’s analysis reportedly uncovered about 490 prompts from more than 435 users across sensitive topics including medical issues, financial problems and immigration questions, with some conversations containing identifiable personal details.
For businesses, the lesson is to void entering confidential information into public AI chat tools and restrict untrusted browser extensions, which can capture sensitive data before it even reaches the AI service.