Tesla Wins Licence To Supply Electricity In Britain

Tesla has been granted a licence to supply electricity directly to homes and businesses in Britain, marking a significant step in the company’s effort to expand from electric vehicles into a full energy provider.

Tesla Receives Approval To Supply Electricity

Tesla subsidiary Tesla Energy Ventures has reportedly (according to reports by The Wall Street Journal) received approval from the UK energy regulator Ofgem to supply electricity to domestic and commercial customers across England, Scotland and Wales.

The licence allows Tesla to sell electricity directly to households and businesses in much the same way as established suppliers such as British Gas, EDF, E.ON and Octopus Energy. Northern Ireland is not included, as it operates under a separate electricity market.

Ofgem confirmed that the application underwent a full regulatory review between July 2025 and March 2026. The regulator assessed whether Tesla could meet the financial, operational and consumer protection standards required of all electricity suppliers in Britain.

As with any licensed supplier, Tesla must now comply with the UK’s strict energy market rules covering billing transparency, customer treatment, financial resilience and dispute resolution.

A Long Term Strategy In The UK Energy Market

Although the licence approval is new, Tesla has actually been building its presence in the British electricity sector for several years.

The company first obtained an electricity generation licence in 2020, allowing it to operate energy assets connected to the national grid. Since then Tesla has deployed large grid scale battery systems across the country using its Megapack technology.

One of the most notable projects is the Pillswood battery facility near Hull, which at the time of its launch in 2022 was one of Europe’s largest battery storage systems with a capacity of 196 megawatt hours.

Tesla has also been active in energy trading through its Autobidder software platform, which uses artificial intelligence to automatically buy and sell electricity in response to market conditions.

These developments laid the groundwork for the company to move into direct electricity supply.

How Tesla’s Energy Model Works

Tesla’s entry into the UK electricity market is likely to follow a model already used in Texas through its Tesla Electric service.

The approach combines several elements of Tesla’s broader energy ecosystem. These include home solar generation, battery storage, grid scale energy storage and software driven electricity trading.

Customers with Tesla Powerwall home batteries can store electricity generated by rooftop solar panels or purchased from the grid when prices are low. The stored energy can then be used later or exported back to the grid.

When large numbers of home batteries are connected together they can form what is known as a virtual power plant. This network of distributed energy storage can help stabilise the grid during periods of high demand while also generating revenue for participants.

Tesla’s Autobidder software manages the flow of electricity between batteries, the grid and wholesale markets in real time. The system automatically adjusts when energy is bought, stored or sold.

This model allows Tesla to treat energy not simply as a commodity delivered to homes, but as a dynamic resource that can be managed through software.

Competition With Established Suppliers

Obviously, Tesla’s arrival adds a new competitor to a crowded but rapidly evolving UK energy market.

Companies such as Octopus Energy have already demonstrated how software driven platforms and flexible tariffs can disrupt traditional energy supply models. Octopus has grown rapidly by combining renewable energy sourcing with advanced pricing systems and digital customer services.

In fact, Tesla and Octopus have previously worked together in Britain through the Tesla Energy Plan, which connected Powerwall owners to Octopus electricity tariffs.

However, now that Tesla can operate as a supplier in its own right, that partnership may evolve into direct competition.

The company will also compete with large incumbent utilities including British Gas, EDF and E.ON, which together supply millions of UK households.

Public Opposition And Regulatory Scrutiny

Tesla’s application attracted some significant public criticism during the consultation process.

For example, campaign groups organised thousands of submissions to Ofgem expressing concern about Elon Musk’s political statements and online activity. Critics argued that these issues should be considered when deciding whether the company should operate in the UK energy market.

Ofgem stated that licensing decisions are based on regulatory and operational criteria rather than opinions about company leadership. The regulator concluded that Tesla’s application met the legal requirements for a supply licence.

Government officials also confirmed that Ofgem has sole responsibility for assessing such applications.

A Move Toward Software Led Energy Systems

Tesla’s move into electricity supply reflects a broader trend across global energy markets.

Electricity systems are becoming increasingly dependent on renewable energy sources such as wind and solar. These sources generate power intermittently, which creates new challenges for grid stability.

Battery storage and intelligent software systems are emerging as key tools for balancing supply and demand. Grid scale batteries can store excess energy when production is high and release it when demand rises.

Companies that combine generation, storage and software control may therefore gain a strategic advantage in the evolving energy sector.

Tesla has been positioning its energy division around precisely this combination.

What Does This Mean For Your Business?

Tesla’s entry into the UK electricity market highlights how energy supply is becoming increasingly technology driven.

Businesses may soon see new types of electricity tariffs that combine battery storage, renewable generation and software based energy optimisation. This could (hopefully) lead to more flexible pricing models and opportunities to reduce energy costs through smarter usage patterns.

Organisations with on site solar generation or battery storage may also benefit from emerging virtual power plant programmes, where surplus energy can be sold back to the grid.

The development also signals a wider transformation of the electricity sector. Traditional utilities are increasingly competing with technology companies that treat energy management as a data and software problem rather than simply a supply service.

For businesses planning long term energy strategies, the ability to integrate storage, renewable generation and intelligent energy management systems is likely to become increasingly important.

Why Some People Can Spot AI Images More Easily Than Others

New research suggests that the ability to detect AI-generated faces may depend less on intelligence or technical knowledge and more on a fundamental visual skill known as object recognition.

A Surprising Predictor Of AI Detection

As artificial intelligence tools become increasingly capable of generating realistic images, concerns about deepfakes and digital misinformation have grown rapidly. Synthetic faces created by AI systems now appear regularly across social media, advertising and online content, often looking convincingly real.

A new study from researchers at Vanderbilt University (in Nashville, Tennessee) has examined why some people are better than others at detecting these images. The findings suggest that the key factor is not intelligence, technological expertise or familiarity with AI tools, but a more basic perceptual ability.

Object Recognition

The research was led by Isabel Gauthier, professor of psychology at Vanderbilt University, together with Jason Chow and Rankin McGugin. Their study, published in the Journal of Experimental Psychology, found that individuals with stronger object recognition skills consistently performed better at identifying AI-generated faces.

Object recognition is a broad visual ability that allows people to distinguish between very similar objects quickly and accurately. In scientific research it is sometimes referred to as the “o factor”, a domain-general skill involved in recognising patterns and structures across many different visual tasks.

Testing The Ability To Detect AI Faces

To investigate how people recognise synthetic images, the researchers developed a new evaluation tool called the AI Face Test. Participants were shown a mixture of real photographs and faces generated by artificial intelligence systems and asked to determine which images were authentic.

The study then compared each participant’s performance with a range of cognitive and perceptual abilities, including intelligence, face recognition skills and familiarity with artificial intelligence technology.

The results revealed that object recognition ability is the strongest predictor of success in detecting AI-generated faces.

In contrast, factors that might seem more relevant, such as intelligence or experience with AI tools, showed little relationship with performance.

A Useful Visual Ability

As Professor Gauthier explained, “these results highlight a visual ability that has very general applications. It’s a stable trait that helps people meet new perceptual challenges, including those created by AI.”

The researchers were particularly surprised that technological experience did not appear to help participants distinguish between real and synthetic images.

“We were shocked to see how intelligence or even technology training did not help accurately judge if a face is AI,” Gauthier said.

Why Some People Are Better At Object Recognition

It seems that some people are just naturally better at this particular skill. Object recognition ability varies between individuals, but those with stronger visual processing skills are better at detecting small structural differences in images. This means that when looking at AI-generated faces, they are more likely to notice subtle inconsistencies in areas such as lighting, texture or facial proportions that others may overlook.

It’s An Underlying Perceptual Ability

In the Vanderbilt study, participants with higher object recognition scores consistently performed better at identifying AI-generated faces in the AI Face Test. Their performance also remained stable when tested again later, suggesting the skill reflects an underlying perceptual ability rather than something people quickly learn through experience with AI tools.

Looking Beyond Obvious Visual Errors

Researchers believe the advantage does not come from spotting obvious “AI mistakes”. Instead, people with stronger object recognition ability appear better at interpreting complex visual structure when the differences are subtle and the signals are noisy.

Can This Skill Be Improved?

All is not lost for those who do not naturally have this skill. There is some evidence that aspects of object recognition can be improved through training. For example, exercises that involve comparing similar objects, analysing small visual variations and practising detailed visual inspection can strengthen perceptual judgement over time.

Useful In Medical Imaging and Radiology

Research in fields such as medical imaging and radiology shows that targeted visual training can improve a person’s ability to recognise subtle visual differences. That said, people with stronger object recognition skills often perform better in visually demanding tasks, including identifying lung nodules in medical scans, recognising cancerous blood cells, reading musical notation and analysing retinal images.

A Wider Skill With Many Applications

Object recognition ability has been linked in previous research to success across a wide range of visually demanding tasks. The Vanderbilt University study takes things one step further by also challenging the widely repeated claim that AI-generated images are now impossible for humans to detect.

“There is this general message we hear in the media that AI images are so realistic that we can’t tell the difference, and I think that’s misleading,” Gauthier said.

According to the researchers, the results instead show a distribution of abilities across the population. Some people struggle to detect synthetic images, some perform moderately well and others identify them with high accuracy. Understanding these differences may become increasingly important as generative AI technologies continue to evolve.

What Does This Mean For Your Business?

For organisations concerned about misinformation, digital trust and online security, the research highlights an important point about the human side of AI detection.

Many current discussions about identifying synthetic media focus on technical solutions such as watermarking systems, detection algorithms or digital authentication tools. These technologies will likely remain important as AI-generated content becomes more widespread.

However, the new research suggests that human perception also plays a significant role. Individuals differ in their natural ability to interpret complex visual information, and this may affect how easily they recognise AI-generated imagery.

For businesses that rely on visual content, such as media organisations, marketing teams and social media platforms, understanding these differences could help shape training programmes, moderation strategies and verification processes.

As AI-generated media becomes more common across the internet, combining technical safeguards with a deeper understanding of human perception may become an increasingly important part of managing digital authenticity.

Amazon Brings AI Health Assistant To Its Website And App

Amazon has launched a new AI-powered healthcare assistant called Health AI on its website and mobile app, marking a significant step in the company’s effort to use artificial intelligence to help people understand medical information and access care more easily.

Why Amazon Is Expanding Into AI-Powered Healthcare

Amazon’s entry into AI-driven healthcare builds on several major moves the company has made in the sector over the past few years. In 2023 it acquired the primary care provider One Medical for $3.9 billion, adding a nationwide network of clinics and telehealth services to its growing health portfolio.

Alongside this, Amazon has expanded its pharmacy services and introduced digital tools designed to simplify medication management and appointment booking. Health AI now becomes a central interface within this ecosystem, allowing customers to ask health-related questions directly through the Amazon website or mobile app.

According to Amazon, the goal of Health AI is to make healthcare easier to navigate and more accessible. As the company explains on its website, the assistant is designed “to make health care easier by providing you with insights into your health, helping you understand your medical records, and seamlessly connecting you with licensed health care professionals when you need them.”

What The Amazon Health AI Assistant Actually Does

Health AI functions as a conversational assistant that can answer health questions and help users understand information about their health.

For example, users can ask questions about symptoms, medications or test results. The system can also explain medical records, provide guidance about possible next steps and help arrange professional care when needed.

In addition to answering questions, Amazon says Health AI can assist with practical tasks such as managing prescription renewals or booking appointments with healthcare providers. If a user needs medical support, the system can connect them to clinicians through Amazon One Medical.

However, Amazon is keen to point out that the tool is designed to help people better understand their health rather than replace professional medical advice.

How It Works

Health AI operates as what Amazon describes as an “agentic” AI system. This means that instead of acting only as a chatbot, the system can also take actions on behalf of the user, such as arranging appointments or managing prescriptions.

With a user’s permission, Health AI can access medical information such as diagnoses, medications and lab results through the United States Health Information Exchange. This nationwide network allows healthcare providers to share patient data securely.

Using that information, the assistant can provide more personalised responses. For example, if a user asks about a symptom, the system can consider their medical history and current medications when explaining possible causes.

When professional care is needed, the system can connect users directly to a One Medical clinician via message, video consultation or an in-person appointment.

Where?

At present, Amazon’s Health AI assistant is being rolled out only to customers in the United States. The company says availability will expand gradually across the US in the coming weeks as more users gain access through the Amazon website and mobile app.

Amazon has not yet announced when the service may become available in the UK or other international markets. Healthcare services are heavily regulated and differ significantly between countries, which means new digital health tools often launch first in the US before being adapted for other healthcare systems.

For now, the service is closely linked to Amazon One Medical and other US-based healthcare services, which makes a wider international rollout more complex.

What About Privacy And Safety?

Amazon says Health AI has been designed to meet strict privacy and security standards, reflecting the sensitive nature of the medical information the system handles.

All interactions take place within a HIPAA-compliant environment, the regulatory framework that governs the protection of medical information in the US. Conversations are encrypted and access to data is restricted to authorised personnel performing specific healthcare functions.

Amazon also says that Health AI models are trained using abstracted data patterns rather than identifiable patient information.

Warning

Despite these safeguards, privacy experts have warned that AI systems handling medical data must be monitored carefully, particularly as companies continue to improve and train their models using large volumes of user interactions. As Stanford researcher Dr Nigam Shah has noted, “AI systems in healthcare must be evaluated carefully in real-world settings because even small errors can have significant consequences for patients.”

The Rise Of AI Assistants In Healthcare

Amazon’s move reflects a wider trend in the technology sector where AI is rapidly becoming part of how healthcare services interact with patients.

For example, earlier this year OpenAI introduced a version of ChatGPT designed to answer health-related questions, while Anthropic launched its own healthcare-focused AI product.

Many technology companies believe AI assistants could help patients navigate complex healthcare systems, understand medical information and access care more quickly.

However, the expansion of these systems also raises questions about safety and reliability.

Safety Questions Surrounding Medical Chatbots

Recent research has highlighted potential risks when AI systems become involved in healthcare processes.

Security researchers at the AI safety firm Mindgard recently demonstrated that a medical chatbot used in a US telehealth pilot could be manipulated through a technique known as prompt injection. By exploiting weaknesses in the system’s internal instructions, the researchers were able to push the chatbot into generating misleading medical guidance and unsafe recommendations.

The experiment also showed that manipulated information could appear in structured medical summaries passed to clinicians as part of the consultation process.

Researchers warned that systems producing authoritative-looking medical information could influence clinical decision-making if safeguards are not robust.

Why Companies Are Still Pursuing AI Health Assistants

Despite these concerns, companies continue to invest heavily in AI tools designed to support healthcare services.

Healthcare systems in many countries are struggling with rising demand, administrative complexity and limited clinical capacity. Technology firms argue that AI assistants could help patients obtain basic guidance more quickly and reduce the burden on healthcare providers.

Amazon says Health AI is intended to support clinicians rather than replace them, helping patients understand information and navigate healthcare services more efficiently.

What Does This Mean For Your Business?

Amazon’s Health AI launch highlights how artificial intelligence is increasingly becoming a front door to complex services such as healthcare.

The move also places Amazon more directly in competition with other technology companies that are introducing healthcare-focused AI tools, including OpenAI’s health-oriented chatbot features and Anthropic’s Claude for Healthcare. As these systems improve, AI assistants could become a common way for people to seek initial medical guidance, interpret health information and navigate care services.

For businesses operating in sectors that depend on trust and accurate information, including healthcare providers, insurers, financial institutions and legal firms, the development illustrates both the opportunities and the risks of AI systems that interact directly with customers.

AI assistants may help simplify access to services and improve user experience. However, they also introduce new responsibilities around safety, transparency and oversight, particularly when systems provide advice or generate information that may influence important decisions.

As more organisations deploy AI to support customer interactions, ensuring that these systems remain reliable, secure and resistant to manipulation will become an increasingly important challenge.

ChatGPT Launches Interactive Visual Tools

OpenAI has introduced a new feature in ChatGPT that allows users to understand maths and science concepts through interactive visual explanations, turning formulas and equations into dynamic models that can be manipulated in real time.

Why?

This new capability reflects the growing role of ChatGPT as a learning tool rather than simply a conversational AI assistant. According to OpenAI, millions of people already rely on the system to help them understand academic subjects.

As the company explains, “ChatGPT has quickly become one of the most widely used tools for learning. Each week, 140 million people use ChatGPT to help them understand math and science concepts alone.”

Maths and science are areas where many learners struggle with abstract ideas and formulas that can be difficult to visualise. OpenAI says the new feature aims to make these concepts easier to understand by allowing users to interact with them directly rather than simply reading about them.

Turning Equations Into Interactive Experiments

The new feature, called dynamic visual explanations, enables ChatGPT to generate interactive visual modules when users ask questions about certain mathematical or scientific topics.

This means that, instead of receiving only a written explanation or a static diagram, users can adjust numbers and variables and immediately see how the results change. This effectively turns equations into small interactive experiments that learners can explore.

For example, someone asking about the Pythagorean theorem can adjust the lengths of the sides of a triangle and instantly see how the hypotenuse changes. A user exploring compound interest could modify the interest rate or time period and watch the growth curve update in real time.

According to OpenAI, the goal is to help people understand how relationships between variables actually work.

Examples Of What It Can Do

The interactive visuals support more than 70 maths and science topics, including concepts commonly taught at high school and college level.

Among the topics currently available are the Pythagorean theorem, Ohm’s law, Hooke’s law, exponential decay, kinetic energy, Coulomb’s law and compound interest. Geometry topics such as circle area and triangle area are also included.

When users ask ChatGPT about one of these subjects, the system can now provide both a written explanation and an interactive visual model that responds as the user adjusts variables.

OpenAI says the number of supported topics will expand over time as the system evolves.

Why Visual Learning Can Improve Understanding

The idea behind the feature is based on educational research suggesting that visual and interactive learning can improve understanding of complex subjects.

Many mathematical and scientific concepts describe relationships between variables. Seeing those relationships change dynamically can make them easier to grasp.

OpenAI explains that the goal is to move beyond simple explanations and help learners understand how ideas connect. As one educator involved in early feedback on the system noted, “What stands out is how strongly this feature emphasises conceptual understanding. When learning math, understanding why something works and how ideas connect helps concepts stick long term.”

The system has also been designed to encourage users to explore concepts further by adjusting variables and testing different scenarios.

How The Feature Fits Into ChatGPT’s Growing Education Tools

Dynamic visual explanations are actually part of a wider effort by OpenAI to develop ChatGPT as a learning platform.

In recent years, the company has introduced several features designed to support studying and exam preparation. These include Study Mode, which guides users through problems step by step, and quiz tools that allow people to create flashcards and test their knowledge.

OpenAI says the broader aim is to help people explore ideas and build deeper understanding.

As the company explains, “Helping people explore ideas, experiment with concepts, and build deeper understanding is one of the most meaningful ways we can bring the benefits of AI to people everywhere.”

The new visual tools extend that approach by allowing users to experiment directly with the mathematical relationships behind many common formulas.

AI Learning Tools Are Becoming A Competitive Battleground

OpenAI is not the only company exploring interactive learning tools powered by artificial intelligence. Other major AI developers are also introducing visual learning features. Google, for example, recently added interactive diagrams to its Gemini AI system to help explain scientific and mathematical concepts.

These developments really reflect a broader shift in how AI systems are being used in education. For example, rather than simply providing answers to questions, they are increasingly designed to act as interactive learning environments.

At the same time, the growing use of AI in education has sparked debate among teachers and policymakers. Some educators worry that students may become too dependent on AI tools, while others see them as a valuable way to support understanding of difficult subjects.

What Does This Mean For Your Business?

The introduction of interactive learning features in ChatGPT highlights how AI systems are evolving from information tools into platforms that help people understand complex ideas more effectively.

For businesses, this development may have implications for training, professional development and workplace learning. Many organisations already rely on digital tools to help employees develop technical skills in areas such as engineering, finance, data analysis and IT.

Interactive AI systems could make it easier for staff to understand complex concepts by allowing them to manipulate variables and see how formulas and scientific relationships behave in real time, rather than simply reading static explanations.

For organisations responsible for technical training, this could help accelerate learning and make difficult subjects more accessible to employees who may not have strong mathematical or scientific backgrounds.

At the same time, the growing role of AI in learning environments means businesses will need to think carefully about how these tools are used alongside traditional teaching, mentoring and professional guidance.

As AI learning tools continue to develop, they may increasingly become part of how organisations train staff in technical subjects such as mathematics, science, engineering and financial modelling.

Company Check : Microsoft Introduces Copilot Cowork For Agentic AI-Driven Work

Microsoft has introduced Copilot Cowork, a new artificial intelligence capability designed to move Copilot beyond answering questions and towards completing real tasks across Microsoft 365.

Why Microsoft Wants AI To Do More Than Just Chat

Since launching Copilot in late 2023, Microsoft has steadily expanded the role of AI inside its productivity tools, embedding AI capabilities directly into applications such as Word, Excel, Outlook and Teams.

The company now believes the next stage of workplace AI is turning responses into action. For example, rather than simply suggesting what to do next, Copilot Cowork is designed to carry out tasks across multiple Microsoft 365 applications on behalf of the user.

Microsoft said the goal is to move from answering questions to completing work. The company explained that Copilot is evolving from a tool that drafts responses into one that can help execute tasks across the digital workplace.

How Copilot Cowork Turns Requests Into Workflows

Copilot Cowork allows users to describe the outcome they want and then delegate the task to the AI system.

According to Microsoft, Cowork converts a request into a structured plan and then begins carrying out that work in the background. Users can monitor progress, pause the task or approve suggested actions before they are applied.

The company says the system “turns your request into a plan” and continues executing that plan in the background while allowing users to review each stage of progress.

This approach reflects a broader industry trend toward so called agentic AI systems, which are designed to execute tasks rather than simply generate answers.

The Role Of Work IQ In Understanding Workplace Data

A key element behind Copilot Cowork is a technology Microsoft calls Work IQ.

Work IQ acts as a context layer that connects enterprise data stored across Microsoft 365 applications. This includes emails in Outlook, files in SharePoint and OneDrive, conversations in Teams and documents created in Word, Excel and PowerPoint.

The same technology is also underpinning Microsoft’s new Microsoft 365 E7 bundle, sometimes referred to as “The Frontier Suite”, which combines Microsoft 365 E5 with Copilot and a new automation capability known as Agent 365.

Work IQ links organisational knowledge with AI tools so that Copilot can understand how projects, documents and communications relate to one another before taking action.

Why Microsoft Is Partnering With Anthropic

Another notable aspect of Copilot Cowork is Microsoft’s decision to incorporate technology from multiple AI developers.

The system includes Anthropic’s agentic AI technology and supports several AI models, including those developed by OpenAI and Anthropic. This multi model approach allows Microsoft to select different AI models depending on the task being performed.

Industry analysts say this reflects a broader direction in enterprise AI platforms, where companies combine several AI models rather than relying on a single provider.

Microsoft has already begun rolling out these capabilities as part of the latest wave of Microsoft 365 Copilot updates.

Growing Investment In Copilot

The launch of Copilot Cowork forms part of Microsoft’s wider effort to increase adoption of its AI tools across enterprise customers.

Microsoft said in its 2025 Annual Report that Copilot has reached more than 100 million monthly active users across both consumer and enterprise environments. Chief executive Satya Nadella has also said usage has grown nearly three times year on year.

However, adoption among paid enterprise users remains relatively modest. Industry reports suggest only a small proportion of Microsoft 365 users currently pay for Copilot Chat licences.

The company is therefore expanding the feature set and integrating Copilot more deeply into Microsoft 365 to encourage wider adoption across businesses.

What This Means For Your Business?

For organisations already using Microsoft 365, tools such as Copilot Cowork could change how routine tasks are managed across teams.

AI systems capable of coordinating tasks across email, documents, meetings and research may reduce the time employees spend gathering information or preparing materials for projects.

However, the effectiveness of these tools will depend heavily on how well an organisation manages its internal data and permissions.

As AI systems become capable of carrying out work tasks rather than simply generating responses, businesses may need stronger governance over data access, workflow controls and AI usage policies to ensure these systems operate securely and responsibly.

Security Stop-Press : Google Tool Removes Personal Data From Search Results

Google has updated its “Results about you” tool to help people find and remove personal information such as home addresses, phone numbers and email addresses from Google Search results.

The feature allows users to monitor whether their details appear in search results and request that links containing that information be removed from Google’s index. The move comes as the trade in personal data continues to grow, with Sophos X-Ops reporting a 1,253 per cent rise in dark web sales of personal data over the past five years.

Security experts say the tool can help limit the visibility of personal data, although it does not remove the information from the original website where it was published.

For businesses and organisations, regularly checking what information appears online and removing unnecessary personal data from public websites can help reduce the risk of identity theft, fraud and targeted harassment.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in an techy free style. 

Archives