Government Offers Free AI Training for All UK Adults
UK adults are being offered free, government-benchmarked AI training for work as part of a national programme to upskill 10 million people by 2030 and address low confidence in artificial intelligence, and its uneven adoption, across the economy.
UK Government Expands Free AI Training Programme
The UK government has announced a major expansion of its national AI skills programme, making free AI training available to every adult in the country through the AI Skills Boost initiative. Led by the Department for Science, Innovation and Technology in partnership with Skills England, the programme is being positioned as a response to growing concerns about workforce readiness as artificial intelligence becomes more widely embedded across workplaces.
10 Million People By 2030
The expansion builds on a commitment made in June 2025, when government and industry partners first set out plans to train 7.5 million workers in AI-related skills. The latest announcement increases that ambition to 10 million people by the end of the decade, equivalent to nearly a third of the UK workforce, and frames the initiative as the largest targeted training programme since the creation of the Open University.
Who Can Access The Training And How?
The training is open to all UK adults and is delivered online through the government’s AI Skills Hub, a free platform where users can create a learning profile and follow a structured learning journey. No prior technical knowledge is required, and the courses are designed to be accessible alongside existing work or caring commitments.
Courses vary in length, with some taking under 20 minutes to complete, while others run for several hours. Participation is voluntary, and learners can choose which courses to take based on their role, interests or level of confidence with digital tools. The government has said that NHS staff and local government employees will be among the first groups actively encouraged to take part, supported by their employers and representative bodies.
What Do The Courses Teach?
The focus of the training is on practical workplace use rather than technical development of AI systems. Courses concentrate on helping workers use commonly available AI tools safely and effectively as part of everyday tasks.
This includes learning how to write and refine prompts for generative AI tools, use AI to draft text and create content, automate routine administrative processes, and interpret simple AI dashboards to identify trends. The training also covers responsible use, including understanding the risks, limitations and potential consequences of using AI at work.
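As a rough illustration of the prompt-writing skills such courses aim to teach, the sketch below wraps a vague request in a role, some context and a list of constraints. This structure is a common prompting convention, not an official course template, and every name and detail in it is hypothetical.

```python
# Illustrative only: a simple "prompt refinement" helper of the kind an
# AI-foundations course might teach. The role/context/constraints layout
# is a widely used prompting pattern, not an official template.

def refine_prompt(rough_request: str, role: str, context: str,
                  constraints: list[str]) -> str:
    """Turn a vague request into a structured prompt for a generative AI tool."""
    lines = [
        f"You are {role}.",
        f"Context: {context}",
        f"Task: {rough_request}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = refine_prompt(
    rough_request="Draft a short update email about the office move.",
    role="an experienced internal communications assistant",
    context="A UK SME relocating its head office next month.",
    constraints=["Keep it under 150 words",
                 "Use a friendly, plain-English tone"],
)
print(prompt)
```

The point the courses make is that a structured request like this tends to produce more usable drafts than a one-line instruction, whichever AI tool is being used.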
All approved courses have been assessed against Skills England’s AI foundation skills for work benchmark, which sets out a nationally defined baseline for AI literacy in the workplace. Anyone who completes a course that meets the benchmark receives a government-backed virtual AI foundations badge, which can be used on CVs and professional profiles to demonstrate recognised skills.
Why The Government Is Prioritising AI Skills
The expansion of AI training reflects evidence that AI adoption in the UK remains uneven and that confidence among workers is low. For example, research published alongside the announcement found that only 21 per cent of UK workers currently feel confident using AI in their jobs. Business adoption data suggests that as of mid-2025 only around one in six UK businesses were using AI at all, with much lower uptake among small and micro businesses.
Government analysis suggests that improving adoption and confidence could deliver significant productivity gains. Ministers estimate that wider use of AI could unlock up to £140 billion in additional annual economic output by reducing time spent on routine tasks and enabling workers to focus on higher value activity.
Technology Secretary Liz Kendall highlighted how the training is intended to ensure the benefits of AI are widely shared, saying, “We want AI to work for Britain, and that means ensuring Britons can work with AI,” adding that, “Change is inevitable, but the consequences of change are not. We will protect people from the risks of AI while ensuring everyone can share in its benefits.”
The Role Of Industry And Public Sector Partners
Delivery of the programme relies on a large partnership between government, industry and public sector organisations. Founding partners including Accenture, Amazon, Google, IBM, Microsoft, Salesforce, Sage and SAS have been joined by a wider group that now includes the NHS, British Chambers of Commerce, Federation of Small Businesses, Institute of Directors, Local Government Association, Cisco, Cognizant, Multiverse, Pax8 and techUK.
Industry partners are responsible for developing many of the courses hosted on the AI Skills Hub, while representative organisations are expected to promote the training to their members and workforces. The involvement of the NHS, the UK’s largest employer, is intended to support large scale uptake in the public sector and reinforce the relevance of AI skills beyond technology focused roles.
Phil Smith, Chair of Skills England, has said the benchmark was designed to provide clarity for both learners and employers about what AI skills are needed for work. He said the digital badges awarded on completion would provide clear recognition of learning and help set consistent standards for AI upskilling across the economy.
Funding And Wider Skills Measures
The training offer forms part of a broader package of measures aimed at preparing the UK workforce for AI-driven change. For example, the government has announced £27 million in funding for a new TechLocal scheme, part of the wider £187 million TechFirst programme, which will support local employers and education providers to develop AI-related jobs, professional practice courses, graduate traineeships and work experience opportunities.
Alongside this, the government has launched applications for the Spärck AI Scholarship, which will fund up to 100 master’s students in AI and STEM subjects at nine UK universities. The scholarships will cover tuition and living costs while providing access to industry placements and mentoring.
A new AI and the Future of Work Unit has also been established to monitor the economic and labour market impact of AI. Supported by an expert panel drawn from business, academia and trade unions, the unit is intended to provide evidence-based advice on when policy interventions may be needed to support workers and communities as roles and skills evolve.
The Implications For Employers And Businesses
For employers, particularly small and medium-sized enterprises, the programme offers a low-cost route to building basic AI capability across teams. Business groups including the Federation of Small Businesses and the British Chambers of Commerce have welcomed the initiative, citing uncertainty among employers about what AI skills staff need and how to support responsible adoption.
Large employers involved in the programme have pointed to their own experience of rolling out AI tools internally, noting that productivity gains depend heavily on shared understanding and confidence rather than access to technology alone. The government argues that a nationally recognised benchmark will help employers set clearer expectations and reduce the risk of misuse or unrealistic assumptions about AI.
Criticisms And Questions
Despite broad support, the initiative has attracted criticism from some policy groups and professional bodies. For example, the Institute for Public Policy Research has warned that short, tool-focused courses risk oversimplifying what it means to be prepared for AI-enabled work. Critics argue that effective adaptation also requires judgement, critical thinking, leadership and organisational change, which cannot be delivered through brief online modules alone.
There are also questions about how impact will be measured over time. For example, while the government has committed to reaching 10 million workers by 2030, it has not yet set out detailed plans for tracking completion rates, long-term skills retention or productivity outcomes across different sectors. Concerns have also been raised about the mix of free and subsidised courses on the AI Skills Hub and whether this could cause confusion about access.
The government has said the AI Skills Boost programme will continue to evolve, with new courses, partners and benchmarks added as workplace use of AI develops and expectations around skills mature.
What Does This Mean For Your Business?
The expansion of free AI training marks a clear attempt by government to address one of the most persistent barriers to AI adoption in the UK, which is a lack of confidence and shared understanding rather than access to technology itself. By setting a national benchmark and backing it with widely accessible courses, the programme establishes a common baseline for what it means to use AI responsibly at work, something many employers and workers have so far lacked.
For UK businesses, particularly small and medium-sized firms, the initiative could lower the practical and financial threshold for experimenting with AI tools in everyday operations. A clearer definition of core skills may help employers move beyond uncertainty and begin integrating AI in measured, realistic ways, while also supporting better internal governance and expectations around use. Larger organisations and public sector bodies may benefit from a more consistent skills foundation across teams, reducing fragmentation and uneven uptake.
For workers, the availability of short, recognised courses offers a route to building confidence without committing to formal retraining or specialist qualifications. The emphasis on practical use, risk awareness and responsible adoption reflects an acknowledgement that AI will increasingly sit alongside existing roles rather than replace them outright in the near term.
At a national level, the programme aligns skills policy more closely with the government’s wider ambitions on productivity, economic growth and technological adoption. Whether it delivers lasting impact will depend on uptake, the quality of training, and how effectively it connects to broader workforce development and organisational change. The creation of the AI and the Future of Work Unit suggests an awareness that skills alone will not resolve all challenges, but it also places responsibility on government, employers and industry partners to ensure the transition is managed in a way that supports workers and delivers tangible economic benefit.
Google Integrates Gemini Into Chrome To Enable Agentic Browsing
Google has announced that it has begun integrating its Gemini artificial intelligence system directly into the Chrome browser as part of a wider effort to turn everyday web browsing into a more automated and assistant-led experience.
Why?
Chrome remains the world’s most widely used web browser, accounting for over 70 per cent of global desktop usage (StatCounter), and Google’s latest changes seem to reflect growing pressure from AI-focused rivals offering built-in assistants and automated task handling. For example, over the past year, browsers and browser features from Microsoft, OpenAI-backed projects, Perplexity and Opera have increasingly promoted AI agents as a way to reduce manual searching, form filling and comparison across multiple websites.
Rather than replacing Chrome or launching a separate AI browser, Google is embedding Gemini directly into the existing product. The aim is to reshape how users interact with websites while preserving Chrome’s central role in daily computing and maintaining continuity for its vast installed user base.
Moving Gemini From A Floating Tool To A Built-In Side Panel
Google first added Gemini to Chrome in 2024, but its early implementation was limited. The assistant appeared in a floating window that sat apart from the main browsing experience and offered only limited contextual awareness. This latest update replaces that approach with a side panel that sits alongside web pages and can be opened across tabs.
According to Parisa Tabriz, Vice President of Chrome, the intention is to allow users to work across the web without losing context. In a Google blog post announcing the changes, she wrote that the new side panel “can help you save time and multitask without interruption” by letting users “keep your primary work open on one tab while using the side panel to handle a different task”.
This design allows Gemini to analyse the page currently being viewed, reference other open tabs, and respond to questions without forcing users to break their workflow. When several tabs originate from the same site or topic, such as product listings or reviews, Gemini can treat them as a related group, making it possible to summarise information or compare options across pages.
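As a loose analogy for treating same-site tabs as a related group (this is not Chrome’s or Gemini’s actual implementation), the grouping step can be sketched as bucketing open-tab URLs by host; the URLs below are invented examples.

```python
# Illustrative sketch only: grouping open-tab URLs by site, loosely
# analogous to how related tabs could be treated as one group for
# summarising or comparing. Not Chrome's actual mechanism.
from collections import defaultdict
from urllib.parse import urlparse

def group_tabs_by_site(urls: list[str]) -> dict[str, list[str]]:
    groups: dict[str, list[str]] = defaultdict(list)
    for url in urls:
        host = urlparse(url).netloc  # e.g. "example-shop.com"
        groups[host].append(url)
    return dict(groups)

tabs = [
    "https://example-shop.com/laptops/model-a",
    "https://example-shop.com/laptops/model-b",
    "https://reviews.example.org/best-laptops-2025",
]
grouped = group_tabs_by_site(tabs)
print(grouped)
```

Once tabs are grouped like this, an assistant can summarise or compare within a group rather than treating every page in isolation.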
When and Where?
The update is rolling out now to Chrome users in the US on Windows, macOS and Chromebook Plus devices, extending availability beyond the platforms supported during earlier testing.
Built On Gemini 3 And Multimodal Capabilities
The new Chrome features are built on Gemini 3, which Google describes as its most capable AI model so far. Gemini is a multimodal system, meaning it can work with text, images and other structured inputs rather than relying solely on written prompts.
Google says this capability supports its aim of making Chrome more useful during complex tasks. In its announcement, the company described Gemini in Chrome as “an assistant that helps you find information and get things done on the web easier than ever before”, particularly when tasks involve multiple steps or different forms of content.
Multimodal understanding also enables Gemini to work directly with images viewed in the browser. For example, through integration with Google’s Nano Banana tool, users can modify images without downloading files or opening separate applications. Google appears to be positioning this as a practical feature for tasks such as visual planning or transforming information into graphics while remaining within the same browsing session.
Tighter Integration With Google Services
A key element of Google’s approach is deeper integration between Chrome and its wider ecosystem of services. Gemini in Chrome supports Connected Apps, including Gmail, Calendar, YouTube, Maps, Google Shopping and Google Flights.
With user permission, Gemini can reference information from these services to help complete tasks. In its announcement, Google highlighted examples such as travel planning, where Gemini can locate event details from an email, check flight options, and draft a message to colleagues about arrival times without requiring the user to move between applications.
Google has also confirmed that its Personal Intelligence feature will be brought to Chrome in the coming months. This feature allows Gemini to retain context from previous interactions and tailor responses over time. Tabriz stated that users remain in control, writing that people can opt in and choose whether to connect apps, with the ability to disconnect them at any time.
From Autofill To Agentic Browsing
The most substantial development is probably Google’s move towards agentic browsing, which refers to software systems capable of carrying out tasks across websites on a user’s behalf. For subscribers to Google AI Pro and AI Ultra in the United States, Chrome now includes a feature called auto browse.
Google is presenting auto browse as an extension of existing automation rather than a replacement for user involvement. In the blog post, Tabriz wrote, “For years, Chrome autofill has handled the small stuff, like automatically entering your address or credit card, to help you finish tasks faster.” She added that Chrome is now moving “beyond simple tasks to helping with agentic action”.
Auto browse is designed to handle multi-step workflows such as researching travel options, collecting documents, filling in online forms, requesting quotes, or managing subscriptions. Google says early testers have used it to schedule appointments, assemble tax documents, file expense reports and renew driving licences.
More advanced scenarios combine multimodal input and commerce. For example, Google describes cases where Gemini can identify items shown in an image, search for similar products online, add them to a shopping basket, apply discount codes and remain within a set budget. When sensitive actions are involved, such as signing in or completing purchases, auto browse pauses and asks the user to take control.
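The budget-constrained behaviour described above can be sketched in outline. Everything below is invented for illustration: the item names, prices and discount are hypothetical, and this is not Google’s auto browse implementation; it only mirrors the described control flow, including the pause for human confirmation before any purchase.

```python
# Hypothetical sketch of budget-constrained basket building, with an
# explicit stop before the sensitive checkout step. All data invented.

def build_basket(items: list[tuple[str, float]], budget: float,
                 discount: float = 0.0) -> dict:
    """Greedily add items (cheapest first) while staying within budget."""
    basket, total = [], 0.0
    for name, price in sorted(items, key=lambda it: it[1]):
        discounted = price * (1 - discount)
        if total + discounted <= budget:
            basket.append(name)
            total += discounted
    # Sensitive step: an agent should pause here and hand control back
    # to the user rather than completing the purchase itself.
    return {"items": basket, "total": round(total, 2),
            "needs_user_confirmation": True}

result = build_basket(
    items=[("desk lamp", 30.0), ("office chair", 120.0), ("monitor", 90.0)],
    budget=150.0,
    discount=0.10,  # e.g. a 10% discount code
)
print(result)
```

The design choice worth noting is the confirmation flag: the agent never finishes the transaction, matching Google’s stated approach of pausing before purchases or sign-ins.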
Google has stated that its AI models are not exposed to saved passwords or payment details, even when Chrome’s password manager is used to support these actions. The company says auto browse is designed to request explicit confirmation before completing actions such as purchases or social media posts.
Commercial Context And Industry Resistance
Google’s decision to deepen Gemini’s role in Chrome comes amid intensifying competition around AI-driven browsing and automation. Microsoft, for example, has integrated similar capabilities into Edge, while newer browsers have been designed from the outset around the use of AI agents.
There is also increasing interest in agent-led online commerce. For example, management consultancy McKinsey has projected that agentic commerce for business-to-consumer retail in the United States could reach $1 trillion by 2030. Google has indicated that Chrome will support its Universal Commerce Protocol, an open standard developed with companies including Shopify, Etsy, Wayfair and Target, which is intended to allow AI agents to carry out transactions in a structured and authorised way.
At the same time, some websites and platforms have begun limiting automated access or requiring explicit human review for transactions. Google appears to be positioning auto browse as a more controlled approach, with human confirmation built into sensitive steps, as it explores how agentic browsing can operate within existing legal and commercial frameworks.
What Does This Mean For Your Business?
Google’s decision to embed Gemini directly into Chrome seems to point to a future where the browser becomes an active participant in work rather than a passive gateway to information. For users, this could concentrate research, comparison and administrative tasks inside a single interface that already sits at the centre of daily digital activity. The immediate impact is likely to be incremental rather than transformational, with benefits most visible in time saved on repetitive or fragmented tasks, balanced against ongoing limits around accuracy, intent recognition and website compatibility.
For UK businesses, the changes could have practical implications across productivity, procurement and digital workflows. For example, tools such as auto browse could reduce the time staff spend on routine administration, travel planning, expense management and supplier research, particularly for small and medium sized organisations without dedicated support teams. At the same time, businesses that rely on web traffic, online forms or e-commerce will need to consider how agent-led browsing interacts with existing processes, security controls and customer journeys, especially as automated interactions become more common.
Website operators, retailers and platforms face a more complex picture, weighing potential efficiency gains against concerns over loss of control, while regulators and standards bodies are paying closer attention to how automated agents access data and complete transactions. Google’s emphasis on user confirmation, permissions and open standards reflects these pressures, while also highlighting that agentic browsing remains an evolving area. Chrome’s scale gives Google a strong position in shaping how this develops, although wider adoption and trust are likely to depend on how reliably these tools perform in real-world conditions rather than on their technical ambition alone.
AI-Written Virus Marks a New Step Towards Lab-Designed Life
A research team in California has demonstrated that artificial intelligence can design a fully synthetic virus from scratch, raising questions about how life itself may be engineered in the future.
What Has Been Developed?
The work centres on a new synthetic virus known as Evo-Φ2147, created using a generative AI model developed by researchers at Stanford University (in Silicon Valley, California) in collaboration with the Arc Institute and the University of California, Berkeley. The research was led by Brian Hie, who runs Stanford’s Laboratory of Evolutionary Design and works at the intersection of machine learning and biology.
Evo-Φ2147 was generated using Evo 2, an advanced version of Evo, a large language model for DNA. Instead of predicting words or images, Evo analyses genetic sequences and learns the underlying patterns that govern how DNA, RNA and proteins function together inside living organisms. Once trained, it can generate entirely new genetic sequences that have never existed in nature.
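To make the “language model for DNA” framing concrete, here is a deliberately tiny sketch: a first-order Markov model over nucleotides that learns transition frequencies from a training string and samples a novel sequence. Evo 2 is a vastly larger deep learning model trained on real genomes; this toy (with an invented training sequence) only conveys the next-token idea.

```python
# Toy illustration of next-token modelling over DNA. Evo 2 is a large
# genomic foundation model; this Markov sketch shows the framing only.
import random
from collections import Counter, defaultdict

def train_transitions(sequence: str) -> dict[str, Counter]:
    """Count how often each base follows each other base."""
    counts: dict[str, Counter] = defaultdict(Counter)
    for a, b in zip(sequence, sequence[1:]):
        counts[a][b] += 1
    return counts

def generate(counts: dict[str, Counter], start: str, length: int,
             rng: random.Random) -> str:
    """Sample a new sequence one nucleotide at a time."""
    out = [start]
    for _ in range(length - 1):
        nxt = counts.get(out[-1])
        if not nxt:
            break
        bases, weights = zip(*nxt.items())
        out.append(rng.choices(bases, weights=weights)[0])
    return "".join(out)

rng = random.Random(0)
training = "ATGCGATACGCTTAGGCTAATGCGTACGATCGATGCATGC"  # invented example
model = train_transitions(training)
novel = generate(model, start="A", length=20, rng=rng)
print(novel)
```

The generated string has never existed in the training data, which is the (much scaled-down) sense in which a DNA language model can emit entirely new genetic sequences.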
Asking The Evo 2 Model To Make The Virus
As a proof of concept, the researchers asked Evo 2 to design new bacteriophages, viruses that infect bacteria rather than humans. The model then generated 285 complete viral genomes, all designed computationally rather than derived from natural viruses. Sixteen of those synthetic viruses were then tested in the lab and shown to successfully infect and kill Escherichia coli, a bacterium responsible for serious infections and a growing problem due to antibiotic resistance.
Evo-Φ2147 emerged as one of the most effective designs and, while it remains simple by biological standards, containing just 11 genes, it demonstrated that an AI-designed genome could function inside a living cell exactly as intended.
Why Evo-Φ2147 Matters Scientifically
What makes Evo-Φ2147 scientifically so significant is not the virus itself, but the method used to create it. For the first time, researchers have shown that an AI system can design a complete, functional genome at once, rather than tweaking or modifying existing biological sequences.
In the paper published in Science, the authors describe Evo as “a genomic foundation model that enables prediction and generation tasks from the molecular to the genome scale.” Put simply, Evo is an AI system that can understand DNA, predict what genetic changes will do, and design new genetic code, from tiny DNA parts right up to almost whole genomes (the complete set of genetic instructions inside an organism).
Trained
The model was trained on 2.7 million prokaryotic and phage genomes, representing around 300 billion DNA nucleotides. This scale allowed Evo to learn how tiny changes at the level of individual DNA bases can affect the fitness and behaviour of an entire organism.
The researchers emphasised that Evo operates at single-nucleotide resolution (at the level of individual DNA letters) and across very long sequences, up to 131,000 DNA bases at once. This matters because even the simplest microbes contain millions of base pairs, and previous AI tools struggled to capture long-range genetic interactions.
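A quick back-of-envelope check, using only the figures reported above, shows why that window size matters: the average genome in the training set fits comfortably inside it.

```python
# Arithmetic on the reported figures: 300 billion nucleotides across
# 2.7 million genomes implies an average genome well inside the
# 131,000-base context window.
total_nucleotides = 300_000_000_000
num_genomes = 2_700_000
context_window = 131_000

avg_genome_len = total_nucleotides / num_genomes
print(f"Average training genome: ~{avg_genome_len:,.0f} bases")
print(f"Fits in one context window: {avg_genome_len <= context_window}")
```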
In the study, Evo was able to generate DNA sequences exceeding one million base pairs that showed realistic genome-like structure, including gene clusters and regulatory patterns seen in natural organisms. The researchers said that Evo “learns both the multimodality of the central dogma and the multiscale nature of evolution,” meaning it understands how DNA, RNA and proteins interact across molecular, cellular and organism-wide levels.
Evo-Φ2147 demonstrates that this understanding is not purely theoretical. It translated into a working biological system that could infect bacteria and replicate within them.
Is It Really “Life”?
Describing Evo-Φ2147 as life is a little controversial. It does behave like a virus, which sits in a grey area between living and non-living systems: it contains genetic information, interacts with a host, and replicates using cellular machinery. However, it cannot reproduce independently and lacks the complexity associated with autonomous life.
The researchers themselves are being quite cautious, perhaps because Evo-Φ2147 does not meet most biological definitions of life, and its genome is vastly simpler than even the smallest free-living organisms. For context, the smallest known bacterial genomes contain around 580,000 DNA base pairs, while the human genome runs to roughly three billion base pairs and around 20,000 genes.
What makes this situation a bit different is that the genome was not discovered or evolved through natural selection, but was written intentionally by an AI system. British molecular biologist Adrian Woolfson described this as a turning point, arguing that evolution has historically been blind, while genome-scale AI introduces foresight and design into biology for the first time.
This is why some researchers view Evo-Φ2147 as an early step towards lab-grown life, even if it does not yet qualify as life in a strict sense.
How This Fits Into The World of Synthetic Biology
Synthetic biology has long aimed to redesign living systems, but progress has typically relied on modifying existing organisms. Evo represents a move from editing life to generating it computationally.
Earlier advances such as CRISPR gene editing allowed scientists to cut and paste DNA with precision. Evo goes further by designing entire genetic systems at once. In the Science paper, the authors reported that Evo successfully generated novel CRISPR-Cas systems and transposable elements that were validated experimentally, marking “the first examples of protein-RNA and protein-DNA codesign with a language model.”
This essentially places Evo within a growing movement to treat biology as an information science. DNA becomes a form of code, evolution a dataset, and AI a design engine capable of exploring biological possibilities far faster than natural processes or traditional lab work.
The researchers have explicitly framed Evo as a foundation model, comparable to large language models in AI, designed to underpin many downstream applications rather than a single use case.
Ethical, Security and Governance Questions
The ability to design and generate complete genetic systems using AI also raises legitimate concerns about misuse, because the same tools that can create beneficial biological systems could, in theory, be applied in harmful ways. The Evo team addressed this directly by excluding viruses that infect humans and other eukaryotes from the training data.
For example, in the published Science paper, the authors warned that genome-scale AI “simultaneously raises biosafety and ethical considerations” and called for “clear, comprehensive guidelines that delineate ethical practices for the field.”
They pointed to frameworks such as those developed by the Global Alliance for Genomics and Health as a starting point, stressing the need for transparency, international cooperation and shared responsibility.
Importantly, the researchers appear to have avoided sensationalism in documenting their work. Evo does not enable the creation of dangerous organisms overnight, and its outputs remain constrained by biological reality, laboratory validation, and existing safety controls. The risks are real, but incremental rather than immediate.
Its Value to Humanity (and Business)
The most immediate promise of this discovery lies in medicine and biotechnology. For example, AI-designed bacteriophages could offer new ways to fight antibiotic-resistant infections, a growing global health threat. The researchers noted that, during the COVID-19 pandemic, similar tools could have dramatically reduced vaccine development timelines.
Beyond healthcare, genome-scale design could also influence agriculture, materials science, and environmental remediation. The Stanford team highlighted potential applications such as reprogramming microbes to improve photosynthesis, capture carbon, or break down microplastics.
For businesses, this could signal a future where biological design cycles become faster, more predictable, and more software-driven. Companies working in pharmaceuticals, bio-manufacturing, and sustainable materials are likely to be among the earliest beneficiaries, while regulators and insurers will face new questions about oversight and risk.
Challenges and Questions
Despite the technical breakthrough, significant challenges remain. Evo’s generated genomes still lack many features found in natural organisms, including full sets of essential genes and robust regulatory systems. This has led the researchers to describe current genome-scale outputs as “blurry images” of life that capture high-level structure while missing fine-grained detail.
Critics also argue that calling such systems a step towards creating life risks overstating what has actually been achieved. It is worth noting here that Evo accelerates design, but it does not eliminate the complexity, uncertainty, and failure rates inherent in biology.
Other critics have pointed to possible governance gaps, particularly around who decides what kinds of genomes should or should not be designed. As Woolfson put it, society will need to decide “who is going to define the guard rails” as these tools become more capable.
What Evo-Φ2147 ultimately represents is not the arrival of artificial life but a clear signal that the boundary between computation and biology is rapidly dissolving, with consequences that science, industry, and society are only beginning to understand.
What Does This Mean For Your Business?
This research shows that AI is no longer just analysing biology but beginning to shape it, turning genome design into something closer to a computational process that is then tested in the lab. Evo-Φ2147 does not redefine life, but it does change how genetic systems can be created and refined, replacing slow trial-and-error approaches with AI-driven design followed by targeted validation.
The wider impact of this capability lies in what it could unlock, because faster genome design has the potential to accelerate medical research, support the development of new treatments, and shorten response times during future health crises, while also increasing the importance of clear ethical oversight and realistic safety governance. For UK businesses operating in life sciences, pharmaceuticals, and sustainable manufacturing, this development points towards shorter development cycles and a growing reliance on advanced computing and biological expertise working together.
Taken together, Evo-Φ2147 highlights how quickly the boundary between computation and biology is fading, placing responsibility for how these tools are used not just with researchers, but with regulators, businesses, and wider society that will ultimately shape where genome-scale AI is allowed to go next.
Google DeepMind Opens Project Genie for Real-Time AI World Creation
Google DeepMind has opened access to Project Genie, an experimental world-building AI tool, as it looks to gather real-world feedback and accelerate progress on the world models it believes are central to the path towards artificial general intelligence.
What Is Project Genie and How Was It Built?
Project Genie is a web-based experimental research prototype developed by Google DeepMind that allows users to generate and explore interactive virtual worlds using text prompts or images. Technically, it is not a standalone model but a front-end experience built on top of several of DeepMind’s most advanced systems.
At its core is Genie 3, DeepMind’s latest general-purpose world model, which generates environments frame by frame in real time as users move through them. This is combined with Nano Banana Pro, an image generation model used to sketch and refine the initial appearance of a world, and Gemini, which handles higher-level reasoning and prompt interpretation. Together, these components allow Project Genie to turn a static description or image into a navigable environment that responds dynamically to user actions.
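As a toy analogy for the frame-by-frame, action-conditioned loop described above (not DeepMind’s implementation), each new “frame” can be thought of as a function of the previous frame and the user’s latest action. Here a frame is reduced to a player position on a grid, purely to show the control flow.

```python
# Deliberately tiny sketch of an autoregressive, action-conditioned
# world loop. Genie 3 generates rich video frames with a neural
# network; here a "frame" is just a position, to show the loop shape.

def next_frame(frame: dict, action: str) -> dict:
    """World-model stand-in: advance the state one step given an action."""
    dx, dy = {"up": (0, -1), "down": (0, 1),
              "left": (-1, 0), "right": (1, 0)}[action]
    return {"x": frame["x"] + dx, "y": frame["y"] + dy}

frame = {"x": 0, "y": 0}                    # initial state from the prompt
for action in ["right", "right", "down"]:   # user input at each step
    frame = next_frame(frame, action)       # world generated frame by frame
print(frame)
```

The key property this mirrors is that the world is not pre-built: each state exists only once the model produces it in response to what the user just did.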
How Do You Use It?
Practically, users begin by creating what DeepMind calls a “world sketch”. This involves prompting the system with a description of an environment and a character, choosing a first- or third-person perspective, and optionally refining the generated image before entering the world. Once inside, the environment expands in real time as the user moves, with the model simulating basic physics, lighting, and object behaviour. Users can also remix existing worlds, explore curated examples, or download videos of their explorations.
Project Genie was built by DeepMind researchers including Jack Parker-Holder and Shlomi Fruchter, both of whom have been closely involved in the development of Genie 3 and earlier world model research.
DeepMind
Google DeepMind is Google’s dedicated AI research lab, formed through the merger of DeepMind and Google Brain, and is focused on developing general-purpose AI systems. Its long-term stated ambition is to build AI that can reason, plan, and act across the full complexity of the real world, rather than being limited to narrow tasks.
Genie 3 Previewed Back In August 2025
DeepMind first previewed Genie 3 as a research model back in August 2025, positioning it as a major step forward in interactive world simulation. Five months later, the decision to open Project Genie to a wider audience appears to reflect a deliberate transition from closed research testing to broader, real-world experimentation.
In its own recent announcement, Google stated that “the next step is to broaden access through a dedicated, interactive prototype focused on immersive world creation.” Access is currently limited to Google AI Ultra subscribers in the United States aged 18 and over, reinforcing that this is still a controlled research rollout rather than a mass-market launch.
Why Now?
The timing matters. World models are moving from abstract research concepts into systems that can be directly experienced and evaluated by users. By opening access now, DeepMind hopes to collect feedback, usage patterns, and behavioural data that are difficult to obtain through internal testing alone, while also demonstrating tangible progress in a competitive and fast-moving field.
What Genie Can Do and Who It’s Aimed At
Genie 3 enables real-time interaction at around 24 frames per second, with worlds that remain visually consistent for several minutes. Unlike traditional video generation models that produce a fixed sequence, Genie 3 generates each new frame based on what has already happened and how the user moves, allowing for exploration rather than playback.
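That frame-by-frame approach can be sketched in general terms. The snippet below is a conceptual illustration only, not DeepMind’s implementation: `next_frame` and `explore` are hypothetical names, and the learned world model is replaced by a trivial deterministic stand-in so the control flow is runnable.

```python
# Conceptual sketch of an auto-regressive world-model loop: each
# new frame is conditioned on everything generated so far plus the
# user's latest action, rather than being part of a fixed sequence.

def next_frame(history, action):
    """Toy stand-in for a learned world model: derives the next
    'frame' (here just a position) from the last frame and the
    user's action."""
    last = history[-1] if history else {"x": 0, "y": 0}
    dx, dy = {"left": (-1, 0), "right": (1, 0),
              "forward": (0, 1), "back": (0, -1)}[action]
    return {"x": last["x"] + dx, "y": last["y"] + dy}

def explore(actions):
    """Generate a navigable 'world' one frame at a time, so each
    frame depends on what has already happened."""
    history = []
    for action in actions:
        history.append(next_frame(history, action))
    return history

frames = explore(["forward", "forward", "right", "back"])
print(frames[-1])  # final simulated position after four actions
```

The key property the sketch captures is that there is no pre-rendered video to play back: remove or change any action and every subsequent frame changes with it.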
Project Genie is aimed at several overlapping audiences. In the near term, it is most accessible to creators, researchers, and technically curious users who want to experiment with AI-generated environments. The tool is particularly well suited to whimsical and stylised worlds, including animated, illustrative, or fantastical settings.
Beyond creative exploration, DeepMind also appears to see some real value in Genie 3 for education, simulation, and research. World models can be used to train and test embodied agents (AI systems designed to act within an environment), including robots or software agents that move and make decisions. Instead of learning in the real world, where training can be expensive, slow, or risky, these agents can practise inside simulated environments. For example, an AI-controlled robot can learn how to navigate difficult terrain or react to unexpected situations without any physical risk or real-world consequences.
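The idea of practising in simulation rather than the real world can be illustrated with a deliberately tiny example. Nothing below comes from DeepMind; the one-dimensional environment and the trial-and-error loop are hypothetical stand-ins for the kind of simulated training the article describes, where failed attempts cost nothing.

```python
import random

def simulate_episode(policy, goal=5, max_steps=50, seed=None):
    """Run one trial in a toy 1-D 'world': the agent starts at 0
    and tries to reach the goal position. Mistakes here have no
    physical consequences, which is the point of simulation."""
    rng = random.Random(seed)
    pos = 0
    for step in range(max_steps):
        pos += policy(pos, rng)
        if pos == goal:
            return step + 1  # steps taken to succeed
    return None  # this episode failed; simply run another

def random_policy(pos, rng):
    # An untrained agent stumbling around at zero real-world risk.
    return rng.choice([-1, 1])

def greedy_policy(pos, rng):
    # A 'trained' agent that always moves towards the goal.
    return 1

print(simulate_episode(greedy_policy))  # reaches the goal in 5 steps
```

Scaling this idea up (richer simulated worlds, learned rather than hand-written policies) is what world models like Genie 3 are meant to enable for robots and software agents.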
DeepMind described world models as systems that “simulate the dynamics of an environment, predicting how they evolve and how actions affect them,” framing Genie 3 as part of a broader capability rather than a single product feature.
How Project Genie Fits Into DeepMind’s AGI Strategy
World models now appear to occupy a central position in DeepMind’s vision for AGI (artificial general intelligence), meaning AI systems that can understand, learn, and reason across a wide range of tasks rather than being limited to a single narrow function. The lab has argued that this kind of intelligence requires an internal model of the world that supports planning, prediction, and counterfactual reasoning. In practical terms, this means being able to ask “what happens if” and simulate possible outcomes before acting.
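In code terms, asking “what happens if” amounts to rolling candidate actions through a model of the world and comparing predicted outcomes before committing to one. The planner below is a minimal hypothetical sketch of that pattern, not anything DeepMind has published; `world_model` stands in for a learned dynamics model.

```python
def world_model(state, action):
    """Hypothetical learned dynamics model: predicts the next
    state. Here a trivial 1-D version where state is a position
    and the action is a step."""
    return state + action

def plan(state, goal, actions=(-1, 0, 1)):
    """Counterfactual planning: simulate each candidate action
    with the world model and pick the one whose predicted outcome
    lands closest to the goal, before acting for real."""
    return min(actions, key=lambda a: abs(world_model(state, a) - goal))

print(plan(state=0, goal=3))  # 1: predicted best step is towards the goal
```

A real system would roll the model forward over many steps and many candidate action sequences, but the structure (predict, compare, then act) is the same.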
Genie 3 builds on earlier models such as Genie 1 and Genie 2, but adds real-time interaction and longer-horizon consistency. This allows agents to execute longer sequences of actions and pursue more complex goals, which DeepMind sees as essential for general-purpose intelligence.
The company has already demonstrated Genie 3 being used to generate environments for SIMA, its generalist agent for 3D virtual settings. This reinforces that Project Genie is not the end goal, but a way to expose and test the underlying capabilities that future agents will rely on.
The Competitive Landscape and Why Timing Matters
The release of Project Genie comes as competition around world models is intensifying, with several AI labs and startups racing to build systems that go beyond static generation and towards interactive simulation.
For example, Runway has recently introduced its own world model concepts alongside its video tools. Also, World Labs, founded by Fei-Fei Li, has launched Marble as a commercial product aimed at interactive environments. Yann LeCun’s AMI Labs has also signalled a strong focus on world modelling as a foundation for intelligence.
By opening access now, DeepMind is hoping to position itself as a leader not just in theory, but in demonstrable, hands-on systems. This visibility matters for attracting talent, shaping industry standards, and influencing how developers and researchers think about the future of AI simulation.
Limitations, Guardrails, and Why This Is Still a Prototype
Despite its capabilities, Project Genie is explicitly framed as an experimental research prototype. Usage sessions are currently limited to 60 seconds of world generation and navigation, reflecting the heavy computational cost of auto-regressive real-time models.
With this in mind, Google has acknowledged several known limitations. For example, generated worlds may not closely match prompts or real-world physics, characters can be difficult to control, and latency can affect interaction. Some Genie 3 capabilities announced in August, such as promptable world events that change environments mid-exploration, are not yet available in Project Genie.
DeepMind has also been quick to emphasise responsible development. For example, safety guardrails restrict copyrighted content, realistic depictions of certain subjects, and other sensitive material. The company stated that “as with all our work towards general AI systems, our mission is to build AI responsibly to benefit humanity.”
These constraints help explain why Project Genie is not being positioned as a consumer product or game platform, but as a testbed designed to surface technical weaknesses and user expectations before wider deployment.
Entertainment Today, Embodied Agents Tomorrow
In the short term, Project Genie’s most obvious use is entertainment and creative experimentation. Its strengths in stylised, animated, and imaginative environments make it well suited to playful exploration and concept development.
However, longer term, DeepMind’s ambitions extend far beyond games. World models offer a scalable way to train embodied agents, including robots and autonomous systems, in simulated environments that mirror the complexity of the real world. This could reduce costs, improve safety, and enable faster iteration across industries such as logistics, manufacturing, and healthcare.
The same technology could also support training, education, and scenario planning, where exploring “what if” situations is valuable.
Business and Industry Implications
For Google, Project Genie reinforces its position at the frontier of advanced AI research and supports the premium value proposition of its AI Ultra subscription. It also strengthens Google’s influence over how world models are commercialised and evaluated.
For competitors, the move appears to raise the bar for what qualifies as a leading-edge AI system, increasing pressure to demonstrate interactive, real-time capabilities rather than static outputs.
For businesses and developers, Project Genie offers an early glimpse into tools that could reshape simulation, training, design, and creative workflows. At the same time, its limitations highlight that world models are still an emerging technology with unresolved challenges around realism, control, and cost.
For the wider AI market, the release highlights a broader transition from generative content towards generative environments, where interaction and agency matter as much as visual fidelity.
Challenges and Criticisms
Some key challenges remain for world models like Genie 3, particularly around scalability, realism, and controllability. For example, auto-regressive world generation is computationally expensive, which makes long-duration or large-scale simulations difficult to run. Critics have also questioned how quickly these systems can achieve reliable real-world accuracy, especially for safety-critical applications where errors or inconsistencies could have serious consequences.
There are also broader concerns around data use, intellectual property, and the environmental cost of large-scale compute. DeepMind’s cautious, limited rollout reflects an awareness of these issues, even as it pushes the technology forward.
Project Genie, as DeepMind presents it, is not yet a finished destination but a visible step in a much longer journey towards AI systems that can understand and navigate the world in ways that begin to resemble human reasoning.
What Does This Mean For Your Business?
Project Genie shows how world model research is now being tested outside the lab, with DeepMind deliberately exposing early capabilities to real users in order to gather feedback that research alone cannot provide. The limited access, short session lengths, and strict guardrails make it clear that this is about learning and validation rather than product launch.
For UK businesses, the immediate value is not in using Project Genie directly, but in what it signals. Interactive simulation has long-term relevance for training, design testing, robotics, and scenario planning, particularly in sectors where real-world experimentation is expensive or risky. As these models improve, they could become a practical tool for reducing uncertainty before decisions are made in physical environments.
For the wider AI market, the release raises expectations around what advanced AI systems should be able to do. The focus appears to be shifting from static content generation to interaction, consistency, and decision-making over time. Project Genie does not solve those challenges yet, but it does show more clearly how DeepMind is approaching them and increases pressure on competitors pursuing similar world model capabilities.
Company Check : Tesla Repositions Its Future Around Robots Rather Than Cars
Tesla confirmed it was winding down parts of its car business as Elon Musk publicly repositioned the company around humanoid robots, artificial intelligence and autonomy rather than electric vehicles alone.
Tesla Drops Model S And X As Focus Shifts Beyond Cars
Tesla has said it will end production of the Model S and Model X and repurpose the manufacturing space at its Fremont, California plant to build its Optimus humanoid robots, marking the clearest signal yet that the company’s future strategy is moving away from premium car models.
Speaking on Tesla’s latest earnings call, Elon Musk said the space currently used to build the two vehicles would be converted into an Optimus production facility, with a long-term ambition of producing up to one million robots a year at the site. He described the change as part of Tesla’s broader shift towards what the company now calls “physical AI”.
The Model S and Model X were once central to Tesla’s rise, helping establish the brand in the early and mid-2010s. In recent years, however, both vehicles had become low-volume products compared with the Model 3 and Model Y, which now account for the majority of Tesla’s car sales.
Tesla said it would continue supporting existing Model S and Model X customers despite the end of production.
Why Tesla’s Core EV Business Came Under Pressure
The strategic change of direction came after a difficult year for Tesla’s automotive business, shaped not only by market conditions but also by growing scrutiny of Elon Musk’s leadership and public profile. The company reported that total revenue fell 3 per cent in 2025, its first annual decline, while vehicle deliveries dropped by about 9 per cent to roughly 1.64 million cars worldwide.
The slowdown was particularly visible at the end of the year, with Tesla saying deliveries fell around 16 per cent year on year in the fourth quarter, reflecting weaker demand, intensifying competition and the impact of reduced government incentives in the United States. Analysts also pointed to rising unease among parts of Tesla’s traditional customer base following Musk’s increasingly high-profile political involvement, which included public support for US President Donald Trump and a senior cost-cutting role in his administration.
During the same period, China-based BYD overtook Tesla as the world’s largest seller of battery electric vehicles by volume, reporting more than 2.25 million BEV sales in 2025, up almost 28 per cent year on year. Chinese manufacturers including BYD, Geely and MG continued to pressure Western carmakers by offering a wider range of lower-priced models, while Tesla faced criticism for a relatively ageing vehicle line-up and a slower pace of major new car launches.
Tesla’s earnings update showed that while automotive revenue weakened, other parts of the business performed more strongly, with energy generation and storage revenue rising about 25 per cent year on year in the fourth quarter and services revenue increasing around 18 per cent, highlighting areas of growth beyond car sales as the company recalibrated its strategy.
How Musk Reframed Tesla’s Future Around Robots
Against that backdrop, Musk has been framing Tesla as an AI and robotics company rather than a car manufacturer. For example, in investor materials, Tesla described 2025 as a pivotal year in its transition from a hardware-led business to one centred on artificial intelligence deployed in the physical world.
Optimus
Optimus, Tesla’s humanoid robot programme first unveiled in 2021, has now become central to that narrative. Tesla said the robot is already performing limited tasks inside its factories, such as sorting objects and handling materials, though it remains far from Musk’s long-term vision of a general-purpose household robot.
In fact, Musk has repeatedly claimed Optimus could eventually perform a wide range of jobs, from factory work to domestic tasks, and has described it as more significant to Tesla’s future than vehicles over time. At the World Economic Forum in January, he said Tesla would probably begin selling humanoid robots to customers by the end of 2027, once safety and reliability reached an acceptable level.
Tesla told investors it plans to reveal a third-generation Optimus design in early 2026, describing it as the first version intended for mass production, with manufacturing expected to begin before the end of that year.
The Financial Stakes Behind The Robot Push
The move towards robotics also carries major financial implications for Tesla and Musk personally. For example, Tesla disclosed it had invested $2bn in Musk’s AI start-up xAI, while also signalling a sharp increase in capital spending, with guidance pointing to more than $20bn of investment in 2026.
That spending is expected to support multiple projects, including Optimus production, robotaxi development, battery manufacturing and AI infrastructure.
It’s worth noting here that Musk’s much publicised record-breaking pay package, approved by shareholders in late 2025, is also closely tied to Tesla delivering new growth drivers beyond car sales. Under the terms of the deal, Musk must significantly increase Tesla’s market value over the next decade, with Optimus and autonomous services positioned as central to that ambition.
Tesla has said its long-term targets include selling up to one million humanoid robots over ten years, a goal Musk has described as achievable if production and costs scale as planned.
Why Humanoid Robots Are A Riskier Bet Than EVs
Despite Tesla’s confidence, many experts view humanoid robots as one of the most difficult challenges in modern engineering. Unlike industrial robots designed for controlled environments, humanoids must combine balance, dexterity, perception and decision-making while operating safely around people in unpredictable settings.
Estimates of the potential market vary widely. Analysts at McKinsey have suggested a base-case market for general-purpose robotics of around $370bn by 2040, while other banks have forecast multi-trillion-dollar outcomes over longer timeframes if humanoids become widely adopted.
Supporters argue Tesla has relevant advantages, including experience in mass manufacturing, vertical integration across hardware and software, and expertise in motors and battery systems. Tesla has said those strengths allow it to iterate designs quickly and reduce costs as production scales.
However, critics say that the competitive landscape is far more crowded than when Tesla entered the EV market. For example, more than 90 companies are now developing humanoid robots, including established robotics firms, well-funded startups and technology giants supplying chips and AI platforms.
Questions have also been raised about whether consumer-facing humanoid robots will ever prove practical or affordable at scale, and whether Tesla’s ambitious timelines repeat a pattern seen in previous Musk-led projects, where public targets were missed or delayed.
Political Headwinds And Brand Risk
Tesla’s shift has unfolded alongside growing political and reputational challenges. As noted earlier, Musk’s high-profile political involvement, including DOGE and his support for US President Donald Trump, has polarised public opinion and triggered protests and vandalism at Tesla dealerships in several countries.
Some investors and analysts have questioned whether that controversy could affect demand not only for Tesla’s cars, but also for any future consumer robot products, particularly if Optimus is positioned for home use.
Musk has acknowledged scepticism around Tesla’s ambitions but has maintained that the company is pursuing what it believes are the most important long-term technological opportunities, even if progress takes longer than expected.
What Does This Mean For Your Business?
Tesla’s decision to scale back parts of its car business in favour of robotics and AI signals a clear attempt to reset its long-term growth strategy. The move places Optimus and autonomous systems at the centre of Tesla’s future valuation, even though both remain technically complex, capital intensive and commercially unproven at scale.
For investors, suppliers and regulators, this has reframed Tesla less as a cyclical carmaker and more as a long-horizon technology bet, with outcomes likely to hinge on execution rather than vision alone. Success will require Tesla to solve problems in robotics that the wider industry has struggled with for decades, while managing near-term pressure on its automotive revenues and brand.
For UK businesses, the implications are more practical than speculative. For example, if humanoid robots move beyond pilot use in factories, logistics and warehousing, they could reshape labour planning, automation strategies and capital investment decisions over the next decade. At the same time, the uncertainty around timelines and costs reinforces the need for caution, with most analysts expecting meaningful deployment to arrive gradually rather than through rapid disruption.
More broadly, Tesla’s pivot shows how closely modern technology companies are now shaped by leadership choices, political context and investor expectations, not just product roadmaps. Whether Optimus becomes a transformative platform or an overextended ambition, Tesla’s repositioning reflects wider changes in how growth, risk and innovation are being recalibrated across the global technology and manufacturing landscape.
Security Stop-Press : Samsung and WhatsApp Strengthen Front-Line Privacy Controls
Samsung and WhatsApp are rolling out new security features aimed at reducing everyday privacy risks, including shoulder surfing in public places and cyber attacks targeting user accounts.
Samsung said it will introduce a new privacy layer for Galaxy devices that selectively hides sensitive on-screen content from side angles, while remaining visible to the user. The feature, developed over more than five years, can protect specific areas such as message notifications or passcode entry fields and builds on the company’s Knox security platform.
WhatsApp has also launched a new “Strict Account Settings” mode that groups multiple protections behind a single switch. When enabled, it blocks media and messages from unknown senders, disables link previews, restricts who can add users to groups, and turns on two-step verification and security alerts by default.
For businesses, the updates highlight the importance of reducing simple exposure risks by limiting what can be seen in public, tightening controls on unknown contacts, and enforcing strong default security settings across devices and messaging platforms.