Sustainability-in-Tech : Recyclable Plastics Using Light and Solvent

Scientists from a Swiss university have discovered a way to break down Plexiglass into its original building blocks using violet light and a common solvent, thereby making recycling plastics far more efficient and potentially helping to tackle global plastic waste.

Cracking the Plastic Code: The Science Behind the Discovery

The process, developed by lead researcher Dr Hyun Suk Wang at ETH Zurich, works by exposing Plexiglass, a type of polymethacrylate, to violet light while it is submerged in dichlorobenzene solvent. The scientists discovered that this exposure releases chlorine radicals from the solvent, which then break apart the strong carbon-carbon bonds in the plastic. The result is the recovery of methyl methacrylate (MMA), the original monomer building blocks from which Plexiglass is made.
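
In outline, the reaction “unzips” the polymer chain back into its monomer. A simplified scheme is sketched below; note that this is an illustrative summary of the description above, not the researchers’ published mechanism:

```latex
% Illustrative depolymerisation scheme (schematic only): violet light
% generates chlorine radicals from the dichlorobenzene solvent, which
% break the polymer backbone and unzip PMMA back to its MMA monomer.
\[
\underbrace{\left[\,\mathrm{CH_2\!-\!C(CH_3)(CO_2CH_3)}\,\right]_n}_{\text{Plexiglass (PMMA)}}
\;\xrightarrow{\;h\nu\ \text{(violet)},\ \mathrm{Cl}^{\bullet}\;}\;
n\,\underbrace{\mathrm{CH_2\!=\!C(CH_3)\,CO_2CH_3}}_{\text{MMA monomer}}
\]
```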

This recovered monomer can be purified and repolymerised without losing any material quality, unlike traditional recycling methods that involve shredding, cleaning, and remelting. Those older methods degrade the properties of plastic with each cycle, whereas this new chemical process allows the material to be fully restored to its original state.

The Scale of the Plastic Waste Problem

The scale of plastic pollution globally remains a significant challenge. For example, over 400 million metric tonnes of plastic waste are produced worldwide each year, yet only around 9 per cent of this waste is successfully recycled. Rather than being recycled, around half ends up in landfill, while another 19 per cent is incinerated. One particularly damaging effect of our plastic use is that around 11 million metric tonnes of plastic enter the ocean annually, harming ecosystems and marine life.
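
Taking those figures at face value, a quick back-of-the-envelope calculation (a sketch using the approximate percentages quoted above) shows what they mean in absolute tonnage, and how much waste is left unaccounted for:

```python
# Back-of-the-envelope breakdown of the annual plastic waste figures
# quoted above (all numbers approximate).
total_mt = 400  # million metric tonnes of plastic waste per year

shares = {
    "recycled": 0.09,
    "landfilled": 0.50,
    "incinerated": 0.19,
}

for fate, share in shares.items():
    print(f"{fate:12s} ~{total_mt * share:3.0f} million tonnes")

# The remainder is mismanaged or leaks into the environment, including
# the ~11 million tonnes estimated to enter the ocean each year.
remainder = total_mt * (1 - sum(shares.values()))
print(f"{'other':12s} ~{remainder:3.0f} million tonnes")
```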

Plexiglass Particularly Problematic

Polymethacrylates like Plexiglass are particularly problematic due to their durability and widespread use in industries ranging from construction to electronics. This resilience, while useful in manufacturing, makes them resistant to breaking down in traditional recycling systems.

Closing the Loop

Lead researcher Dr Hyun Suk Wang and his team have said they believe their light-based method could transform how Plexiglass and similar plastics are recycled. Dr Wang says: “By recovering monomers in near-pristine condition, we can effectively close the loop on Plexiglass production.”

The Implications

If adopted at scale, the implications of this breakthrough could include:

– Reduced use of fossil fuels. Since virgin plastic production depends on fossil resources, recycling monomers could significantly cut demand for petrochemical feedstocks.

– Lower energy consumption. The process requires less energy than current methods, which often involve high temperatures and extensive mechanical processing.

– Industrial adaptability. Preliminary tests suggest that the process can be applied on a larger scale with precision and control, making it a candidate for industrial recycling operations.

Is It Scalable?

It should be noted, however, that for this discovery to be commercially viable, several key challenges need to be addressed, including:

– Being able to generate violet light at scale. The process depends on specific wavelengths of light, meaning industrial-level violet light sources would be necessary.

– Handling dichlorobenzene safely. The solvent used in the process is hazardous and would require strict handling protocols to ensure worker and environmental safety.

– Economic feasibility. Any new technology must be cost-competitive with the low expense of producing virgin plastics from petrochemicals.

Despite these hurdles, the researchers remain optimistic. As co-author Professor Athina Anastasaki points out, “What makes this process so promising is its ability to work on a wide range of polymethacrylates, regardless of how they were originally manufactured.”

What Next?

The research team is now working on refining the technique to handle mixed plastic waste streams, a major obstacle in current recycling systems. They are also exploring alternative, less toxic solvents to reduce the process’s environmental impact.

At the same time, discussions are taking place with industrial partners to assess how this technology might be integrated into existing recycling facilities.

What Does This Mean For Your Organisation?

This breakthrough in recycling Plexiglass using violet light and a common solvent could mark a promising step forward in addressing the global plastic waste crisis. The discovery by Dr Hyun Suk Wang and his team at ETH Zurich presents a genuinely innovative approach – one that allows plastics to be broken down into their original building blocks without degrading their quality. By recovering monomers in a near-pristine state, this method could redefine what it means to “recycle” plastics, moving beyond the traditional processes that weaken materials with each cycle.

The potential environmental benefits are clear. If this technology can be successfully scaled, it could significantly reduce the dependence on fossil fuels required for producing virgin plastics, cutting both carbon emissions and petrochemical consumption. Furthermore, the process’s lower energy demands compared to conventional recycling could provide a more sustainable and economically viable solution, particularly for industries with high energy consumption rates.

For businesses, especially those in manufacturing, construction, and consumer goods, this development could offer both economic and strategic advantages. Companies that rely heavily on plastics might see reduced costs in sourcing high-quality recycled materials, avoiding the need to purchase more expensive virgin plastics. Also, integrating this technology into supply chains could help businesses meet increasingly stringent sustainability targets and regulatory demands around recycling and carbon emissions.

Beyond compliance, there is also the potential for businesses to strengthen their brand reputation by aligning with environmentally responsible practices. Early adopters of such groundbreaking recycling methods could position themselves as leaders in sustainability, attracting eco-conscious consumers and investors alike. However, industries will need to assess the commercial feasibility carefully, considering factors such as the cost of installing violet light technology and handling hazardous solvents like dichlorobenzene.

That said, significant obstacles remain. The need for scalable violet light sources and safe handling of potentially hazardous solvents are non-trivial challenges that could slow widespread adoption. Also, the economic viability of this method will need to be thoroughly tested against the low costs associated with producing virgin plastics, a factor that has historically undermined efforts to expand plastic recycling.

The optimism shown by researchers like Professor Athina Anastasaki highlights the broader potential of this technology. If successful refinements are made, particularly in handling mixed plastic waste streams and identifying safer solvents, the process could become adaptable enough for industrial-scale use.

While this innovation is not without its hurdles, this research looks as though it could open an exciting new chapter in the fight against plastic pollution. If industry stakeholders, policymakers, and scientists can work together to overcome the technical and economic barriers, this light-driven recycling method could play a pivotal role in creating a truly circular economy for plastics.

Tech Tip – Keep ChatGPT Conversations Going While Using Other Apps

You don’t need to stop your ChatGPT conversations just because you switch apps or lock your phone – there’s a handy feature to keep the chat flowing in the background. Here’s how it works:

How to:

– Open the ChatGPT app and tap the two-line menu button in the top-left corner.

– Press your account name at the bottom of the menu to access settings.

– Select ‘Voice’ and enable ‘Background Conversations’.

– Now, you can talk to ChatGPT while using other apps or even when your screen is locked.

– To disable this feature, simply turn off ‘Background Conversations’ from the same menu.

Bonus:

– On the same screen, you can customise the assistant’s voice. Tap ‘Voice’, scroll through options, and pick your favourite (e.g. Vale and Arbor are British ones).

– Hit Done to save your preferences.

This is perfect for multitasking, like asking questions while browsing or replying to messages on WhatsApp without pausing your conversation.

Featured Article : Altman Rejects Musk’s $97 Billion Offer

In a striking rebuke to Elon Musk, OpenAI CEO Sam Altman recently rejected a $97.4 billion acquisition bid led by Musk and his AI startup, xAI.

Long-Running Tech Feud

Altman’s decision has intensified the long-running feud between the two tech leaders, bringing into focus their starkly different visions for the future of artificial intelligence (AI). With Musk levelling accusations of self-dealing and Altman responding with sharp jabs, the saga has left the tech industry and AI users questioning what comes next.

What Happened?

Musk’s unsolicited bid for OpenAI (revealed through legal filings and media reports) was supported by private equity firms Baron Capital Group and Valor Management. The proposal sought to acquire the non-profit entity that controls OpenAI, with Musk’s legal team arguing that OpenAI’s shift towards a for-profit structure contradicted its original mission.

As Musk’s attorney, Marc Toberoff, put it: “If Sam Altman and the present OpenAI board of directors are intent on becoming a fully for-profit corporation, it is vital that the charity be fairly compensated for what its leadership is taking away from it: control over the most transformative technology of our time.”

However, OpenAI swiftly dismissed the offer. In fact, Altman took to Musk’s own platform, X (formerly Twitter), to publicly rebuff the bid with a characteristically cheeky retort, saying: “No thank you, but we will buy Twitter for $9.74 billion if you want.” OpenAI board chair Bret Taylor reinforced the company’s stance, stating, “OpenAI is not for sale.”

Musk then fired back with some accusations, branding Altman as a “swindler” and claiming OpenAI had abandoned its founding principles in favour of corporate profit.

Musk’s Motivation and the OpenAI Backstory

The world’s richest man, Elon Musk, who co-founded OpenAI in 2015 alongside Altman, was one of its earliest financial backers. However, he left the board in 2018 following disagreements over the company’s direction and later launched his own AI startup, xAI, in 2023. Since then, he has been an outspoken critic of OpenAI, particularly regarding its partnership with Microsoft.

Musk’s lawsuit against OpenAI, first filed in February 2024 and later revived in August, accuses the company of prioritising profit over safety and betraying its original commitment to open-source AI development. The lawsuit argues that OpenAI has become a “closed-source de facto subsidiary” of Microsoft, which has invested over $13 billion in the company.

Musk said in a statement explaining his bid: “It’s time for OpenAI to return to the open-source, safety-focused force for good it once was. We will make sure that happens.”

OpenAI Says It Was Necessary

OpenAI, however, contends that its evolution into a public benefit corporation was necessary to secure the capital needed to develop cutting-edge AI models. Interestingly, internal emails published by OpenAI last year revealed Musk had previously acknowledged the necessity of attracting significant investment to fund AI infrastructure.

Musk’s Growing Problems in Business and Politics

The rejection of Musk’s bid comes at a time of mounting challenges for the billionaire across his now sprawling empire. For example, Tesla, his best-known venture, has seen its stock plummet by over 31 per cent since December 2024, amid declining sales and growing criticism of what many see as Musk’s divisive political interventions. Analysts have attributed Tesla’s downturn in part to Musk’s polarising behaviour (e.g. ‘that’ salute), which has alienated many of the environmentally conscious consumers who were once the company’s core supporters.

Also, his social media platform, X, continues to struggle, with its valuation reportedly falling by over 50 per cent since he purchased it for $44 billion in 2022. A combination of mass layoffs (reducing staff by 80 per cent) and controversial content moderation policies has driven advertisers away, while rival platforms such as Bluesky and Threads have drawn users elsewhere, contributing to X’s financial woes.

Musk’s huge $227 million spend on Trump’s election campaign (his wealth having reportedly increased by $170 billion since), plus his increasing entanglement with the US government, have also sparked concerns about conflicts of interest. Also, as head of the Department of Government Efficiency (DOGE) under President Trump’s administration, Musk has been widely criticised for wielding influence over federal agencies that regulate his businesses (including those that could investigate him). In fact, Musk’s DOGE team is now being investigated by the US government watchdog over its access to the Treasury’s payments system, access that has been described as unconstitutional. In recent months, investigations into Tesla and SpaceX have been quietly shelved following the departure of key regulators, raising eyebrows in Washington and beyond.

Adding to the controversy, Musk recently conducted a White House interview alongside his son and President Trump, an appearance that critics claim blurred the lines between political advocacy and personal business interests. The interview, perceived by many as an attempt to shore up support for his ventures, has drawn scrutiny over whether Musk’s access to political power gives him an unfair advantage over his competitors.

What It All Means for OpenAI, Musk, and AI Users

For OpenAI, turning down Musk’s offer signals a firm commitment to its current trajectory. Despite Musk’s claims that OpenAI has lost sight of its original mission, the company maintains that its hybrid non-profit and for-profit model allows it to raise the funding necessary to develop safe and powerful AI. This decision also ensures OpenAI retains its independence from Musk’s influence, allowing it to continue its deep partnerships with Microsoft and other investors.

For Musk, the somewhat humiliating public rejection represents a significant setback in his efforts to steer the direction of AI development. With OpenAI remaining out of reach, Musk’s xAI faces an even steeper uphill battle in competing with OpenAI’s dominant, Microsoft-backed ChatGPT. His mounting legal battles, combined with declining public confidence in his leadership, may further strain his ability to expand xAI’s influence in the AI sector.

As for users, the outcome of this feud will have lasting implications. OpenAI’s continued autonomy ensures stability in its AI offerings, but Musk’s persistent attacks raise questions about regulatory oversight and ethical AI governance. Meanwhile, the turbulence surrounding Musk’s ventures, from Tesla to X, may further shape consumer trust and industry dynamics in the coming months.

What Does This Mean for Your Business?

Sam Altman’s rejection of Elon Musk’s audacious $97 billion bid marks yet another defining moment in what is an ongoing power struggle over the future of artificial intelligence. OpenAI’s decision to remain independent reinforces its commitment to a hybrid model that balances innovation with commercial viability, even as Musk continues to frame this approach as a betrayal of the organisation’s original mission. While the tech world is no stranger to high-profile disputes, this particular clash holds deeper implications, not just for AI development but also for the regulatory and ethical landscape surrounding it.

For Musk, the rejection highlights the mounting challenges he faces in both the business and political spheres. His attempt to bring OpenAI under his control appears to have been a strategic move to counteract the growing influence of Microsoft and reassert his own role in shaping AI’s future. However, his declining public perception, ongoing legal battles, and the struggles of his various ventures suggest that he is facing headwinds unlike any before. While xAI may still emerge as a formidable competitor, OpenAI’s ability to operate without Musk’s intervention has, for now, reinforced its market dominance.

For business users, this standoff between two of the most influential figures in AI raises significant considerations. OpenAI’s continued partnership with Microsoft should ensure stability in its product offerings, giving enterprises confidence that ChatGPT and other AI models will continue to develop without abrupt strategic shifts. This means businesses relying on OpenAI’s technology can probably expect further refinements, better integration with Microsoft products, and sustained investment in safety and governance frameworks. However, Musk’s criticisms of OpenAI’s closed-source nature may also fuel discussions about transparency and accessibility, potentially pushing regulators and competitors to advocate for more open AI ecosystems.

While this latest chapter in the Musk-Altman rivalry has made headlines, the broader impact will be felt in how AI is shaped moving forward. OpenAI’s stance suggests that it remains committed to its vision, even as Musk continues to challenge its direction. Whether this leads to a more competitive AI marketplace or a further entrenchment of power among a select few remains to be seen, but for now, OpenAI has made its position clear, i.e. that it’s not for sale, not even to one of the world’s richest and most controversial figures.

Tech Insight : UK and US Refuse To Sign Paris Summit AI Declaration

At the recent Artificial Intelligence (AI) Action Summit in Paris, the UK and the United States refused to sign an international declaration advocating for “inclusive and sustainable” AI development.

60 Other Nations Signed It

With 60 other nations (including China, France, India, and Canada) endorsing the agreement, the absence of two major AI powerhouses has ignited some debate over regulation, governance, and the global AI market’s future.

The Paris AI Summit and The Declaration

The AI Action Summit, held on 10–11 February, brought together representatives from over 100 countries to discuss AI’s trajectory and the need for ethical, transparent, and sustainable frameworks. The summit concluded with a declaration designed to guide AI development responsibly. The key principles of this declaration include:

– Openness and inclusivity. Ensuring AI development is accessible and equitable across different nations and communities.

– Ethical standards. Establishing guidelines that uphold human rights and prevent AI misuse.

– Transparency. Mandating clear AI decision-making processes and accountability.

– Safety and security. Addressing risks related to AI safety, cybersecurity and misinformation.

– Sustainability. Recognising the growing energy demands of AI and the need to mitigate its environmental impact.

The declaration emphasised the importance of global cooperation to prevent market monopolisation, reduce digital divides, and ensure AI benefits humanity as a whole. However, despite broad support, both the US and UK opted out of signing.

A Hands-Off Approach to Regulation For The US

US Vice President JD Vance delivered a fairly candid speech at the summit (his first major overseas speech since taking office), making clear that the Trump administration favours minimal AI regulation. For example, Vance warned that “Excessive regulation of the AI sector could kill a transformative industry just as it’s taking off”. He also criticised Europe’s approach, particularly the EU’s stringent AI Act and other regulatory frameworks like the Digital Services Act (DSA) and General Data Protection Regulation (GDPR), arguing that they create “endless legal compliance costs” for companies.

Vance’s remarks positioned the US as a clear advocate for innovation over restrictive oversight, stating, “We need international regulatory regimes that foster the creation of AI technology rather than strangle it.” He also expressed concerns that content moderation could lead to “authoritarian censorship,” a nod to the ongoing debates over misinformation and AI’s role in shaping public discourse.

Also, Vance (more subtly) warned against international partnerships with “authoritarian” nations (an apparent reference to China), stating that working with such regimes risked “chaining your nation to an authoritarian master that seeks to infiltrate, dig in, and seize your information infrastructure.” Some critics of the Trump administration may have found this remark ironic, given Trump’s past praise for authoritarian leaders and his administration’s own controversies regarding misinformation, media control, and political influence over tech and AI regulation.

Concern

US Vice President JD Vance’s speech at the Paris AI Action Summit was met with a mix of concern and criticism from European leaders. His strong stance against European AI regulations and his emphasis on an “America First” approach to AI development highlighted a significant policy divergence between the US and its European allies. French President Emmanuel Macron and European Commission President Ursula von der Leyen responded by advocating for a balanced approach that fosters innovation while ensuring ethical standards, underscoring the contrasting perspectives on AI governance.

Why Didn’t The UK Sign?

The UK government’s stated reasons for not signing the declaration were its concerns over national security and AI governance. The UK was represented at the AI Action Summit in Paris by Tech Secretary Peter Kyle, with Prime Minister Keir Starmer opting not to attend. On the decision not to sign the summit’s AI declaration, a spokesperson for Starmer said the UK would “only ever sign up to initiatives that are in the UK’s national interest.” While the government agreed with much of the declaration, they argued it lacked practical clarity on global governance and failed to sufficiently address national security concerns.

A Downing Street spokesperson has also been reported as saying, “We felt the declaration didn’t provide enough practical clarity on global governance, nor sufficiently address harder questions around national security and the challenge AI poses to it.”

While the UK has previously championed AI safety, hosting the first-ever AI Safety Summit in November 2023, critics have argued that its refusal to sign the Paris declaration could now undermine its credibility as a leader in ethical AI development. For example, Andrew Dudfield, head of AI at fact-checking organisation Full Fact, has warned, “By refusing to sign today’s international AI Action Statement, the UK Government risks undercutting its hard-won credibility as a world leader for safe, ethical, and trustworthy AI innovation.”

Are The Real Reasons For Not Signing Geopolitical?

All that said, some analysts have argued that economic and geopolitical factors (rather than concerns about governance) may actually be the driving forces behind the US and UK’s decision. For example, by not signing the declaration, both countries retain the freedom to shape AI policy on their own terms, thereby potentially allowing domestic companies to operate with fewer regulatory constraints and gain a competitive edge in AI markets.

The decision may also be seen as aligning with broader economic policies. For example, the Trump administration has pledged significant investment in AI infrastructure, including a $500 billion private sector initiative to enhance US AI capabilities. Meanwhile, UK AI industry leaders, such as UKAI (a trade body representing AI businesses), have cautiously welcomed the government’s stance, arguing that AI’s energy demands must be balanced with environmental responsibilities.

However, some political voices in the UK have suggested that the UK has little choice but to align with the US, e.g. for fear of losing engagement from major US AI firms if it adopted a more restrictive approach.

The Implications for AI in the US and UK

The refusal to sign the Paris declaration could have significant effects on the AI landscape in both countries. These could include, for example:

– Regulatory divergence. The US and UK are likely to diverge further from the EU’s AI regulatory approach, which could create complexities for companies operating in multiple jurisdictions.

– Market positioning. AI firms in these countries may benefit from a less regulated environment, attracting more investment and talent.

– Global cooperation. The lack of a unified stance could complicate international efforts to set AI standards, leading to regulatory fragmentation.

– Public perception and trust. Concerns over AI safety and misinformation could be exacerbated, potentially undermining public trust in AI systems developed in more lightly regulated markets.

The Possible Impact on the AI Market and Business Users

For businesses seeking to leverage AI, these developments could signal both opportunities and challenges, such as:

– Regulatory uncertainty. Companies may need to navigate a fragmented regulatory landscape, balancing compliance in stricter jurisdictions like the EU with more flexible environments in the US and UK.

– Competitive advantage. Firms operating in the US and UK may see accelerated innovation and reduced compliance costs, while those in heavily regulated regions may struggle to keep pace.

– Investment trends. Investors might favour jurisdictions with fewer regulatory barriers, shifting funding patterns in the AI sector.

A Growing Divide

The refusal of the UK and US to sign the Paris AI declaration essentially highlights a growing global divide over AI regulation. For example, while Europe and other signatories are pushing for stringent oversight to ensure ethical and sustainable AI, the US and UK appear to be prioritising market-driven approaches that foster innovation with fewer constraints. As AI continues to shape industries and societies, this divergence in policy is likely to significantly influence the future of AI governance, business strategy, and global competitiveness.

What Does This Mean For Your Business?

The decision by the UK and US to abstain from signing the Paris AI declaration reveals the fundamental and growing divergence in global AI governance. While Europe and other signatories advocate for regulatory frameworks designed to ensure ethical, transparent, and sustainable AI development, the UK and US are instead opting for a more market-driven approach. This contrast highlights deeper geopolitical and economic considerations, as both nations seek to maintain a competitive edge in the rapidly evolving AI sector.

Companies operating in the US and UK may benefit from reduced compliance burdens and faster innovation cycles, but they also risk regulatory uncertainty when engaging with more tightly controlled markets such as the EU. Meanwhile, concerns over AI safety, misinformation, and ethical considerations could influence public trust, potentially shaping consumer and business adoption patterns in the years ahead.

Beyond immediate market implications, the lack of a unified international stance raises broader questions about the future of AI governance. The absence of the UK and US from the Paris declaration may complicate global efforts to establish common AI standards, increasing the likelihood of regulatory fragmentation. This, in turn, could lead to inconsistencies in AI oversight, making it more challenging to address issues such as bias, cybersecurity risks, and the environmental impact of AI systems on a global scale.

That said, the refusal to sign the declaration does not mean the UK and US are simply abandoning AI regulation altogether; rather, both countries will continue to shape policy on their own terms. However, their decision does signal a clear preference for maintaining regulatory flexibility, even at the cost of global consensus. Whether this approach actually fosters long-term innovation or leads to unintended risks remains to be seen, but what is certain is that AI governance is now a defining battleground in the race for technological leadership. The coming years will likely reveal whether a hands-off approach delivers the promised benefits, or whether the cautionary stance of other nations proves to be the wiser path.

Tech News : Almost Half of Young People Have Been Scammed Online

A new study in Wales has shown that nearly half (46 per cent) of young people aged 8 to 17 have fallen victim to online scams, with 9 per cent (including children as young as eight) having lost money to fraudulent schemes.

Scams A Regular Part of Online Life For Young People

A recent study by the UK Safer Internet Centre (UKSIC) has unveiled a worrying trend, with findings released in conjunction with Safer Internet Day 2025 on 11th February. The results highlight how exposure to online scams has become a regular part of life for young internet users.

The Scale of the Issue

As part of the research, the UKSIC conducted an extensive survey to assess how often young people come across online scams, the types of scams they encounter, and their effects. Alarmingly, a massive 79 per cent of those surveyed said they come across scams at least once a month, with 45 per cent encountering them weekly and 20 per cent seeing them every single day. These figures suggest that scams are not occasional threats but a persistent online hazard.

An Urgent Matter

Will Gardner OBE, Director of UKSIC, has highlighted the urgency of the matter, stating: “This Safer Internet Day, we want to put the importance of protecting children from online scams on the agenda. For too long, young people have been overlooked, yet our research clearly demonstrates how much of an impact online scams can have on them.”

What Are The Most Common Scams Targeting Young People?

The research identified several scams that young people are particularly vulnerable to. The most common include:

– Fake giveaways. Scammers promise free prizes or rewards to lure victims into sharing personal information.

– Phishing scams. Fraudsters send messages or emails pretending to be from a trusted source to trick individuals into handing over sensitive details.

– Fake websites. Counterfeit online stores or platforms that appear legitimate but are designed to steal money or data.

– Online shopping scams. These include fake ticket sales and fraudulent in-game purchases or ‘trust trades.’

Mostly On Social Media

Social media platforms were found to be the most common space for encountering scams (35 per cent), followed by email (17 per cent) and online gaming (15 per cent). The research revealed, perhaps not surprisingly, that younger children (8 to 11) are particularly vulnerable in online gaming environments, with 22 per cent reporting that they had experienced scams in this setting.

The Emotional and Psychological Toll

The impact of online scams appears to extend far beyond any financial loss. For example, the research found that almost half (47 per cent) of those scammed felt anger and frustration, while 39 per cent felt sadness. Other emotional reactions highlighted in the research included stress (31 per cent), embarrassment (28 per cent), and shock (28 per cent). Also, and alarmingly, over a quarter (26 per cent) said they blamed themselves for falling victim to a scam, a figure that rises to 37 per cent among 17-year-olds.

This sense of self-blame and embarrassment is thought to be preventing many from seeking help. For example, nearly half (47 per cent) of young people in the research said they believe embarrassment is the biggest barrier to reporting scams, while 41 per cent worry they would be blamed, and 40 per cent fear getting into trouble, such as having their devices taken away.

What Can Be Done?

The research appears to highlight an urgent need for better education about online scams. Encouragingly, 74 per cent of young people want to learn more about spotting and avoiding scams. Schools and parents must play a key role in this education, equipping children with the knowledge and tools to stay safe online.

For parents and carers, open conversations about online safety may also be essential in tackling this issue. For example, the study found that 72 per cent of young people would turn to a parent or carer if they were worried about an online scam, and 40 per cent of parents reported that their child had taught them how to recognise scams.

To help young people protect themselves, some steps that experts often recommend include:

– Think before you click. Avoid clicking on links from unknown sources, especially if they promise prizes or seem urgent.

– Verify sources. Check if a website or message is genuine before sharing any personal information.

– Protect personal data. Be cautious about sharing personal details online.

– Use security features. Enable two-factor authentication and use strong passwords.

– Recognise red flags. Poor spelling, urgent demands, and ‘too good to be true’ offers are common signs of a scam (a toy example of spotting these follows this list).
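
As a purely illustrative sketch of that last point, the same red-flag checklist can be expressed in a few lines of code. The keywords and patterns below are our own illustrative assumptions, not a real scam filter:

```python
import re

# Toy red-flag checker based on the checklist above. NOT a real scam
# filter; the keywords and patterns are illustrative assumptions only.
RED_FLAGS = {
    "urgent demand": re.compile(r"\b(act now|urgent|immediately|last chance)\b", re.I),
    "too good to be true": re.compile(r"\b(free (prize|gift)|guaranteed (win|earnings))\b", re.I),
    "asks for sensitive details": re.compile(r"\b(password|bank details|verification code)\b", re.I),
}

def red_flags(message: str) -> list[str]:
    """Return the names of any red flags found in the message."""
    return [name for name, pattern in RED_FLAGS.items() if pattern.search(message)]

print(red_flags("URGENT: act now to claim your free prize! Just confirm your password."))
# -> ['urgent demand', 'too good to be true', 'asks for sensitive details']
```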

Government Action and Industry Responsibility

With online scams becoming more sophisticated, particularly with advancements in artificial intelligence (AI), there is growing concern that fraudsters will find it even easier to deceive young people. The study found that 32 per cent of young people worry that AI will make scams harder to spot.

Will The Online Safety Act Help?

The UK government has taken some steps to combat the rise in online fraud. For example, from next month, the Online Safety Act will require tech companies to take proactive measures to remove illegal content, including scams. As Tech Minister Baroness Jones says: “The normalisation of scams online is a shocking trend. Fraudsters are clearly targeting vulnerable young people who should be able to connect with friends and family without being subject to a barrage of scams.”

Technology companies also have a responsibility under the Act to ensure their platforms do not provide a hiding place for fraudsters. Scam job offers are a growing issue, with fraudsters impersonating TikTok employees and offering fake roles that promise high earnings in exchange for engaging with content.

The Importance of Intergenerational Learning

The study, which was focused on children in Wales, also highlighted the value of intergenerational learning when it comes to online scams. It seems that young people are not just learning from parents and carers but are also educating them. For example, a significant 40 per cent of parents admitted that their child had taught them how to spot scams. This exchange of knowledge may be crucial in strengthening online safety for all age groups.

What Does This Mean For Your Business?

The findings of this study paint a pretty stark picture of the digital landscape for young people, where online scams are no longer an occasional nuisance but a persistent and deeply embedded threat. With nearly half of young internet users having fallen victim to fraud, and a substantial proportion experiencing distress as a result, it’s clear that online safety must be given greater priority.

Much of the public discourse around scams tends to focus on older people being the primary victims, with news reports frequently highlighting cases of pensioners losing their life savings to fraudsters. While these concerns are entirely valid, this research sheds light on an overlooked reality, i.e. young people are also being targeted and, in many cases, successfully deceived. Their relative inexperience, combined with the digital environments they frequent (particularly social media and gaming platforms) make them attractive targets for scammers. This should serve as a wake-up call that online fraud is not just an issue for the elderly but one that affects all age groups.

Beyond the personal impact on victims, the prevalence of scams among young people may also carry wider implications for UK businesses. As the next generation of digital consumers, young people are forming habits and attitudes towards online transactions that could shape the future of e-commerce. If scams continue to erode trust in online platforms, businesses (particularly those reliant on digital sales) could face challenges in attracting and retaining younger customers. Companies that fail to create secure and transparent online experiences may find themselves losing out to competitors that prioritise fraud prevention and user safety. Also, with AI making scams more sophisticated, businesses will need to stay ahead by investing in stronger verification processes and customer education initiatives to protect their brand reputation.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in a jargon-free style.
