Video Update : Create PowerPoint Slides Using ChatGPT
This video tutorial explains how you can make a PowerPoint presentation directly from ChatGPT.
[Note – To Watch This Video without glitches/interruptions, It’s best to download it first]
Tech Tip – Use “Print to PDF” to Save Files as PDFs Without Additional Software
Windows has a built-in “Print to PDF” feature, allowing you to save any file or webpage as a PDF, which is useful for sharing professional, non-editable documents. Here’s how it works:
– Open the document or webpage you want to save.
– Press Ctrl + P to open the Print dialog.
– Select Microsoft Print to PDF as the printer.
– Click Print, choose where to save the file, and it will be saved as a PDF.
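As an aside, if you ever need to generate a PDF programmatically with no additional software at all, the PDF format itself is simple enough that a minimal one-page file can be written by hand. The sketch below is purely illustrative (the function name `make_minimal_pdf` is our own, and it only handles plain text without the characters `(`, `)` or `\`), using nothing but the Python standard library:

```python
def make_minimal_pdf(text, path):
    """Write a minimal, valid one-page PDF containing `text`.

    Illustrative sketch only: no escaping of ( ) \\ in `text`,
    single page, single built-in Helvetica font.
    """
    # The page's drawing instructions (a "content stream").
    content = ("BT /F1 24 Tf 72 720 Td (%s) Tj ET" % text).encode("latin-1")
    # The five numbered objects a minimal PDF needs.
    objects = [
        b"<< /Type /Catalog /Pages 2 0 R >>",
        b"<< /Type /Pages /Kids [3 0 R] /Count 1 >>",
        b"<< /Type /Page /Parent 2 0 R /MediaBox [0 0 612 792] "
        b"/Contents 4 0 R /Resources << /Font << /F1 5 0 R >> >> >>",
        b"<< /Length %d >>\nstream\n" % len(content) + content + b"\nendstream",
        b"<< /Type /Font /Subtype /Type1 /BaseFont /Helvetica >>",
    ]
    out = bytearray(b"%PDF-1.4\n")
    offsets = []  # byte offset of each "N 0 obj", needed for the xref table
    for i, obj in enumerate(objects, start=1):
        offsets.append(len(out))
        out += b"%d 0 obj\n" % i + obj + b"\nendobj\n"
    xref_pos = len(out)
    # Cross-reference table: one fixed 20-byte line per object.
    out += b"xref\n0 %d\n" % (len(objects) + 1)
    out += b"0000000000 65535 f \n"
    for off in offsets:
        out += b"%010d 00000 n \n" % off
    out += (b"trailer\n<< /Size %d /Root 1 0 R >>\nstartxref\n%d\n%%%%EOF\n"
            % (len(objects) + 1, xref_pos))
    with open(path, "wb") as f:
        f.write(bytes(out))
```

For most day-to-day needs the built-in "Microsoft Print to PDF" printer is the right tool; the hand-rolled approach above is only worth knowing about for scripting scenarios where no PDF library is available.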
Featured Article : Brazilian Ban For X
After Elon Musk’s X (formerly Twitter) platform failed to meet the deadline imposed by a Supreme Court judge to suspend dozens of X accounts for allegedly spreading disinformation, the judge has now ruled that the X platform will be suspended in Brazil.
What Happened To Get To This Point?
Back in April, tensions between the social media platform and Brazilian authorities escalated over X’s handling of misinformation and harmful content. The Brazilian government (concerned about disinformation and hate speech) criticised X’s moderation efforts as insufficient, especially regarding sensitive topics like elections and public health. This scrutiny was part of broader concerns in Brazil about the influence of social media on public order and political stability.
The situation intensified when Brazilian Supreme Court Justice Alexandre de Moraes ordered that X accounts spreading disinformation (many of which supported former right-wing president Jair Bolsonaro) be blocked while under investigation. Not surprisingly, Musk criticised these actions, which led de Moraes to impose fines of 100,000 reais (approximately $19,774 or £15,670) per day for each reactivated account. Judge de Moraes also warned that X’s legal representatives in Brazil could face personal liability if the orders were ignored. In response to Musk’s defiance, de Moraes opened an investigation into Musk, including charges of obstruction of justice.
New Regulations
In June 2024, new regulations were enacted in Brazil that mandated stricter content moderation and transparency from social media platforms, adding to the pressure on Musk and his X platform. The “Fake News Bill,” also known as the “Brazilian Internet Freedom, Responsibility and Transparency Act,” was introduced to target the spread of fake news and misinformation on social media platforms. This legislation also required platforms to have a legal representative in Brazil, banned anonymous automated accounts, and mandated transparency in content promotion and advertising. Regular transparency reporting and established mechanisms for content moderation appeals are also now required under this legislation.
Challenges and Failure To Comply
Despite these new regulations, X struggled to comply, resulting in a series of legal challenges throughout July and August. The platform was then accused of not sufficiently addressing the spread of illegal content, leading to mounting pressure from the Brazilian courts. By late August, X was facing multiple legal battles, highlighting its difficulties in balancing freedom of expression with the need to prevent harmful content.
Court Hearing and Suspension
The recent court hearings were therefore held to assess X’s compliance with Brazil’s new laws, and Judge de Moraes, citing the platform’s failures, ordered the suspension of X’s business operations in Brazil. The suspension will continue until X names a new legal representative in the country (as required by the new law) and pays any fines for violating Brazilian law. The head of Brazil’s telecommunications agency has been tasked with the actual suspension of the X platform.
Also, Apple and Google have been given a five-day deadline to remove X from their app stores in Brazil and to block its usage on iOS and Android systems. Judge de Moraes has added that anyone or any business caught using a VPN to access X in Brazil may face a R$50,000 (£6,700) fine.
No Office To Close – Just Suspend Operations
X had already closed its São Paulo office back in November 2022 as part of a global restructuring and cost-cutting effort following Elon Musk’s acquisition of the company, so this recent suspension will apply to any remaining operational ties or representation in Brazil rather than leading to any office closure.
Still Live
Despite X’s suspension and threats of fines for those using a VPN to access X, the service has remained live for Brazilian users. However, the suspension of X’s operations does appear to be a significant event in the enforcement of Brazil’s regulations on social media and highlights the growing conflict between global tech companies and national legal frameworks.
What Does Musk Say?
Predictably, Musk and X have come out fighting in terms of their response to the ruling. For example, Musk launched an attack (on X) on the judge saying there is “growing evidence that fake judge @Alexandre (Judge de Moraes) engaged in serious, repeated & deliberate election interference in Brazil’s last presidential election. Under Brazilian law, that would mean up to 20 years in prison”. Musk also added that “it appears that some former Twitter employees were complicit in helping him do so.”
Musk has also shared and commented on the issue many more times on his platform, including:
– Sharing a picture of an apparently overweight man lying on a pile of money with the caption “VPN Companies After Brazil Banned Twitter”.
– Commenting that “The people of Brazil are not happy with the current regime” and that “Investing in Brazil under their current administration is insane. When there is new leadership, that will hopefully change.”
– Commenting “I keep telling people that this guy @alexandre (Judge de Moraes) is the dictator of Brazil, NOT a judge. He just wears that as a costume. He has supreme executive, judicial and legislative power, aka a dictator. The cloak he wears is to trick fools in the West into thinking that he’s a judge.” Musk has also said that the responsibility for the situation lies with the judge and that there is “no question that Moraes needs to leave.”
In terms of the more focused and serious replies to the ruling itself, X has indicated in posts that it will not comply with the ruling. For example, X said on an official account “Soon, we expect Judge Alexandre de Moraes will order X to be shut down in Brazil – simply because we would not comply with his illegal orders to censor his political opponents”. X also said “The fundamental issue at stake here is that Judge de Moraes demands we break Brazil’s own laws. We simply won’t do that”, i.e. Musk is positioning X as defending legal and ethical standards against what he views as overreach by the judiciary.
What’s The Link With Bolsonaro?
During former president Jair Bolsonaro’s term in office, many of his supporters used social media platforms (including X) to spread disinformation, particularly regarding election results, COVID-19, and other politically sensitive topics. Whereas Judge Alexandre de Moraes was active in investigating and ordering the blocking of accounts linked to Bolsonaro supporters (with accusations of spreading fake news), Musk opposed these measures, citing concerns about free speech.
Musk Woes
The suspension of X in Brazil adds to a growing list of recent legal, regulatory, and PR challenges for Musk including:
– Criticism for complying with Türkiye’s demands to restrict content ahead of elections.
– Concerns over environmental impact and regulatory compliance for Starlink satellite deployment.
– Musk commenting on the recent UK far-right violence, saying “civil war is inevitable,” drawing criticism from the Prime Minister and UK officials for exacerbating tensions amid riots.
– Threats of fines from the EU if X fails to improve content moderation in line with the Digital Services Act.
– Regulatory and competition issues for Tesla in various markets, including the UK.
– A decline in advertising revenue on X (perhaps by as much as 60 per cent in the US) as advertisers pull back from the platform over concerns about their brands appearing alongside hate speech, misinformation, and offensive content.
Others Feeling The Heat
X is not the only platform ‘feeling the heat’ recently. Others which have also found themselves under scrutiny include:
– Meta (Facebook and Instagram) over privacy issues and misinformation, with ongoing regulatory pressure from global governments to improve content moderation.
– TikTok, facing regulatory challenges in the US and EU over data privacy and security, amid fears of Chinese government influence on user data. For example, TikTok faces a potential ban in the US if ByteDance doesn’t divest it, plus TikTok has been banned from government devices in Australia, Canada, the UK, the EU, New Zealand, Denmark, Taiwan, and other countries.
– Google’s YouTube has been criticised for not adequately controlling harmful content, leading to increased scrutiny from regulators and demands for better content moderation.
– Telegram, the messaging app known for end-to-end encryption in its opt-in ‘secret chats’, whose CEO Pavel Durov was recently arrested in France for alleged failures in moderating illegal activities on the platform, including child pornography and drug trafficking.
What Does This Mean For Your Business?
The suspension of X in Brazil, which has been on the cards for a while, is another blow for Elon Musk’s X social media platform and serves as a reminder of the challenges facing global tech companies as they navigate increasingly stringent regulatory landscapes. For Musk, it adds to his growing list of issues, from regulatory scrutiny in multiple countries to declining advertising revenues. The situation in Brazil also highlights the difficulty of balancing ‘free speech’ with the responsibility to curb misinformation, particularly in countries where social media plays a critical role in shaping public opinion and political discourse.
For Brazil, despite Musk’s allegations and comments about the particular judge involved, this decision highlights the government’s commitment to tackling misinformation and enforcing stricter content moderation laws. Crucially, it could set a precedent for other countries dealing with similar issues, potentially leading to a ripple effect where more governments implement more robust regulations to hold social media platforms accountable. This could impact how social media companies operate globally, as they may need to adopt more transparent and responsible content management practices to comply with local laws.
Other social media platforms are likely to watch closely, as this case could herald a new era of heightened regulatory pressure (some would say, not before time). Companies like Meta, TikTok and YouTube are already facing scrutiny over their handling of content, privacy concerns, and their impact on society. They may need to strengthen their policies and practices to avoid similar repercussions. For users, this could mean more restrictive environments on platforms, with tighter controls on what can be posted and shared. It’s worth noting here that in the UK during the recent unrest, inflammatory/false social media posts even resulted in the arrest (and imprisonment) of some people.
Generally, the evolving regulatory landscape now presents both challenges and opportunities. On the one hand, social media companies may face increased operational costs and legal challenges as they adapt to new laws. On the other hand, businesses that can demonstrate compliance and commitment to ethical standards may gain a competitive advantage, attracting users and advertisers looking for safer and more reliable platforms. For advertisers, these changes could lead to a more controlled and brand-safe environment, reducing the risks associated with negative publicity from ads appearing alongside harmful content.
As the world grapples with the power and influence of social media, the situation with X in Brazil is a clear indication that some governments (or some judges, according to Musk) are willing to take bold steps to ensure that platforms act responsibly. The near future looks likely to see more countries imposing regulations, and social media companies may need to adapt more quickly than before to remain compliant and relevant. This ongoing shift could redefine the relationship between tech companies, governments, and society, making it crucial for all stakeholders to engage constructively in shaping the future of digital communication.
Tech Insight : What’s All The Fuss About Telegram?
CEO and founder of messaging app Telegram, Pavel Durov, was recently arrested in France over allegations that Telegram facilitates illegal activities, including money laundering, drug trafficking, and the distribution of child sexual abuse material.
Pavel Durov
Pavel Durov is a 39-year-old Russian-born billionaire tech entrepreneur, known for founding the social networking site VKontakte (VK) and later (2013), the messaging app Telegram. Durov gained a reputation for his commitment to freedom of speech and user privacy. After being ousted from VK due to conflicts with Russian authorities, he focused on Telegram, which is known for its strong encryption and privacy features, making it a preferred platform for those who value security. Durov, who holds citizenship in several countries, including France and the UAE, claims that Telegram has 950 million monthly active users. Also, Telegram has recently seen an increase in downloads, propelling it to no. 2 position on the U.S. App Store’s Social Networking charts (and boosting its global iOS downloads).
Somewhat bizarrely, Durov also made the news in June following claims he made on Telegram about having fathered over 100 biological children following a donation of sperm at a clinic 15 years ago to help a friend have a baby.
Arrest
On August 24, Mr Durov was arrested by French authorities and held in extended detention until August 28, when he was released on €5.6 million bail. He remains under judicial supervision (reporting to the police regularly and remaining within French territory). The Paris-based prosecutors stated that Pavel Durov is under formal investigation for 12 alleged offences, including:
– Assisting in the management of an online platform that facilitates illegal transactions by organised crime.
– Failing to cooperate with authorities (by not providing necessary information or documents).
– Being complicit in the possession and distribution of child pornography.
– Involvement in drug-related activities, e.g. acquiring, transporting, and selling narcotics.
– Participating in organised fraud.
– Money laundering (related to proceeds from organised crime).
– Using unauthorised cryptographic tools and providing encryption services without proper certification.
– Involvement in activities that could damage automated data processing systems.
In the French legal system, being under formal investigation means that there is sufficient evidence to warrant further inquiry, but it does not imply guilt or necessarily lead to a trial.
Telegram – Like Having The Dark Web In Your Pocket
One indication of how widely Telegram may currently be used by those involved in criminality (and of how much room there is for better moderation) comes in the form of criticism from podcaster Patrick Gray who, for months, has been describing Telegram as “the dark web in your pocket”. In fact, a recent BBC investigation (by cyber correspondent Joe Tidy) highlighted how Telegram users can, without their consent, be added to many different active illegal groups. These included a ‘Card Swipers group’ apparently selling stolen cloned credit cards (shipping worldwide), the ‘Drugs Gardens official’ group selling marijuana and illegal vapes, and a number of other groups where it seems members can buy fake vouchers, gift cards, passports, driving licences, prescription drugs, malicious software, guns, and more!
The report of the BBC’s investigation also included an allegation by Brian Fishman, co-founder of the Cinder software platform, that Telegram “has been the key hub for Isis for a decade” and that “it’s ignored reasonable law enforcement engagement for years”.
Why Now?
The timing of the French government’s actions against Pavel Durov and Telegram can be attributed to a combination of legal, political, and contextual factors, including:
– Recent legal and regulatory developments in France and the EU, placing increased emphasis on the responsibility of digital platforms to prevent illegal activities. Telegram’s strong encryption and large group capabilities, which allow up to 200,000 members per group, have raised concerns about the platform being used for illegal activities such as money laundering and drug trafficking, plus the distribution of child sexual abuse material.
– Telegram’s commitment to user privacy and its resistance to moderating content or cooperating with law enforcement have made it a focal point for governments concerned about security, i.e. a political matter. Durov’s advocacy for free speech, his previous comments in interviews indicating that he would refuse certain requests from authorities to remove content from Telegram, and the platform’s reputation as a haven for privacy (with end-to-end encryption in its ‘secret chats’ closed to government scrutiny or control) have made it a target for authorities. As with the UK and WhatsApp, the French government would dearly love to gain some kind of back-door access and/or be able to exert control over digital communications like Telegram, especially given broader geopolitical tensions and concerns about government overreach.
– It’s also been suggested by Russian politician Vyacheslav Volodin that the US may be behind Durov’s arrest due to the fact that Telegram is widely used in Russia and Ukraine and is one of the few large internet platforms that the US has no influence over.
– Telegram’s features mean it is a platform where disinformation, extremist content, and other illegal activities can thrive. The app’s weaker moderation policies (compared to other platforms) appear to have led to its usage by far-right groups and other extremists, which, not surprisingly, has drawn the attention of authorities. Incidents such as the use of Telegram to organise the recent violent disorder in UK cities have heightened scrutiny, pushing governments to take action against platforms that they believe facilitate such behaviour.
These factors combined have therefore culminated in a situation that has tipped the balance and made the French authorities act.
What Does Durov Say?
David-Olivier Kaminski, Durov’s lawyer, has stated that Telegram fully adheres to European digital laws and maintains content moderation standards comparable to other social media platforms. He has also argued that it is “ridiculous” to claim that Durov is connected to any “criminal activities that do not relate to him, either directly or indirectly.”
Support From Elon Musk
Durov has received support from fellow tech billionaire Elon Musk following his arrest, highlighting the growing concern among some tech leaders regarding issues of privacy and freedom of speech. Musk, known for his own advocacy of free speech and minimal regulation on digital platforms, appears to view Durov’s legal troubles as part of a broader struggle against governmental overreach into digital communications. Musk’s support could, therefore, be seen as part of a larger narrative of tech (billionaire) entrepreneurs defending against what they perceive as unjust government actions against certain platforms.
Other Perspectives
Several prominent figures have defended Durov following his arrest, highlighting concerns about privacy and free speech. In addition to Elon Musk, who argued that “moderation” is often just another term for censorship and called for Durov’s release, Chris Pavlovski, founder of the free-speech-oriented platform Rumble, has also voiced his concern. Pavlovski has noted that Durov’s detention influenced his decision to leave Europe, which reflects broader fears among tech entrepreneurs about government overreach.
Edward Snowden, the famous whistleblower (now living in Russia), also condemned Durov’s arrest as an assault on basic human rights, accusing French authorities of trying to gain access to private communications under the guise of security. These reactions highlight a broader anxiety among privacy advocates about government efforts to restrict encrypted communication platforms like Telegram, viewing such actions as threats to freedom of speech and digital autonomy.
What Does This Mean For Your Business?
The arrest of Pavel Durov and the scrutiny of Telegram may signify a turning point in the debate over digital privacy and security. For businesses, this means a closer examination of the platforms they use for communication and the potential risks associated with them. Companies must consider the legal and ethical implications of using platforms that could come under government scrutiny for allegedly enabling illegal activities. Businesses need to ensure they are compliant with local regulations and are prepared to adapt to changing laws that might impact how they use these digital communication tools.
For platforms like Telegram, WhatsApp, and other similar messaging services, this incident highlights the challenges of balancing user privacy with regulatory compliance. These platforms, known for their strong encryption and large user bases, will likely face increased pressure to cooperate with law enforcement agencies or risk similar legal challenges. The scrutiny suggests that governments worldwide are now very keen to regulate digital communications more tightly, potentially forcing platforms to reconsider their privacy policies and moderation practices.
This broader trend reflects a global effort by governments to control speech and access to information under the pretext of fighting illegal activities. For platforms, it presents a double-edged sword. For example, while strong security features are essential for protecting dissidents and activists, these same features are viewed as obstructing law enforcement. Platforms must, therefore, find ways to navigate this complex environment and attain a kind of balance (which is likely to fluctuate) between safeguarding user privacy and meeting regulatory expectations to avoid legal repercussions.
For secure messaging platforms, it may now be a case of evaluating their current moderation practices and security features to ensure they can address both user privacy concerns and regulatory requirements. Platforms like Telegram may now have to engage more with policymakers (or at least appear to) and develop strategies that protect users while complying with laws designed to prevent illegal activities. Striking the right balance will be crucial in maintaining user trust and avoiding legal challenges, which could significantly impact their operations and reputation.
Tech News : Dublin Says No To New Google Data-Centre
South Dublin County Council has refused Google Ireland planning permission for a new data-centre at Grange Castle Business Park in South Dublin.
Why?
The reason given for the refusal was “the existing insufficient capacity in the electricity network (grid) and the lack of significant on-site renewable energy to power the data centre”.
What Data Centre?
Google already has two data-centres in South Dublin’s Grange Castle business park and had submitted plans to build a third: a 72,400 sq-metre data-centre consisting of eight data halls on a 50-acre site. The new data-centre would have created 50 jobs, and documents lodged with the application by Google Ireland highlighted how important it would be in enabling Google to meet increasing demand for Information and Communications Technology (ICT) services from its customers in Ireland and in supporting Ireland’s digital economy.
Concerns
South Dublin County Council refused Google Ireland’s planning application for a third data-centre primarily due to concerns about energy usage and environmental impact.
The council was worried that the new data-centre would place a significant strain on the already limited capacity of the local electricity grid, potentially leading to grid congestion and challenges in managing power supply. This decision aligns with the stance of Ireland’s state-run electric power operator, EirGrid, which had previously (2022) indicated it would not accept applications for new data-centres in Dublin in the near future, due to insufficient grid capacity.
The council also criticised the lack of on-site renewable energy sources in Google’s proposal, which was seen as inconsistent with Ireland’s climate goals and efforts to reduce carbon emissions.
A lack of clarity regarding Google’s engagement with power purchase agreements and its failure to connect the proposed data-centre to the surrounding district heating network was also highlighted as contributing to the planning refusal.
In relation to concerns about the environmental impact, Google Ireland’s proposed design was deemed to not fully comply with the South Dublin County Development Plan (2022-28), particularly in terms of protecting green infrastructure, such as streams and hedgerows, and the overall integration of the facility into the local environment. The council ruled that the proposed usage was not suitable for the designated enterprise and employment-zoned lands, and it highlighted concerns about how the project would impact power supply once operational in 2027.
Also, An Taisce (an environmental advocacy group) further warned that the data-centre would compromise Ireland’s ability to meet its carbon budget limits and place additional pressure on renewable energy resources, leading to further environmental concerns.
What Now?
As yet, there’s been no official comment from Google about the refused application, and Google now has one month to appeal the Council’s decision.
With part of the refusal of the application being based on the apparent lack of on-site renewable energy, it’s interesting to note that Google has long been involved with renewable energy sources. For example, back in 2021, Google signed a long-term supply agreement with solar energy firm Energix Renewables to supply Google with electricity via its solar power operations, covering a 1.5GW peak of solar project development until last year.
Data-Centres
Just how much of an effect data-centres are having on Ireland’s energy supplies was highlighted by a Silicon Republic report back in June which showed that data-centres now consume a massive 21 per cent of Ireland’s electricity! With this figure set to rise to one-third of the country’s total electricity consumption by 2026, it’s perhaps unsurprising that political leaders, environmental bodies and councils are now particularly concerned about the effects of more massive data-centres being built in Ireland.
That said, Amazon was granted permission last September for three new data-centres at a data campus near Mulhuddart, northwest of Dublin. As part of the conditions for granting planning, however, Amazon had to install the infrastructure to develop a district heating scheme for recycling the heat from the data-centres.
What Does This Mean For Your Business?
The refusal of Google’s planning application for a third data-centre in South Dublin highlights the growing challenges related to the energy consumption of data-centres and their environmental impact. As mentioned above, data-centres already consume a significant portion of Ireland’s electricity (currently 21 per cent), with projections indicating this could rise to a third by 2026. This heavy demand places immense pressure on the national grid, which has (not surprisingly) prompted concerns from local councils, environmental groups, and state energy operators like EirGrid. For businesses, the decision by South Dublin Council illustrates the increasing importance of considering energy efficiency and sustainability in operational plans. Companies will need to explore alternative energy solutions, such as integrating renewable energy sources, to avoid similar setbacks.
For Ireland, this decision reflects a broader commitment to sustainability and managing environmental impacts in line with its national climate goals. By setting a precedent for stricter energy consumption and environmental guidelines, the refusal indicates that future data-centre developments will be closely scrutinised. This could influence other tech giants and data-reliant businesses, prompting them to reassess their environmental strategies and engagement with local communities.
Google may now face an unexpected challenge in adapting its expansion plans to meet these new expectations. The company must demonstrate a stronger commitment to sustainable energy practices and compliance with local development regulations. That said, Amazon was able to get approval for three data-centres by focusing more on renewable energy and giving something back to the community via a heat-recycling scheme, so despite this initial refusal, it’s quite possible that Google could still gain approval on appeal with the right alterations to its initial plans.
For the local community, this refusal could be seen as a step towards ensuring that large-scale developments don’t compromise the quality of life, local infrastructure, and environmental health. That said, the refusal also means that 50 jobs won’t be created, and Ireland’s digital economy and ICT may not be as well supported as it could have been, thereby missing out on the economic benefits.
For businesses in Ireland, this case serves as a warning and an opportunity. It signals a shift towards a need for sustainable growth and the need to align business operations with both local and national environmental standards.
Tech News : Warning Against Giving Smartphones To Under-11s
Mobile network provider EE has launched a smartphone age guidance initiative in which it advises that children under 11 should only use non-smart devices with limited capabilities.
Children With Smartphones
EE says its new initiative is in response to concerns about children’s online safety and the impact of device usage on their wellbeing. Back in February, for example, an Ofcom study revealed that almost a quarter of UK five-to-seven-year-olds have their own smartphone. The study also showed that nearly two in five are using the messaging service WhatsApp, despite the minimum age limit being 13, while over half of children under 13 use social media. In fact, the study showed that three-quarters of social media users aged between 8 and 17 have their own account or profile on at least one of the large platforms, and many children in the study also said they simply lie to gain access to new apps and services. Worryingly, almost three-quarters of teenagers between the ages of 13 and 17 have encountered one or more potential harms online.
What’s The Problem?
The use of smartphones and social media by young children poses significant risks, including exposure to inappropriate content such as violence and explicit material, as well as cyberbullying. Privacy concerns also arise, as children may inadvertently share personal information or be subject to data collection by apps. Also, the extensive use of screens can lead to mental health issues, such as anxiety, depression, and sleep disruption, while also potentially contributing to screen time addiction and reduced attention spans.
As EE says, parental concerns are growing as children increasingly use devices at a young age, often bypassing age restrictions to access social media. These concerns centre on the content that children are exposed to, the amount of time they spend on devices, and the potential impact on their mental wellbeing and social development. In fact, children themselves report mixed experiences with social media, indicating that while it offers social connection, it can also lead to stress, anxiety, and feelings of inadequacy.
In the UK (back in March), a Parentkind survey showed that 58 per cent of parents would support the idea of introducing a ban on smartphones for under 16s (77 per cent among parents of primary school children). The survey also showed that 83 per cent of parents said that they felt smartphones are potentially harmful to young people.
EE’s Initiative Targeting Under 16s
In response to these parental concerns, EE says the initiative targets children under 16, classifying device usage into three age bands: under 11s, 11-13, and 13-16.
Key Recommendations
EE’s key recommendations for each group as part of the initiative are that:
– Under 11s should only use limited-capability, non-smart devices, such as feature phones, so that parents can ensure their children are able to make calls and send texts while having no access to social media or inappropriate content.
– For children aged 11-13, the advice is that any smartphone should have parental controls enabled and a family-sharing app in place, such as Google Family Link or Apple Family Sharing, with access to social media restricted.
– For 13-16-year-olds, EE suggests that smartphones are appropriate, but parental controls should be used to manage and restrict access to inappropriate sites, content, and platforms. For this age group, EE says smartphones can allow social media access, but accounts should be linked to a parent or guardian account.
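As a purely illustrative sketch (not part of EE’s initiative, and all names and fields below are hypothetical), the three age bands above could be represented as a simple lookup:

```python
# Illustrative only: maps a child's age to the EE-style guidance bands
# described above. The function name and policy fields are hypothetical,
# not any real EE API or product.

def recommended_policy(age: int) -> dict:
    """Return a hypothetical guidance record for a given age."""
    if age < 11:
        # Under 11s: non-smart device, calls and texts only
        return {"band": "under 11",
                "device": "non-smart feature phone",
                "social_media": False,
                "parental_controls": True}
    elif age < 13:
        # 11-13: smartphone allowed, but social media restricted
        return {"band": "11-13",
                "device": "smartphone with family-sharing app",
                "social_media": False,
                "parental_controls": True}
    elif age < 16:
        # 13-16: social media allowed via a parent/guardian-linked account
        return {"band": "13-16",
                "device": "smartphone",
                "social_media": True,
                "parental_controls": True}
    else:
        # Outside the scope of the initiative
        return {"band": "16+",
                "device": "smartphone",
                "social_media": True,
                "parental_controls": False}

print(recommended_policy(10)["social_media"])  # False
print(recommended_policy(14)["band"])          # 13-16
```

The point of the sketch is simply that the guidance is age-banded with hard cut-offs at 11, 13, and 16, so any parental-control tooling built around it reduces to a small decision table.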
Backed By Charity Groups
EE is keen to stress that this smartphone guidance initiative has the backing of recognised charity groups, including Internet Matters, a leading child safety organisation. For example, Internet Matters CEO Carolyn Bunting said: “This initiative is timely and much needed. Parents and guardians want their children to be able to stay connected with them and to experience the benefits of digital technology, but they are also concerned about online safety and wellbeing. Our recent research showed that parents want to make their own decisions about their children’s use of technology, but that many would value guidance to help them in doing so. It is fantastic that EE is supporting parents with age-specific advice to support children’s diverse technology needs.”
Part Of A Range Of Measures
EE also says the initiative is part of a wider programme to promote safe and responsible use of technology among young people, which also includes enhanced in-app parental controls, child-friendly products (such as the ‘Dash+’, made through a partnership with Verve Connect), and a family Online Safety Hub, due to launch later this year.
Mat Sears, Corporate Affairs Director for EE, commented: “While technology and connectivity have the power to transform lives, we recognise the growing complexity of smartphones can be challenging for parents and care-givers. They need support, which is why we are launching new guidelines on smartphone usage for under 11s, 11-13-year-olds, and 13-16-year-olds to help them make the best choices for their children through these formative years.”
Other Mobile Operators?
EE is not the only mobile operator with initiatives to protect its youngest customers. For example, Vodafone has its “Digital Parenting” platform, providing parents with tools and advice on managing children’s online activity, including guidance on setting screen time limits and understanding online risks. Similarly, O2 has partnered with the NSPCC to offer resources and workshops aimed at educating both parents and children about online safety and responsible use of technology.
Government
It’s worth noting that back in May, the House of Commons Education Committee asked the UK government to consider a total ban on phones for under-16s. Although this wasn’t supported by the Prime Minister, he did say that the government would be looking again at what content young people can access online. It’s also worth noting that the initiatives by UK mobile operators like EE, Vodafone, and O2 to promote online safety for children are closely aligned with the objectives of the UK’s Online Safety Bill, which mandates that companies must take proactive steps to protect users, particularly children.
What Does This Mean For Your Business?
In today’s digital society, the use of smartphones by children can offer several benefits, such as keeping them connected with family and aiding in their learning and development.
However, the increasing concerns about online safety and the impact of digital device usage on children’s well-being cannot be ignored. EE’s initiative to guide smartphone usage for different age groups, therefore, highlights the growing recognition of these issues and the need for tailored approaches to address them. For EE and other mobile operators, this initiative could, of course, mean an enhanced reputation as a responsible company that is seen to care about the well-being of its youngest users.
By offering practical tools and guidance, these companies can not only mitigate potential risks but also build stronger relationships with parents and guardians who are looking for ways to manage their children’s digital lives more effectively. It may also be a way for mobile operators to stay on the right side of legislation and to be seen to be responding positively to government pressure.
For UK parents and guardians, EE’s guidelines may provide some much-needed clarity and support in an area that is becoming increasingly complex. By following age-specific advice and using the recommended parental controls, families may feel better equipped to navigate the challenges of children’s smartphone use and to do more to protect their children.
EE’s proactive approach in this initiative could actually help children to enjoy the benefits of digital technology while minimising some of the risks, thereby supporting their mental and emotional development. Also, initiatives like these align with broader legislative efforts, such as the UK’s Online Safety Bill, reinforcing a collective move towards a safer digital environment for young users. For children, this could mean a safer, more structured online experience that could promote positive interactions and healthy digital habits. As these practices become more widespread, they may set a higher standard for how technology can be used responsibly to enhance, rather than harm, young lives.
However, it’s not just mobile operators that need to step up. Social media companies also play a crucial role in making the digital world safer for children. These platforms must take greater responsibility by implementing stricter age-verification processes to prevent underage users from accessing content that is not suitable for them. Many believe that social media companies could do far more to enhance their content moderation systems so that harmful content, such as cyberbullying, explicit material, and misinformation, is swiftly identified and removed. They could also provide better tools and resources for parents and guardians to monitor and control their children’s activities on these platforms, promoting a safer online environment. By collaborating with mobile operators, educators, and policymakers, social media companies could, therefore, be part of a more comprehensive approach to safeguarding children in the digital space.