An Apple Byte : Security Concerns For New AI-Powered iPhones
As highlighted recently by the Wall Street Journal, the range of AI-driven features in Apple’s latest iPhones could transform how enterprises operate, provided company data is protected and businesses understand how best to use these new features.
Apple’s new “Apple Intelligence” tools, unveiled with the iPhone 16 models and also supported on the iPhone 15 Pro, offer improved capabilities such as enhanced voice assistance and sophisticated text and photo editing. While these innovations may appeal to consumers, they may hold particular promise for UK businesses looking to harness the power of generative AI to streamline processes and boost efficiency.
A key selling point of these new Apple devices is their on-device AI functionality, allowing tasks to be run locally without the need for a cloud server, which could offer a new way to boost innovation within the workplace. However, some businesses may be concerned about whether this will keep sensitive company data secure and may need reassurance that adopting these tools won’t put their information at risk, particularly given how new the AI technologies are.
That said, Apple has addressed these concerns with its Private Cloud Compute system, designed to handle AI tasks securely when they can’t be processed on the device itself. Apple has also introduced transparency logs, which allow businesses to see exactly when AI apps are running locally versus in the cloud. However, the lack of clarity over exactly how and when data might be exposed to external servers continues to raise questions, particularly as Apple incorporates third-party AI tools like ChatGPT (though this feature will remain off by default).
Despite the challenges, many enterprise leaders may be optimistic about the potential for Apple’s AI tools and the benefits the technology could deliver once data security is fully addressed. With AI becoming an increasingly vital tool for productivity, businesses (rather than consumers) will likely be the driving force behind widespread adoption.
Security Stop Press : Teenager Arrested In Connection With TfL Cyber Attack
A 17-year-old male has been arrested on suspicion of Computer Misuse Act offences in relation to a cyber attack on Transport for London (TfL) on 1 September.
Although TfL reported on its website on 5 September that “there is no evidence that any customer data has been compromised”, it has since been reported that further investigation has revealed this may not be the case.
It’s been reported that Shashi Verma, TfL’s chief technology officer, has said that investigations have now revealed that “certain customer data has been accessed”, which could include “some customer names and contact details” (which may include some email and physical addresses). It’s also been reported that some customer Oyster card refund data may have been accessed, which may include “bank account numbers and sort codes”. The teenage suspect (believed to be from Walsall) was arrested on 5 September, questioned, and bailed.
TfL has now referred itself to the Information Commissioner’s Office (ICO), says it is working with its partners to progress the investigation, and says it will be contacting customers directly about the matter. TfL also says it has implemented new IT security measures to add extra protection to all its safety-critical systems and processes.
Sustainability-in-Tech : Rapidly Growing Water Demand For Data-Centres
Information recently obtained by the Financial Times has revealed that a huge spike in water consumption by dozens of facilities in Virginia’s “data-centre alley” likely means new initiatives to replenish or conserve water resources are urgently needed.
Usage Up By Two-Thirds
The county authority figures show that water consumption at the hyperscaler data-centres surrounding Ashburn, VA (an area said to host a staggering 70 per cent of the world’s internet traffic daily) rose by nearly two-thirds between 2019 and 2023, reaching around 7 billion litres a year!
Hyperscalers
So-called hyperscalers are large-scale cloud service providers that offer massive computing resources and include companies like Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), IBM Cloud, Oracle, and Facebook. These companies play a major role in the technology landscape, providing cloud infrastructure that supports a wide range of services and applications globally. However, due to the expansion of power-hungry, water-cooled AI-driven infrastructure and data-centres, the water consumption of these hyperscalers has increased significantly in recent years.
For example, water usage at Microsoft data-centres rose by 34 per cent between 2021 and 2022, driven by the need to cool denser AI server racks and, similarly, Google reported a 20 per cent increase in water consumption, using 19.5 million cubic metres of water in 2022.
Why Water?
AI workloads require highly efficient cooling systems to prevent the overheating of servers, which run continuously and generate significant heat. Traditional cooling methods often involve evaporative cooling, where water evaporates to absorb heat, lowering the temperature of data-centre equipment. This results in heavy water usage, especially for data-centres operating in warmer regions.
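To put the physics in perspective, here’s a rough back-of-envelope sketch (purely illustrative, with a hypothetical facility size) of how evaporative cooling turns heat load into water consumption, using the latent heat of vaporisation of water (about 2.26 MJ per kg) and assuming, as an idealised best case, that all heat is removed by evaporation:

```python
# Back-of-envelope estimate of evaporative cooling water use.
# Idealised: assumes ALL server heat is removed by evaporating water;
# real cooling towers use more due to blowdown and inefficiencies.

LATENT_HEAT_MJ_PER_KG = 2.26  # latent heat of vaporisation of water

def litres_evaporated(heat_load_mw: float, hours: float) -> float:
    """Litres of water evaporated to absorb a given heat load."""
    energy_mj = heat_load_mw * hours * 3600  # MW over N hours -> MJ
    kg = energy_mj / LATENT_HEAT_MJ_PER_KG
    return kg  # 1 kg of water is roughly 1 litre

# A hypothetical 30 MW facility running for a day:
daily = litres_evaporated(30, 24)
print(f"{daily:,.0f} litres/day")
```

Even this idealised sketch lands at over a million litres a day for a single mid-sized facility, which helps explain the billions of litres reported across dozens of sites.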
Water Demand Fuelled by AI Infrastructure Growth
As cloud computing and AI have expanded, the need for more water (and more efficient water usage) has grown, i.e. AI infrastructure growth has fuelled the spike in demand for water.
Issues
In addition to the sharp increase in water demand, there are several other issues to take into account when trying to tackle water usage. For example:
– Water scarcity. Many data-centres are located in regions already facing water shortages or droughts. For example, in The Dalles, Oregon, a Google data-centre was criticised for using one-third of the city’s water supply in a drought-prone area. It’s easy to see how this places additional stress on freshwater supplies in regions where water is a finite resource.
– The environmental impact. The use of water for cooling in such large quantities, particularly in arid or drought-prone areas, can negatively affect local ecosystems and water availability for communities.
– An apparent lack of transparency. It seems that many companies are not transparent about their water usage, making it difficult to gauge the full impact on local water resources. Public reporting on water use, similar to energy use, remains inconsistent across the industry.
The Type of Water Used In Data Centres
One key issue that deserves special attention is what type of water is used for data centre cooling. For example:
– Freshwater. Most data centres rely on freshwater sources, which are used in cooling towers for evaporative cooling. However, freshwater is a limited resource, and overuse can stress local supplies.
– Recycled water. It is worth noting here, however, that some hyperscalers are now beginning to use recycled or reclaimed water to mitigate their environmental impact. For example, Amazon Web Services (AWS) uses recycled wastewater in its Virginia data-centres, helping conserve high-quality water for community use.
Research
Research in the British Standards Institution (BSI) and Waterwise report “Thirst for Change” also makes some key points and recommendations to consider when looking at data-centres’ massive water use. It highlights the critical issues related to freshwater resource management, focusing on the growing urgency of water security in the context of global environmental challenges. Some of the key relevant points and conclusions from the research include:
– There is now a water security crisis. The research makes the point that freshwater is a finite resource, and the global water security crisis is just as urgent as climate change. Both population growth and increased demand for water, particularly in industrial sectors, are straining water supplies.
– The tech sector is still highly water-intensive, especially data-centres. With the rise of cloud computing, AI, and data-centres, the demand for water has skyrocketed, adding to the strain on limited freshwater resources.
– Water management and responsible water usage are now critical. The research emphasises the need for large-scale industries, including tech companies, to recognise their role in contributing to water scarcity and to adopt more sustainable water practices.
– There is a need for a circular economy in water usage, i.e. water recycling. One of the primary recommendations from the report is the need to transition towards a circular economy mindset in water use, particularly in sectors like tech. This involves recycling and reusing water wherever possible, reducing excessive freshwater extraction.
– Innovation in water efficiency is needed, i.e. water-efficient technologies, especially in data-centres. The research suggests that the wider tech sector needs to adopt innovative systems that support water reuse and reduce reliance on freshwater for cooling and other processes.
– Companies need to push beyond the environmental net gain of merely becoming water-efficient and to strive for a net positive environmental impact by replenishing water resources and engaging in water conservation initiatives.
Alternative Cooling Technologies
Recognition that data-centres must both meet the cooling requirements of an AI-fuelled boom and reduce their reliance on water-based cooling systems has led to experiments with several alternative technologies. The hope is that one or more of them could prove a viable way to address both efficiency and environmental concerns. Examples of such innovations:
– Liquid cooling. This is increasingly being adopted to handle the high heat loads generated by AI and high-performance computing. It includes two main methods: direct-to-chip cooling, which circulates liquid directly over a system’s heat-generating components (e.g. CPUs and GPUs) using cold plates, and immersion cooling, which fully submerges servers in a dielectric (non-conductive) liquid that absorbs and dissipates heat. The latter can eliminate the need for air cooling entirely, offering higher efficiency, especially for dense computing environments.
– Refrigerant-based cooling. This method involves using refrigerants instead of water. Refrigerant-based systems have excellent thermal conductivity, making them more efficient at transferring heat away from components. They are becoming popular for high-density racks and can be scaled to handle increasing workloads.
– Chilled water systems. Some data-centres continue to use chilled water, but advancements like rear door heat exchangers (RDHx) are improving efficiency. These systems use chilled water to cool the air before it enters the data-centre, but now take up less space and offer “room-neutral” cooling, meaning the air exiting the system is at near-ambient room temperature.
– Air-based free cooling. This method uses external ambient air, particularly in cooler climates, to reduce the need for mechanical cooling. This approach works best in regions with cold climates, and it’s already being used in data-centres in places like Sweden and Finland.
– AI-optimised cooling. Ironically, the AI that’s creating more heat can also be used to optimise cooling efficiency by predicting heat loads and managing energy use dynamically. AI can help balance the use of cooling resources more effectively, ensuring that the cooling system is only used when necessary.
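As a purely illustrative sketch of that last idea (real deployments use full machine-learning models trained on sensor data, not a simple moving average, and all figures below are hypothetical), predictive cooling control amounts to forecasting the next heat load and provisioning just enough cooling, rather than running chillers at a fixed worst-case level:

```python
# Toy sketch of predictive cooling control (illustrative only).
# Idea: forecast the next heat load from recent readings and set
# cooling capacity to the forecast plus a safety margin, instead of
# always provisioning for the worst case.

def smooth(loads, alpha=0.5):
    """Exponentially smoothed prediction of the next heat load (kW)."""
    pred = loads[0]
    for x in loads[1:]:
        pred = alpha * x + (1 - alpha) * pred  # weight recent readings more
    return pred

def cooling_setpoint(loads, headroom=1.1):
    """Cooling capacity to provision: predicted load plus 10% headroom."""
    return smooth(loads) * headroom

recent_kw = [800, 820, 860, 900]  # hypothetical rack heat readings
print(f"Provision ~{cooling_setpoint(recent_kw):.0f} kW of cooling")
```

The saving comes from the gap between a fixed worst-case setpoint and what the workload actually needs at any moment; the headroom parameter trades energy and water savings against thermal risk.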
Water Replenishment Programmes
It should be noted that one thing tech companies are increasingly investing in to help the situation is water replenishment programmes. These are being used to offset their water usage, especially as data-centres require significant cooling resources. As well as helping the tech companies to meet their sustainability goals and reduce water consumption, these programmes are (as the name suggests) also designed to replenish water in communities, particularly in areas impacted by drought or water scarcity. Examples include:
– Amazon Web Services (AWS) has implemented a range of water replenishment projects globally. For example, in 2023, its efforts returned 3.5 billion litres of water to local communities. AWS plans to expand this to over 7 billion litres annually across 21 projects, with initiatives in countries like the US, Brazil, Chile, and China. For instance, in Chile’s Maipo Basin, AWS is partnering with local farmers and using AI to improve irrigation efficiency, saving around 200 million litres of water annually. Similar AI-driven projects in Brazil are helping monitor water usage and soil quality.
– Microsoft is working towards becoming water-positive by 2030, aiming to replenish more water than it consumes. It has invested in over 49 replenishment projects worldwide, focusing on areas of high water stress. These projects include restoring wetlands and repairing irrigation systems to improve water supply reliability. For example, in Mexico City, Microsoft is reviving traditional wetland agriculture, expected to replenish 3.8 million cubic metres of water over a decade.
– Google has committed to replenishing 120 per cent of the water it consumes by 2030. In 2023, its water stewardship projects replenished over 1 billion gallons of water, addressing 18 per cent of its freshwater consumption. These projects focus on improving water quality and enhancing water efficiency across regions with high water scarcity.
All that said, critics might argue that water replenishment programmes often focus on offsetting usage rather than reducing consumption, making them more of a band-aid solution than a long-term fix for the growing water scarcity problem.
Energy-Hungry
In addition to their massive water demand for cooling, it should be acknowledged that data-centres are also known for their huge energy requirements, a situation that is also getting worse with the growing demand for AI infrastructure. For example, investment firm Carbon Collective estimates that the electricity currently used by data-centres could power around 6.5 million average (U.S.) homes!
What Does This Mean For Your Organisation?
As data-centres continue to expand and support the growing demand for cloud computing and AI infrastructure, their immense consumption of water presents a critical challenge that can no longer be overlooked. The surge in water usage, particularly in hyperscale facilities, means there’s now an urgent need for the tech industry to rethink its approach to sustainability. Relying heavily on water-intensive cooling systems is becoming increasingly untenable, especially as regions like Virginia and Oregon experience the strain of limited freshwater resources.
For businesses in the data-centre space, therefore, this trend highlights the necessity of embracing innovative cooling technologies, such as liquid cooling and AI-optimised systems, that reduce reliance on water while maintaining operational efficiency. Simultaneously, the shift toward using recycled water and investing in water replenishment programmes, as seen with Amazon, Microsoft, and Google, represents an important step toward more responsible resource management.
Ultimately, this evolving landscape presents an opportunity for tech companies to lead the way in sustainable water practices. By innovating and adopting circular water-use models, these businesses can mitigate their environmental impact, meet regulatory expectations, and build a more sustainable future for the industry. However, failure to act on this issue could not only jeopardise environmental sustainability but also risk operational and reputational challenges as resource scarcity intensifies.
Video Update : Undertake Competitor Analysis With AI
This video tutorial explains in depth how to identify your competitors’ strengths and weaknesses using ChatGPT.
[Note – to watch this video without glitches/interruptions, it’s best to download it first]
Tech Tip – “File Explorer Preview Pane” for Fast Document Review
The Preview Pane in File Explorer lets you preview documents, images, and PDFs without opening them, saving time when reviewing multiple files in a folder. Here’s how it works:
Enable Preview Pane
– Open File Explorer by pressing Win + E.
– Click on the View tab (in Windows 11, the View menu) and then click Preview Pane. Alternatively, press Alt + P to toggle it.
Preview Files
– Click on any file (document, image, PDF) in File Explorer, and it will be previewed in the right pane without opening the full application.
Featured Article : Musical Misconduct
In a first-of-its-kind case, a US musician has been charged with fraud for allegedly using thousands of automated bot accounts to stream AI-generated tracks from which he made more than $10m in royalty payments.
Which Tracks?
The music tracks that 52-year-old Michael Smith from North Carolina in the US allegedly used came from an alleged co-conspirator (a music promoter and the CEO of an AI music company) who, from 2018, supplied him with hundreds of thousands of AI-generated songs, described by the co-conspirator as “instant music”.
Uploaded To Music Streaming Platforms
Smith then allegedly uploaded these tracks to music streaming platforms like Spotify, Apple Music, Amazon Music, and YouTube Music. Typically, when songs are uploaded to music streaming platforms, the artists earn royalties based on the number of streams their songs receive.
Then Used Automated Bots To Inflate The Number of Streams
In the case of Mr Smith, the allegation is that he then used “bots” (automated programs) to stream the AI-generated songs billions of times. The indictment says that, at the height of his alleged fraudulent scheme, Mr Smith “used over a thousand bot accounts simultaneously to artificially boost streams of his music across the Streaming Platforms”. It’s alleged that by manipulating the streaming data in this way, Smith was able to fraudulently obtain “more than $10 million in royalty payments to which he was not entitled”.
How Royalties Work Via Music Streaming Platforms
Royalties paid to songwriters, composers, lyricists, and music publishers (“Songwriters”) are funded by streaming platforms like Spotify and Apple Music. These platforms allocate a percentage of their revenue (called the “Revenue Pool”) to performance rights organisations (PROs) and the Mechanical Licensing Collective (MLC). PROs manage performance royalties, while the MLC handles digital mechanical royalties for reproducing and distributing songs. The streaming platforms send both streaming data and revenue to these organisations, which then distribute royalties proportionally to the Songwriters based on the number of streams their songs received.
Similarly, performing artists and record companies (“Artists”) receive royalties from a separate pool, also funded by a percentage of streaming platform revenues. These funds are allocated based on the total number of streams each artist’s recordings receive, and the royalties are typically paid to Artists through record labels and distribution companies.
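The pro-rata model described above can be sketched in a few lines of Python (names and figures are hypothetical), which also shows why bot-inflated streams dilute payouts for everyone else: the pool is fixed, so every fraudulent stream claims a share that would otherwise go to legitimate rights holders.

```python
# Sketch of the pro-rata royalty model described above: a fixed revenue
# pool is divided among rights holders in proportion to stream counts.
# All names and figures are hypothetical.

def split_royalties(revenue_pool: float, streams: dict) -> dict:
    """Divide a revenue pool pro rata by each party's share of total streams."""
    total = sum(streams.values())
    return {who: revenue_pool * n / total for who, n in streams.items()}

payouts = split_royalties(
    revenue_pool=1_000_000.0,
    streams={"artist_a": 6_000_000, "artist_b": 3_000_000, "bot_farm": 1_000_000},
)
# The bot_farm entry captures 10% of the pool that would otherwise be
# split between artist_a and artist_b:
print(payouts)
```

This zero-sum structure is why streaming fraud is treated as theft from other creators rather than just from the platforms.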
Why Fraud?
Streaming fraud, i.e. using bots to inflate stream numbers, diverts royalties from legitimate creators to those engaging in fraudulent activity. In this case, the allegation is that Michael Smith committed fraud by making false and misleading statements to streaming platforms, the above-mentioned performance rights organisations (PROs), and music distribution companies. It’s been alleged that his intent was to conceal a massive streaming manipulation scheme, where he used bots to inflate the number of streams for AI-generated songs. By doing so, prosecutors say, Smith used deceptive practices to fraudulently divert royalties meant for legitimate creators who earned their revenue through real consumer engagement by real listeners (not automated bots).
Technology Improved Over Time
Emails obtained from Smith and other participants in the scheme also appear to show how the technology used to create the tracks improved over time, thereby making his scheme more difficult for the streaming platforms to detect. For example, an email from February shows Mr Smith claiming that his “existing music has generated at this point over 4 billion streams and $12 million in royalties since 2019.”
Not The Only Case Of This Kind
Although prosecutors in this case have described it as the first criminal case of its kind, it’s not the only music platform streaming fraud case of recent years. For example:
– The Danish executive case (2024) where a Danish executive got an 18-month prison sentence after using bots from 2013 to 2019 to inflate streams on platforms like Spotify and Tidal, earning around $635,000 in fraudulent royalties.
– The Boomy AI fraud incident (2023) where Boomy, an AI music startup, had millions of its tracks blocked by Spotify due to suspected bot-driven streaming fraud, leading to increased scrutiny of AI-generated music on platforms.
– The Tidal fake streams investigation (2019), where Norwegian authorities investigated Tidal (a global music streaming platform) for allegedly inflating streams for artists like Beyoncé and Kanye West by hundreds of millions, resulting in massive royalty payouts and one of the largest streaming fraud cases to date.
Other AI-Related Music Incidents of Note
It’s not just the use of bots to inflate streams on platforms that has caused AI-driven problems in the music world. For example:
– In 2023, a song titled “Heart on My Sleeve” featuring AI-generated voices that mimicked Drake and The Weeknd (both Canadian artists) went viral on platforms like TikTok and Spotify. Created by a user named Ghostwriter977, the track accumulated millions of streams before being pulled from streaming services following a complaint from Universal Music Group (UMG). UMG argued that the AI technology used to clone the artists’ voices breached copyright law and harmed the rights of real artists. Despite its removal, the incident highlighted growing concerns over the use of AI in the music industry and its potential legal implications.
– In April this year, over 200 prominent artists including Billie Eilish, Chappell Roan, Elvis Costello, and Aerosmith, signed an open letter calling for an end to the “predatory” use of AI in the music industry. This letter, coordinated by the Artist Rights Alliance, highlighted concerns that AI technology is being used irresponsibly to mimic artists’ work without permission, undermining creativity, and devaluing musicians’ rights. The artists warned that AI models are being trained on their copyrighted work without consent, with the potential to replace human artistry and dilute the royalties that artists depend on. They called for developers and platforms to commit to avoiding AI usage which infringes on artists’ rights or denies them fair compensation.
Can Tech Firms Steal Your Voice?
In an interesting AI-related case of a notable class action lawsuit filed in 2024, voice actors Paul Skye Lehrman and Linnea Sage accused AI startup Lovo of illegally cloning and selling their voices without consent. The pair were originally contacted via Fiverr in 2019 and 2020, where they were asked to record voiceover samples for what they were told were “academic research” or radio test scripts. Lehrman was paid $1,200, and Sage $400, with both assured that their recordings wouldn’t be used for anything beyond these stated purposes. However, they later discovered their voices had been cloned using AI and used in commercial content without permission.
Much to Lehrman’s surprise and shock, he heard his voice on a YouTube video about the Russia-Ukraine conflict, discussing topics he had never recorded. The irony deepened when he heard his voice again on the podcast “Deadline Strike Talk”, where his AI-generated voice was used to discuss the impact of AI on Hollywood and the ongoing strikes, i.e. issues central to the lawsuit itself! Sage similarly discovered her voice in promotional materials for Lovo. The lawsuit claims that Lovo misappropriated their voices to market AI-generated versions under the pseudonyms “Kyle Snow” and “Sally Coleman”, which damaged their careers by reducing job opportunities and potentially replacing their work entirely with AI.
This lawsuit highlights a growing concern in the entertainment industry about AI’s unchecked use to clone voices and likenesses without authorisation, raising issues of intellectual property, consent, and fair compensation.
What Does This Mean For Your Business?
The rise of AI in the music and entertainment industry introduces both exciting opportunities and serious risks for music streaming platforms, artists, and individuals whose voices or music may be used without consent. For streaming platforms, cases like Michael Smith’s alleged fraudulent streaming manipulation expose real vulnerabilities in royalty systems, requiring platforms to implement more robust detection methods. As AI-generated content becomes more sophisticated, distinguishing between real and artificial streams will be crucial to prevent fraudulent activity that undermines royalty distribution and trust.
For artists, AI’s ability to clone voices, styles, and entire songs presents an existential challenge to creativity and ownership. The growing number of cases, including the Heart on My Sleeve incident and the lawsuit against Lovo, highlight how AI can be used to replicate an artist’s voice or music without permission. This threatens not only their revenue but also their creative integrity. This illustrates why prominent artists, as seen in the open letter signed by Billie Eilish, Chappell Roan, and others, are calling for clearer protections and industry standards, i.e. to prevent AI from being used in ways that exploit human artistry without proper compensation.
Voice actors and other professionals who rely on their vocal talents are particularly vulnerable to AI voice cloning. Lehrman and Sage’s experience with Lovo illustrates how voice recordings can be misappropriated and used commercially under false pretences, damaging careers and reducing future opportunities. This case highlights the need for businesses, especially those in the tech and entertainment sectors, to develop transparent and ethical policies around AI-generated content, thereby ensuring that creators are properly informed, compensated, and protected.
Beyond the entertainment industry, AI misuse poses a potential risk for the rest of us, especially when it comes to the unauthorised use of voices or faces. AI technology, like voice cloning and deepfakes, can be used to imitate individuals without their consent, creating the potential for serious ethical and legal challenges. For businesses, this means increased vulnerability to fraud, such as the possibility of AI-generated voices being used to impersonate employees or executives in phishing scams. Without proper safeguards, AI can become weaponised to deceive customers or commit fraud against organisations by replicating voices or faces in ways that can bypass security measures, leading to financial and reputational damage.
In response to these growing concerns, industry experts and creators are calling for stronger regulations and protections. Clear consent processes, the development of intellectual property rights linked to a person’s voice and likeness, and technological solutions for detecting fraudulent AI usage now appear to be essential. Ideally, companies and platforms now need to collaborate with policymakers and rights organisations to try and ensure that AI is used ethically, protecting the creative economy and the rights of individuals.