Featured Article : Is DeepSeek Safe?

Following the surprise introduction of Chinese AI chatbot DeepSeek, here we look at what makes it different, and why concerns are growing over its safety and privacy implications.

What is DeepSeek?

DeepSeek is a Chinese-developed AI chatbot that functions much like OpenAI’s ChatGPT or Google’s Gemini. The app exploded in popularity following its release in January 2025, reportedly surpassing three million downloads and becoming the most-downloaded free app on Apple’s App Store in the US. It has been downloaded at three times the rate of competitors such as Perplexity.

Like ChatGPT, DeepSeek provides AI-generated responses to user queries but has been praised for its ability to perform complex reasoning tasks at a fraction of the cost of rival models. DeepSeek’s developers claim it was built with significantly fewer resources than models like GPT-4, making it an attractive and cost-effective alternative.

One of DeepSeek’s standout claims is its low development cost. While OpenAI’s GPT-4 reportedly cost over $100 million to train, DeepSeek’s AI model was purportedly built for just $6 million, i.e. a fraction of the budget! This efficiency has raised eyebrows in Silicon Valley and cast doubt on the assumption that only the most advanced AI chips can power state-of-the-art models. This relatively low development cost has raised doubts (as to its authenticity), as well as concerns for US companies.

Big Losses For Nvidia

DeepSeek’s rapid rise has sent shockwaves through the tech sector. For example, US chip giant Nvidia suffered a staggering $600bn (£482bn) loss in market value after investors questioned the future profitability of high-end AI chips. The shockwave spread to other major players, with Microsoft and Alphabet (Google’s parent company) also seeing significant stock downturns.

The shock of DeepSeek’s introduction and effect on the markets caused US President Donald Trump to call DeepSeek a “wake-up call” for American tech firms, stressing that they must compete harder. OpenAI CEO Sam Altman admitted DeepSeek was “impressive” but insisted OpenAI would continue to build superior models.

However, while DeepSeek’s capabilities have impressed many, its arrival has also triggered serious privacy and security concerns.

What Are the Privacy Concerns?

One of the biggest red flags surrounding DeepSeek is its data privacy policy. Unlike many Western AI platforms, which have moved to storing user data in local data centres, DeepSeek openly states that all user data is stored on servers in China. This includes:

– Personal information such as email addresses, phone numbers, and dates of birth.

– Chat histories, including all questions and responses.

– Technical data such as IP addresses, device information, and even keystroke patterns.

DeepSeek claims this data collection helps improve its services, but critics warn that it also grants the Chinese government (i.e. the Chinese Communist Party) potential access to vast amounts of sensitive user information. Under China’s cybersecurity laws, companies are required to cooperate with state intelligence efforts, meaning that the government could theoretically demand access to DeepSeek’s data at any time.

Warnings From Australia And The UK

Australia’s science minister, Ed Husic, has already warned users to be “very careful” when using the app, highlighting unanswered questions about data privacy. Also, the UK’s Information Commissioner’s Office has reminded users of their rights regarding data protection, urging AI developers to ensure transparency in how personal data is used.

US Navy Personnel Banned From Using It

Meanwhile, the US Navy has taken the drastic step of banning its personnel from using DeepSeek entirely, citing security concerns. White House press secretary Karoline Leavitt confirmed that US officials are actively investigating the national security implications of the app.

Security Breaches and Leaks

Privacy concerns escalated further when cybersecurity researchers at Wiz reported discovering that DeepSeek had an unprotected internal database leaking user chat histories, API keys, and other sensitive data to the open internet. More than a million unencrypted logs were exposed due to what appeared to be a simple misconfiguration. While DeepSeek moved quickly to secure the database, it remains unclear whether any unauthorised parties accessed the data before the breach was fixed.

Experts warn that this kind of security lapse suggests a worrying lack of basic cybersecurity hygiene. A spokesperson for Wiz has been reported as saying: “Misconfigured databases are often due to human error rather than malicious intent,” and “When dealing with user data at this scale, mistakes like this are simply unacceptable.”
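Misconfigurations of this kind are usually detectable with routine checks. Below is a minimal, hypothetical sketch of the sort of exposure scan a security team might run against its own infrastructure; the port list is illustrative (the ClickHouse ports reflect the kind of service reported in the Wiz findings), and a real audit would use a dedicated scanner rather than this simplified approach.

```python
import socket

# Ports commonly associated with databases that should never be internet-facing
# (illustrative list only, not exhaustive).
DB_PORTS = {
    5432: "PostgreSQL",
    3306: "MySQL",
    6379: "Redis",
    8123: "ClickHouse (HTTP)",
    9000: "ClickHouse (native)",
}

def open_db_ports(host: str, ports: dict = DB_PORTS, timeout: float = 1.0) -> list:
    """Return the names of database services accepting TCP connections on `host`."""
    exposed = []
    for port, name in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connection succeeded
                exposed.append(name)
    return exposed
```

Run against an organisation’s own public-facing hosts (never anyone else’s), any service this flags should almost certainly be sitting behind a firewall, VPN, or at minimum an authentication layer.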

Censorship and Propaganda Concerns

Another major issue with DeepSeek appears to be its approach to content moderation. For example, users have reported that the chatbot censors politically sensitive topics, particularly those related to the Chinese government. One widely reported example is that when asked about the 1989 Tiananmen Square massacre, DeepSeek simply refused to provide an answer, stating: “I am sorry, I cannot answer that question.”

Some critics have also argued that this suggests DeepSeek is designed not just as a neutral AI assistant but as a tool that aligns with Chinese government policies. For example, John Scott-Railton, a senior researcher at the University of Toronto’s Citizen Lab, has warned that AI models like DeepSeek could be used to subtly influence public opinion, saying: “When you interact with an AI like this, you’re not just getting neutral information—you’re getting content that is shaped by the policies and priorities of the company behind it.”

How Does It Really Compare to Other AI Models?

DeepSeek’s privacy policy is not entirely unique, i.e. many AI platforms collect extensive user data. ChatGPT, for example, retains user inputs to improve its models, and Google’s Gemini collects detailed device and usage data.

However, the key difference lies in where the data is stored and who has access to it. Unlike DeepSeek, OpenAI and Google operate under strict US and EU regulations that limit how personal data can be shared or used. DeepSeek’s data policies, on the other hand, appear to leave the door open for potential state surveillance.

For users, this means that while DeepSeek may offer an innovative and cost-effective AI experience, it may also come with significant risks that are hard to overlook.

What Does This Mean For Your Business?

The rise of DeepSeek showed (much to the shock of the US) that powerful AI models are no longer exclusive to Silicon Valley’s tech giants. However, while its impressive capabilities and cost efficiency make it an attractive option, concerns surrounding its privacy policies, security vulnerabilities, and potential government oversight have raised some serious questions for businesses considering its adoption.

For companies looking to integrate AI into their operations, DeepSeek’s ability to perform complex reasoning tasks at a fraction of the cost of rival models may seem like a compelling advantage. However, its data storage practices appear to stand in stark contrast to those of Western AI providers. Unlike ChatGPT or Google’s Gemini, which operate under strict US and EU data protection regulations, DeepSeek openly stores user data on servers in China. Given China’s cybersecurity laws, which require companies to cooperate with state intelligence efforts, businesses using DeepSeek should acknowledge the very real possibility that sensitive information could be accessed or monitored by the Chinese government.

This, of course, raises critical concerns for organisations handling confidential or highly regulated information. For example, companies operating in finance, healthcare, legal services, and government sectors may want to be particularly cautious, as the use of DeepSeek could lead to unintended breaches of data protection laws such as the UK’s Data Protection Act or the EU’s GDPR. The potential for regulatory scrutiny, legal repercussions, or even outright bans on the software in certain jurisdictions can’t be ignored. The fact that the US Navy has already prohibited its personnel from using DeepSeek, citing national security risks, suggests that further restrictions may follow, particularly in industries dealing with sensitive data or intellectual property.

Beyond privacy and compliance issues, the recent security breach that left user chat histories and API keys exposed raises further doubts about the reliability of DeepSeek’s cybersecurity practices. While the developers moved quickly to secure the database, the fact that such an oversight occurred at all suggests a worrying lack of basic security protocols. For businesses, this highlights the potential risk of data leaks, unauthorised access, and cyber espionage, i.e. concerns that no organisation can afford to take lightly.

Another challenge lies in DeepSeek’s approach to content moderation and information control. Reports of the chatbot refusing to answer politically sensitive questions, particularly those related to the Chinese government, indicate that it is not merely a neutral AI assistant but one that aligns with the policies of the state that developed it. This raises important questions about bias, censorship, and the reliability of information provided by the platform. Businesses relying on AI for research, market analysis, or customer engagement must be aware that the responses they receive may not always be objective or complete.

Given these concerns, organisations considering the use of DeepSeek should carefully evaluate whether the potential benefits outweigh the risks.

While the app’s cost efficiency and performance may seem appealing, any business dealing with sensitive data, regulatory requirements, or intellectual property should approach it with extreme caution. Those who do choose to explore its capabilities should ensure that no confidential or personally identifiable information is entered into the system and should implement strict internal controls to mitigate the risk of exposure.
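One simple internal control is a pre-send filter that blocks prompts containing obvious personal identifiers before they ever reach a third-party AI service. The sketch below is a minimal, hypothetical illustration of the idea; the patterns are deliberately crude examples, and a real deployment would rely on a proper data-loss-prevention tool rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only — far from exhaustive.
PII_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK phone number": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def pii_found(prompt: str) -> list:
    """Return the labels of any PII patterns detected in `prompt`."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

def safe_to_send(prompt: str) -> bool:
    """True only if no PII pattern matched; gate outbound AI requests on this."""
    return not pii_found(prompt)
```

A wrapper around the chatbot’s API could refuse any request where `safe_to_send` returns False, logging the matched labels so staff understand why a prompt was blocked.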

Ultimately, DeepSeek represents both the promise and the peril of modern AI. Its rapid ascent proves that cutting-edge technology can emerge from beyond the traditional powerhouses of the tech industry, but its controversial data policies and security concerns serve as a stark reminder that not all AI models are created equal.

Tech Insight : UK Employers Ramp Up Workplace Surveillance

Workplace surveillance is becoming an inescapable reality for employees across the UK, with new research showing that 85 per cent of employers now monitor their staff’s online activity.

What’s Going On?

While businesses argue that these measures are essential for productivity and security, research indicates that employees are increasingly feeling the strain, leading to stress, distrust, and even resignations. So, just how widespread is workplace surveillance, what methods are being used, and what does it mean for the future of work?

The Scale of Workplace Surveillance

Workplace surveillance refers to the various ways employers track, record, and analyse their employees’ activities during work hours. While some monitoring practices, such as logging clock-in times, have been around for decades (going as far back as the nineteenth century), the digital era has significantly expanded what’s possible. Employers are now using sophisticated tools to track emails, internet usage, keystrokes, and even employees’ locations. In some cases, surveillance goes even further, with real-time screen monitoring and video surveillance becoming increasingly common.

The latest findings from an ExpressVPN survey have highlighted just how pervasive this practice has become. Their research shows that 85 per cent of UK employers admit to monitoring their staff in some way, with 54 per cent tracking active work hours, 36 per cent keeping an eye on website visits, and 27 per cent using software to observe employees’ screens in real-time. More intrusive measures, such as keystroke logging and location tracking, are also on the rise.

Employers Prefer In-Office Work

It appears that this shift has been accelerated by remote and hybrid work models which, since the pandemic years, have left many employers feeling out of control. The ExpressVPN study found that 72 per cent of employers prefer in-office work because it reduces the need for surveillance, while 51 per cent openly admit that they do not trust employees to work unsupervised.

Who’s Watching – Large Corporations or Small Businesses?

It seems that workplace surveillance isn’t confined to large corporations; small and medium-sized enterprises (SMEs) are known to be engaging in these practices too. However, larger firms tend to have more sophisticated monitoring tools and policies in place.

For example, tech giants such as Amazon have long been criticised for using surveillance technology to track warehouse workers and delivery drivers. Employees have reported feeling intense pressure due to constant monitoring, with some even being penalised for taking bathroom breaks. Similarly, financial institutions such as Barclays and PwC have been reported to track employees’ computer activity, logging how long they are active on their devices.

Also, companies like Microsoft have faced backlash over their “Productivity Score” tool, which was criticised for allowing managers to monitor individual workers’ performance at an almost microscopic level. In response to concerns about privacy, Microsoft eventually scaled back the tool’s capabilities, but the fact remains that workplace surveillance is no longer just about keeping track of attendance but is more about watching employees’ every digital move.

Why is Workplace Surveillance Increasing?

The rise of workplace surveillance is closely tied to the growing number of employees working remotely or in hybrid arrangements. Many businesses feel that without physical oversight, they cannot ensure that staff are working productively.

Employers also cite security concerns as a major reason for surveillance. With more employees accessing company data from home, businesses worry about sensitive information being leaked, stolen, or misused. Surveillance is seen as a way to safeguard against these risks, ensuring that employees are not engaging in unauthorised activities.

However, there is also a less talked-about reason behind the rise in monitoring, i.e. control. Many businesses simply feel uneasy about not being able to see what their staff are doing at all times. This has led to an increasing reliance on tracking tools to maintain a sense of authority, even when employees are working from home.

The Most Common Forms of Workplace Surveillance

Workplace monitoring can take many forms, ranging from relatively standard practices to highly invasive measures. The main forms highlighted by the ExpressVPN research include:

– Email and Chat Monitoring – 36 per cent of companies track employees’ emails, while 28 per cent monitor internal chat logs. This means that even private conversations between colleagues on work devices may not be as private as employees think.

– Keystroke Logging – 15 per cent of businesses record keystrokes, capturing exactly what employees type, including passwords and personal messages.

– Real-Time Screen Monitoring – More than a quarter (27 per cent) of employers actively view employees’ screens, allowing them to see what is being worked on in real time.

– Location Tracking – 21 per cent of businesses use GPS to monitor where employees are working from, raising concerns about whether staff members are being tracked outside of work hours.

The Ethical and Legal Debate

Workplace surveillance is actually a bit of a legal grey area in the UK. For example, although employers are permitted to monitor employees, there are rules about how they must go about it. The Data Protection Act 2018 and the European Convention on Human Rights provide some safeguards, stating that surveillance must be proportionate, transparent, and conducted for a legitimate business purpose.

However, many employees remain unaware of their rights. For example, ExpressVPN’s research found that 38 per cent of UK workers did not realise their employers were legally allowed to monitor their digital activity. Also, 79 per cent of Brits believe that workplace surveillance needs stricter government regulation to protect employee privacy.

The ethical concerns are even more pressing. Many employees feel that excessive monitoring creates a culture of distrust, reducing morale and increasing stress. If workers constantly feel watched, they are less likely to feel comfortable in their roles, which can lead to lower productivity and higher staff turnover.

How Workplace Surveillance Affects Employees

The impact of surveillance on employees is profound. Nearly half (46 per cent) of UK workers report feeling increased stress due to monitoring, with many saying they are constantly worried about how their actions might be perceived.

ExpressVPN’s research revealed that some employees have even altered their behaviour in response to surveillance. For example, 27 per cent say they take fewer breaks to avoid appearing unproductive.

It seems that workplace surveillance can also take its toll on employees mentally and emotionally. For example, according to the research:

– 23 per cent feel pressured to work longer hours.

– 32 per cent constantly wonder whether they are being watched.

– 14 per cent report feeling dehumanised by the extent of monitoring.

Young Employees Affected The Most

Young employees are particularly affected, with workers aged 18-24 feeling the highest levels of stress over being monitored.

Employees’ Reactions

As revealed by the survey, in response to surveillance, some employees have begun using creative, if questionable, tactics to avoid being flagged for inactivity. For example:

– 18 per cent admit to keeping unnecessary applications open to appear busy.

– 15 per cent schedule emails to send at certain times to give the impression of constant engagement.

– 11 per cent use ‘mouse jigglers’ or keyboard simulation software to avoid being marked as inactive.

These workarounds suggest that rather than boosting productivity, excessive surveillance may actually be encouraging employees to focus more on appearing busy rather than doing meaningful work.

It should be noted that employers are increasingly deploying advanced monitoring tools capable of detecting deceptive behaviours used by employees to get around surveillance. For example, companies like Wells Fargo have identified (and dismissed) employees for simulating keyboard activity to appear productive.

Is Workplace Surveillance Actually Effective?

Employers argue that monitoring increases productivity, but much of the evidence seems to suggest otherwise. While some studies indicate that limited monitoring can help prevent misconduct, excessive surveillance tends to have the opposite effect. Employees who feel watched are more likely to experience burnout, decreased engagement, and ultimately lower performance.

For example, a study by the Austrian research group Cracked Labs found that overly aggressive surveillance can lead to a toxic work environment, where employees feel like they are constantly being scrutinised. This, in turn, leads to lower morale and higher staff turnover, which can cost businesses more in the long run.

The Future of Workplace Surveillance

With AI and advanced analytics becoming more sophisticated, workplace monitoring is only set to expand. Some companies are already using AI-powered surveillance to track everything from facial expressions during video calls to time spent away from a keyboard.

However, the backlash is growing. Employees are increasingly demanding transparency and greater legal protection. If businesses fail to strike a balance between oversight and trust, they risk creating a workforce that feels resentful, stressed, and ultimately disengaged.

What Does This Mean For Your Business?

While workplace surveillance is often justified by employers as a necessary tool for maintaining productivity and security, the reality may be more complex. The evidence suggests that while some level of monitoring may help prevent misconduct, excessive surveillance can backfire, leading to stress, disengagement, and resentment among employees. Instead of fostering a culture of productivity, it can create an environment of fear and mistrust, where workers are more focused on appearing active rather than doing meaningful work.

The increasing reliance on monitoring technology, particularly in remote and hybrid work settings, appears to reveal a fundamental lack of trust between employers and employees. This lack of trust, rather than improving performance, is more likely to damage morale and increase staff turnover. The findings from ExpressVPN’s research make it clear that many employees feel dehumanised and pressured under constant scrutiny, with younger workers being the most affected. When employees feel like they are being watched at every moment, the psychological toll can be significant, affecting their well-being and ultimately their performance.

While UK law does allow workplace monitoring for legitimate business purposes, the rules surrounding transparency and proportionality are not always strictly enforced. The fact that nearly four in ten employees are unaware of their rights in this regard suggests a concerning lack of clarity and communication. This is why there is growing demand for stronger regulations to ensure that workplace surveillance is conducted fairly and with clear boundaries.

For businesses, the challenge lies in striking the right balance. Employers should really weigh the benefits of monitoring against the potential negative consequences. Surveillance should ideally be used as a tool to support productivity, not as a mechanism of control that erodes trust and morale. Transparency is key. When employees understand why monitoring is in place, how data is being used, and what safeguards exist, they are more likely to accept it as a legitimate part of their working environment rather than as an invasive overreach.

The future of workplace surveillance is likely to be shaped by advancements in AI and monitoring technology, but also by the growing pushback from employees and privacy advocates. If businesses fail to recognise the risks of excessive surveillance, they may find themselves facing higher attrition rates, lower engagement, and potential legal challenges. The key takeaway from all of this is really that trust and productivity go hand in hand. If employers truly want a motivated and efficient workforce, they may wish to focus less on surveillance and more on creating a workplace culture built on transparency, fairness, and mutual respect.

Tech News : First Video Call via Satellite in No-Signal Zone

Vodafone has successfully conducted the world’s first satellite-enabled video call using a standard 4G/5G smartphone from a location devoid of terrestrial mobile coverage.

The Call

Vodafone has reported that recently (the exact date has not been specified), an engineer from the company, Rowan Chesmer, initiated a video call from a remote mountainous area in mid-Wales, a region historically lacking mobile broadband access, i.e. one with ‘not-spots’. Using a standard Android smartphone, Chesmer connected directly to a Low Earth Orbit (LEO) satellite operated by AST SpaceMobile, a partner of Vodafone. The call was received by Vodafone Group Chief Executive Margherita Della Valle at the company’s UK headquarters in Newbury, Berkshire. This event was further distinguished by the presence of British astronaut Tim Peake, who joined Della Valle to commemorate the achievement.

Vodafone is keen to highlight the call as being a milestone that could signify a significant leap towards universal connectivity, potentially bridging the digital divide in remote and underserved regions.

The Mechanism

The success of this endeavour hinges on the integration of standard smartphones with LEO satellites. Unlike traditional satellite phones, which are often bulky and require specialised equipment, Vodafone’s approach allows regular smartphones to connect directly to satellites without the need for additional hardware. The process involves the smartphone communicating with the satellite, which then transmits data to and from a ground-based relay station. This relay station is connected to Vodafone’s terrestrial network, facilitating seamless communication between the satellite and ground infrastructure.

Implications for Vodafone Users

This technological advancement promises to eliminate mobile coverage ‘not-spots’, i.e. the areas where traditional mobile signals are unavailable. For Vodafone users, this means the potential for uninterrupted connectivity, even in the most remote locations. The service aims to mirror the experience of existing 4G and 5G networks, enabling users to make video calls, access the internet, and use online messaging services without any noticeable difference. Importantly, users will not need to invest in specialised devices (their existing smartphones will suffice).

The Projected Rollout Timeline

While the initial test was successful, Vodafone says it plans to conduct further evaluations throughout the spring. The company says it’s aiming to progressively introduce the direct-to-smartphone broadband satellite service commercially in markets across Europe later this year and during 2026. It hopes that this phased rollout approach will ensure the technology is robust and reliable before widespread deployment.

The Broader Impact on the Telecommunications Industry

Vodafone’s achievement sets a new benchmark in the telecommunications sector, highlighting the feasibility of integrating satellite connectivity with standard mobile devices. This development is likely to prompt other mobile operators to explore similar technologies to enhance their coverage and service offerings. Notably, companies such as AT&T and Verizon have also partnered with AST SpaceMobile to develop satellite-based mobile broadband services, indicating a broader industry trend towards leveraging satellite technology for comprehensive coverage.

What’s Been Said About It?

All the key players at Vodafone and its partners have been keen to highlight the significance of this milestone and what it could mean. For example, Margherita Della Valle, Vodafone Group Chief Executive, said: “Vodafone’s job is to get everyone connected, no matter where they are” and that “This will help to close the digital divide, supporting people from all corners of Europe to keep in touch with family and friends, or work, as well as ensuring reliable rural connectivity in an emergency”.

UK Astronaut Tim Peake has also reflected on Vodafone’s achievement, saying: “Having spent six months on the International Space Station, I can fully appreciate the value in being able to communicate with family and friends from remote and isolated locations. I am delighted to join Vodafone and AST SpaceMobile in this significant breakthrough.”

Abel Avellan, Founder, Chairman, and CEO of AST SpaceMobile, highlighted the collaborative effort involved, saying: “This historic milestone marks another significant step forward in our partnership with Vodafone, a long-time investor in AST SpaceMobile and a key technology partner. Together, we have achieved several world firsts in space-based broadband connectivity.”

Technical Specifications and Capabilities

The satellite system employed in the test used AST SpaceMobile’s BlueBird satellites, which operate in Low Earth Orbit at approximately 500 km above the Earth’s surface. This proximity allows for lower latency and faster data transmission compared to traditional geostationary satellites. The system is designed to provide peak data transmission speeds of up to 120 Mbps, supporting a full mobile broadband experience. Also, the technology employs beamforming techniques to direct radio signals precisely, enhancing speed and minimising interference.
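The latency benefit of a 500 km orbit can be sanity-checked with simple physics. The sketch below computes a lower bound from signal propagation alone, assuming a round trip of roughly four traversals of the orbital altitude (phone up to satellite, down to the ground relay, and back); real-world latency also includes processing and routing delays, so observed figures will be higher.

```python
SPEED_OF_LIGHT_KM_S = 299_792.458

def min_round_trip_ms(altitude_km: float) -> float:
    """Lower bound on round-trip propagation delay via a relay at `altitude_km`."""
    return 4 * altitude_km / SPEED_OF_LIGHT_KM_S * 1000

leo = min_round_trip_ms(500)      # LEO, the reported BlueBird altitude
geo = min_round_trip_ms(35_786)   # geostationary orbit, for comparison

print(f"LEO (500 km): ~{leo:.1f} ms")     # prints "LEO (500 km): ~6.7 ms"
print(f"GEO (35,786 km): ~{geo:.0f} ms")  # prints "GEO (35,786 km): ~477 ms"
```

Even as a crude lower bound, the two-orders-of-magnitude gap in altitude translates into the difference between latency comparable to terrestrial broadband and the half-second lag familiar from traditional satellite links.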

Why Is Direct-To-Phone Satellite Different?

While some smartphones, such as recent iPhone models, offer emergency SOS features via satellite, these services are limited to text messaging and require clear line-of-sight to the sky. In contrast, Vodafone’s direct-to-phone satellite service aims to provide a comprehensive mobile broadband experience, including video calls and internet access, without the need for specialised equipment or ideal environmental conditions.

Drawbacks

While Vodafone’s satellite-enabled smartphone video call marks a major breakthrough, several challenges remain. Early tests revealed issues with connection quality, including choppy video and noticeable lag due to higher latency and lower bandwidth than traditional networks. Regulatory hurdles could also slow progress, as securing spectrum approvals and navigating complex legal frameworks take time. Also, some critics argue that eliminating mobile ‘not-spots’ may reduce opportunities for solitude and digital disconnection. Astronomers have raised concerns about the increasing number of satellites interfering with space observations and asteroid detection. Lastly, Vodafone has yet to disclose pricing details, raising questions about affordability, as satellite communication has historically been costly. Addressing these issues will be key to ensuring a smooth and responsible rollout of the technology.

Looking Ahead

The successful demonstration of satellite-enabled video calls using standard smartphones could open new avenues for global connectivity. As Vodafone and its partners continue to refine this technology, it holds the promise of connecting underserved and remote regions, thereby enhancing emergency response capabilities and ensuring that users remain connected regardless of their location. However, the widespread adoption of this technology will require substantial investment in satellite infrastructure and careful coordination with existing terrestrial networks to ensure seamless service delivery.

What Does This Mean For Your Business?

Vodafone’s successful satellite-enabled video call marks a significant step towards a future where mobile connectivity is no longer restricted by geography. By demonstrating that a standard smartphone can make a video call via satellite without additional hardware, Vodafone has shown the potential to bridge the long-standing gaps in mobile coverage. For those living in or travelling through remote areas, this could mean reliable access to communication services where traditional networks have struggled to reach. In emergency situations, where connectivity can be a matter of life and death, the ability to make calls and access the internet via satellite could prove invaluable.

However, while the achievement is impressive, there are still challenges to overcome before this technology becomes widely available. Issues with connection quality, including latency and bandwidth limitations, need to be addressed to ensure a seamless user experience. Regulatory approvals and the logistical task of deploying enough satellites to provide consistent coverage remain significant hurdles. Vodafone’s timeline for a full commercial rollout, set for later in 2025 and 2026, suggests that further development and testing are required before the service can be reliably offered to the public.

There are also broader concerns to consider. The expansion of satellite connectivity raises questions about its impact on the night sky, with astronomers warning that an increasing number of satellites could interfere with space observations. Others have questioned whether eliminating mobile ‘not-spots’ entirely is beneficial, as some value the ability to disconnect in remote locations. The issue of cost is another key factor, as Vodafone has yet to confirm how much customers will need to pay to access the service. If pricing is too high, the benefits of satellite connectivity may be limited to specific industries or wealthier consumers rather than the wider public.

Despite these challenges, Vodafone’s innovation signals a shift in how mobile connectivity is delivered. Rather than replacing existing terrestrial networks, this technology is likely to act as a complementary solution, ensuring coverage in places where it has previously been unfeasible. For Vodafone, it cements its position as a leader in mobile network evolution, following on from its historic role in launching the UK’s first mobile call 40 years ago. For the wider industry, it sets a precedent that other telecoms providers will inevitably follow, as companies explore ways to integrate satellite connectivity into their networks.

This breakthrough is essentially a glimpse into the future of mobile communications. While it is not yet a complete solution, it has the potential to reshape the way people stay connected, providing mobile broadband access to areas that have long been left behind. If Vodafone and its partners can overcome the technical and regulatory obstacles, satellite-to-smartphone connectivity could redefine what it means to be online, anytime, anywhere.

Tech News : Google’s New ‘Ask for Me’ AI Feature Calls Businesses For You

Google has unveiled ‘Ask for Me’, an innovative feature that employs AI to phone local businesses on your behalf to obtain information such as service pricing and availability.

What is ‘Ask for Me’?

Google’s ‘Ask for Me’ is essentially designed to streamline the process of obtaining information from local businesses. Instead of making calls yourself, Google’s AI does it for you, inquiring about specific services, their costs, and scheduling options. For example, a user seeking an oil change can use this feature to find out prices and available appointment times from local garages/mechanics.

Rose Yao, Vice President of Search Product at Google, recently announced the feature on X, stating: “New experiment just launched on Search Labs – you can use AI to call businesses on your behalf to find out what they charge for a service & when it’s available, like an oil change ASAP from nearby mechanics. We’re testing right now with auto shops and nail salons, to see how AI can help you connect with businesses and get things done.”

How Does It Work?

To use ‘Ask for Me’, users must first opt into the experiment via Google Search Labs. Once enrolled, using the examples provided by Google’s Rose Yao, when searching for services like “oil change near me” or “nail salons nearby,” an “Ask for Me” prompt appears. Upon selecting this option, users are prompted to provide details about the service they require. For auto services, this includes specifying the type of service (e.g. tyre replacement, factory scheduled maintenance), car details (year, make, model, and mileage), and preferred timing (soonest availability, weekdays only, or weekends only). For nail salon services, users can specify the type of manicure, such as basic, French, or gel. After collecting this information, Google’s AI makes calls to local businesses and, within approximately 30 minutes, provides users with a summary of prices and availability via text message or email.
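Google has not published an API for ‘Ask for Me’, so the following is purely an illustrative sketch of the kind of structured request the flow appears to collect for auto services (service type, car details, preferred timing), with all names and fields being assumptions rather than anything Google has documented:

```python
from dataclasses import dataclass

@dataclass
class AutoServiceRequest:
    """Illustrative only: the details Google's flow reportedly collects."""
    service: str              # e.g. "oil change", "tyre replacement"
    car_year: int
    car_make: str
    car_model: str
    mileage: int
    timing: str = "soonest"   # "soonest", "weekdays", or "weekends"

def summarise(req: AutoServiceRequest) -> str:
    # Compose the kind of summary the AI might relay to a business
    return (f"{req.service} for a {req.car_year} {req.car_make} "
            f"{req.car_model} ({req.mileage} miles), timing: {req.timing}")

req = AutoServiceRequest("oil change", 2019, "Ford", "Focus", 42000)
print(summarise(req))
# → oil change for a 2019 Ford Focus (42000 miles), timing: soonest
```

The point of the sketch is simply that the feature front-loads a structured questionnaire before any call is made, which is what lets the AI ask businesses precise questions.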

Uses Duplex Technology To Talk On The Phone

This feature uses Google’s Duplex technology, an AI system introduced in 2018 that can conduct natural conversations to perform real-world tasks over the phone. Duplex is built on a recurrent neural network using TensorFlow Extended and is designed to sound natural, incorporating speech disfluencies like “um” and “uh” to mimic human conversation. It has been previously used for tasks such as making restaurant reservations and updating business hours in Google Maps.

Benefits of ‘Ask for Me’

The primary advantage of ‘Ask for Me’ is the convenience it offers. By delegating the task of calling businesses to AI, users save time and avoid the potential hassle of phone conversations. This is particularly beneficial for those who are uncomfortable making phone calls or have very busy schedules. It can also gather up-to-date information without the user needing to make multiple calls or wait on hold.

What Could Possibly Go Wrong?

Despite its benefits, here in the real world, ‘Ask for Me’ is likely to present certain challenges. For example, businesses receiving AI-initiated calls may be unprepared for, or uncomfortable with, interacting with an automated system, potentially leading to miscommunication or to the calls being mistaken for spam or scams. To address this, each call begins with the AI announcing itself as an automated system calling from Google on behalf of a user.

Businesses Can Opt Out Of Receiving The AI Calls

Businesses can opt out of receiving these calls through their Google Business Profile settings or by informing the AI during a call. Google has also implemented call quotas to prevent businesses from being overwhelmed by automated calls.
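Google has not described how its call quotas work, but the general idea of capping automated calls per business can be sketched with a simple fixed-window counter. Everything below (class name, limits, keys) is a hypothetical illustration, not Google's implementation:

```python
from collections import defaultdict

class CallQuota:
    """Illustrative fixed-window cap on AI calls per business per day."""

    def __init__(self, max_calls_per_day: int = 3):
        self.max_calls = max_calls_per_day
        self.counts = defaultdict(int)   # (business_id, day) -> calls made

    def try_call(self, business_id: str, day: str) -> bool:
        key = (business_id, day)
        if self.counts[key] >= self.max_calls:
            return False                 # quota exhausted; skip this business
        self.counts[key] += 1
        return True

quota = CallQuota(max_calls_per_day=2)
print(quota.try_call("garage-1", "2025-02-03"))  # True
print(quota.try_call("garage-1", "2025-02-03"))  # True
print(quota.try_call("garage-1", "2025-02-03"))  # False
```

Whatever mechanism Google actually uses, the design goal is the same: a business that many users ask about should not receive an unbounded stream of automated calls.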

Availability

Currently, ‘Ask for Me’ is in an experimental phase and is available to users in the United States who have opted into Google’s Search Labs experiments. As the feature is still being tested, capacity is limited, and users may encounter a waitlist when attempting to use it. Google has not yet announced plans for a broader rollout or availability in other countries.

AI For Everyday Convenience

The introduction of ‘Ask for Me’ reflects Google’s ongoing efforts to integrate AI into everyday tasks, enhancing user convenience. By automating routine interactions, Google is hoping to make information more accessible and reduce the friction associated with obtaining service details from local businesses. As AI continues to evolve, features like ‘Ask for Me’ could become commonplace, transforming how consumers and businesses communicate.

What Does This Mean For Your Business?

While ‘Ask for Me’ presents a compelling vision of AI-enhanced convenience, its success will ultimately depend on how both users and businesses embrace the feature. For users, it undoubtedly removes a common frustration, i.e. having to wait on hold or making multiple calls to get simple information. By allowing AI to handle these interactions, people can focus on more important tasks without the hassle of ringing around for prices and availability. However, the reliance on AI-driven phone calls does raise some concerns, particularly around how businesses will respond to these automated requests.

For businesses, the arrival of AI-initiated calls may be a double-edged sword. On the one hand, it could help streamline inquiries, reducing the number of customer calls staff need to handle directly. On the other, if businesses are unprepared for (or sceptical of) AI calls, they may dismiss them as spam or fail to engage with them properly. There is also the question of whether AI can fully replicate the nuance of human conversation, which is something that could be particularly important for businesses offering bespoke services or those that rely on a more personal touch.

Google’s efforts to mitigate potential drawbacks, e.g. the AI identifying itself at the start of each call and giving businesses the option to opt out, suggest that the company is aware of these concerns. However, whether these measures will be enough to prevent issues remains to be seen. Some businesses may still find AI calls intrusive or inconvenient, particularly if they disrupt workflows or lead to miscommunication.

More broadly, ‘Ask for Me’ signals another step towards AI taking on an increasingly active role in everyday life. It follows a trend where AI is being used not just for search and recommendations, but to interact with the real world on users’ behalf. If successful, it could pave the way for further AI-driven customer service features, potentially reducing the need for direct human interaction in many routine transactions. However, this also raises questions about AI’s role in society and whether reliance on automated interactions could lead to unintended consequences, such as reduced human engagement in business transactions.

For now, ‘Ask for Me’ remains an experiment, limited to certain types of businesses and available only to a select group of users in the US. How it evolves, and whether it expands beyond its current test phase, will depend on feedback from both users and businesses. If widely adopted, it could redefine how people access business information, but if businesses push back, Google may need to rethink its approach. Either way, the feature highlights AI’s growing presence in daily life and raises important discussions about the future of human-AI interactions.

Company Check : Trump Says Microsoft in Talks to Buy TikTok

U.S. President Donald Trump has said that Microsoft is in talks to acquire TikTok, the popular social media platform owned by China’s ByteDance.

In a news conference, President Trump suggested that multiple bidders are interested, stating, “There’s great interest in TikTok” and indicating that a competitive bidding process could be on the horizon. The comments come as the app faces ongoing regulatory pressure in the U.S. due to national security concerns.

TikTok, which has around 170 million users in the U.S., was briefly taken offline earlier this month after a law came into effect requiring ByteDance to either sell its American operations or face an outright ban. However, President Trump intervened by signing an executive order delaying the enforcement of this law by 75 days, allowing negotiations to continue. Microsoft has yet to comment publicly on the talks, while TikTok and ByteDance have also remained silent on the latest developments.

This isn’t the first time Microsoft has been in the frame to acquire TikTok. Back in 2020, the company was one of the leading contenders when Trump, during his first term, sought to force a sale of TikTok’s U.S. operations due to national security concerns. At that time, Oracle and Walmart were also involved in negotiations, though no deal was ultimately reached. Now, with Trump back in office, Microsoft has once again emerged as a potential buyer.

Other parties are also making moves. AI startup Perplexity AI has reportedly submitted a revised bid to merge with TikTok in a deal that would give the U.S. government up to 50 per cent ownership of the newly formed entity. Under the latest proposal, the U.S. government would receive its stake following an initial public offering (IPO) valued at a minimum of $300 billion. Perplexity has revised its offer based on feedback from the Trump administration, suggesting the White House is actively involved in shaping potential acquisition deals.

Trump has previously floated the idea of other high-profile bidders, including Tesla CEO Elon Musk and Oracle Chairman Larry Ellison, taking over TikTok. However, Musk has yet to publicly express any interest, while Oracle’s role remains unclear. Trump recently told reporters, “I’ve spoken to many people about TikTok, but not with Oracle.” Meanwhile, billionaire Frank McCourt has also made a formal offer for the platform.

The next 30 days could be pivotal for TikTok’s future in the U.S., with Trump indicating that discussions are ongoing and a decision is expected soon. With national security concerns cited as being at the heart of the issue, ByteDance remains under pressure to divest its American operations. Whether Microsoft, Perplexity AI, or another bidder ultimately secures control remains to be seen, but the stage is set for a high-stakes battle over one of the world’s most influential social media platforms.

Security Stop Press : GhostGPT AI Chatbot Threat

Cybercriminals are using an AI chatbot called GhostGPT to generate malware, craft phishing emails, and develop exploit code, according to a recent blog post by security firm Abnormal Security.

Unlike mainstream AI tools, GhostGPT has no ethical safeguards, making it a powerful tool for cybercrime.

Available as a Telegram bot, GhostGPT provides instant, uncensored responses and has a strict no-logs policy, making it easy for attackers to use while remaining anonymous. Despite being advertised for “cybersecurity,” it is openly sold on cybercrime forums, with subscriptions starting at $50 per week.

GhostGPT follows a growing trend of AI-powered cybercrime tools, including WormGPT and WolfGPT, which have made attacks more sophisticated and accessible. Security experts warn that by removing ethical restrictions, these chatbots allow criminals to create highly convincing phishing scams, develop malware that evades detection, and exploit software vulnerabilities with minimal effort.

With AI now being used to bypass traditional defences, businesses must adapt their security strategies. Implementing AI-driven threat detection, strengthening email security, and training employees to recognise phishing attempts are essential to mitigating the risks posed by tools like GhostGPT.
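One of the mitigations mentioned above, recognising phishing attempts, is often bootstrapped with simple indicator heuristics before more sophisticated AI-driven detection is layered on top. The sketch below is a minimal, purely illustrative scoring example; the phrases, weights, and the `@example.com` trusted domain are all assumptions, and real email security relies on far richer signals (headers, sender reputation, ML models):

```python
import re

# Illustrative indicators only; real filters use many more signals.
SUSPICIOUS_PHRASES = ["verify your account", "urgent action", "password expired"]

def phishing_score(subject: str, body: str, sender: str) -> int:
    score = 0
    text = f"{subject} {body}".lower()
    # Weight common social-engineering phrases
    score += sum(2 for p in SUSPICIOUS_PHRASES if p in text)
    # Raw-IP links are a classic phishing tell
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        score += 3
    # Hypothetical trusted domain for this sketch
    if not sender.lower().endswith("@example.com"):
        score += 1
    return score

print(phishing_score(
    "Urgent action required",
    "Please verify your account at http://192.168.0.1/login",
    "it@evil.test",
))  # → 8
```

Heuristics like this are cheap to run and easy to explain to staff during training, which is exactly why they complement, rather than replace, AI-driven threat detection.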

Each week we bring you the latest tech news and tips that may relate to your business, rewritten in a ‘techy-free’ style.