Tech News : What’s In The New iOS 17 iPhone Update?

With the recent release of the latest update of Apple’s iPhone software, iOS 17, we look at many of the useful new features and their benefits.

What’s Happened? 

Apple recently released a new updated version of its iOS (iOS 17), which contains many new features, security updates and fixes for Apple’s iPhone, iPad, and smartwatch. It has also subsequently released the iOS 17.0.2 update to fix an issue that prevented transferring data directly from another iPhone during setup, and the iOS 17.0.3 update to fix a recently reported overheating issue in Apple’s newly launched iPhone 15. The recently launched iPhone 15 range includes the iPhone 15 Pro with a titanium finish.

The iPhone X, however, and the iPhone 8 and 8 Plus have not been included in the update (although the iPhone 8 and 8 Plus will still receive security updates).

New Features of iOS 17

Some of the key features of note of iOS 17 include:

Upgraded Autocorrect 

An upgraded autocorrect that learns the user’s normal language, allows swearing (and doesn’t substitute with the word “ducking”), reverts corrected text by tapping on the underlined where needed, and can predict full sentences during typing.

Standby Mode – More Like A ‘Hub’ 

The new Standby mode (which can be particularly useful when leaving the phone by the bed) gives a full landscape, hub style display while charging which includes the clock, calendar, weather, photos, chosen widgets, Siri interaction, and more.

Greater Personalisation Through Contact Cards 

Changes to calls and messaging sees Apple introduce greater personalisation through customisable contact cards, whereby users can create their own personalised visual cards (including a photo, text, and customisable colours) that display on the recipient’s phone and in their contacts app when calls are made. This could, of course be very useful in business interactions, e.g. including company information / branding elements in the card, displaying creativity, and creating a memorable identity that stands out.

AirDrop – Seamless Sharing of Cards 

Also, the contact cards aren’t just for viewing. The (seamless) AirDrop function allows users to share cards (like swapping a digital business cards) by bringing two devices close together.

Voicemails Get Live Transcriptions 

Voicemails have also been updated with live transcriptions that allow users to quickly grasp the essence of every message, such as if there’s background noise or if you’re multi-tasking at the time, plus FaceTime allows video voicemails.

Voice Cloning! 

A voice cloning feature allows users to create an audible version of any typed phrase, thereby helping with accessibility and adding a new AI dimension to communications.

Siri Refined 

Siri, Apple’s voice assistant, has also been refined to enable users to adjust Siri’s speaking pace thereby catering to diverse listening preferences, and activation can now happen simply by saying “Siri” rather than “hey Siri.”

Privacy And Security Updated 

Privacy and security, two elements that are particularly important to businesses, have been updated with iOS 17, as users can securely share passwords stored in their iCloud Keychain with trusted individuals. Also, Apple’s Safari browser has fortified its privacy stance by introducing facial recognition for private sessions. Users also get another useful heads-up in the form of receiving alerts before accessing potentially sensitive content.

Photo Recognition

iOS 17’s people album photo recognition also promises to be a helpful feature, e.g. with identifying people in business event photos, favourite people, and even family pets according to Apple.

Food Images – Suggestions 

For those in the food business or needing to find content about food (or simply food and cookery enthusiasts), tapping on a shared food image adds a culinary twist by offering recipe suggestions.

Paying Attention To Mental Wellbeing – Health App Updated 

Particularly since the pandemic, our mental wellbeing has been more in focus and Apple’s health app, traditionally associated with tracking physical activities, now ventures into the realm of mental health. Users can monitor their moods, thereby providing insights into patterns that might indicate anxiety or depression.

iPadOS 17 

With the update, Apple’s iPadOS 17 now has a suite of features tailored for the larger screen. The lock screen has received a lively makeover, allowing users to infuse it with widgets and animated wallpapers, thereby allowing more personalisation and convenience that could help with time-saving and productivity.

Also, the Health app (as highlighted above, with its new mental health focus) debuts on the iPad, sporting a refreshed interface and extensive health data insights.

Multitasking has also received a boost, with users now having the freedom to resize and position apps on the screen, closely mirroring a desktop experience.

WatchOS 10 Too 

With smart wearables now very popular, Apple’s watchOS 10 has also received updates including the integrated apps getting vibrant redesigns that focus on user-friendliness and quick access. For example, directly from the watch face, widgets dynamically update based on several user-specific parameters, ensuring relevant information is just a swipe away.

Choice, in terms of the range of watch faces available has been expanded, e.g. with animated faces like Snoopy and Woodstock, and even a cycling feature that transforms the iPhone into a surrogate bike computer when paired with the watch.

What Does This Mean For Your Business? 

Apple’s iOS 17’s new features, and its new iPhone 15 launch, although marred slightly by the new phone having an overheating problem (plus a radiation-fear-fuelled banning of iPhone 12 sales in France), have given Apple something positive to shout about (and bury any less welcome news).

In the dynamic landscape of UK businesses, where agility and efficiency are paramount, Apple’s iOS 17 looks like offering enhanced productivity and a more sophisticated user experience. The introduction of customisable contact cards in the phone app, for example, offers businesses a modernised touchpoint, facilitating more personalised and streamlined digital communications with clients and partners. The innovative live transcription of voicemails allows businesses to rapidly digest essential information, potentially optimising response times and decision-making processes.

Also, the significant (and always welcome) advancements in privacy and security, including the ability to securely share iCloud Keychain passwords and safeguard private browsing sessions with facial recognition, promise to embolden businesses with heightened digital safety, hopefully helping to ensure that confidential business data remains uncompromised.

AI is making inroads everywhere it seems, and the photo recognition’s intuitive capabilities may be particularly useful in sectors like marketing and retail, enabling businesses to better categorise visuals and tailor marketing strategies.

The mental health tracking in the health app underscores a broader shift towards corporate well-being, allowing businesses to foster a more supportive and aware work environment.

Meanwhile, iPadOS 17’s multitasking enhancements echo the needs of dynamic enterprises, making workflow management and multitasking more like a desktop experience, thereby potentially aiding operational efficiency.

Ultimately, the suite of features presented in iOS 17, and its counterparts for iPad and Watch, could enhance the operational, communicative, and strategic dimensions of UK businesses. Apple is keen to show its commitment to the UK (e.g. with its 500,000 sq ft, six-story space inside Apple Battersea Power Station) and its contribution to the economy (claiming it supports more than 550,000 jobs across the country), and although most businesses use Microsoft rather than Apple products, Apple’s reputation for usability, security, and quality in the UK is likely to be enhanced by the iOS 17 update’s new features.

Tech News : TikTok Trend : AI-Enhanced Profile Photos For LinkedIn Job Seekers

It’s been reported that a TikTok video has started a trend of people using AI to enhance their appearance in their LinkedIn profile photos with a view to improving their chance of getting a job via the platform.

The TikTok Video 

The short TikTok video that’s been attributed to inspiring the trend was posted during the summer and has since been watched more than 50 million times. The video shows the face of a young woman being enhanced by AI and refences the Remini AI photo and video enhancer app.

Remini 

The Remini app, which claims to have 40 million monthly active users, says that it uses “innovative, state-of-the-art AI technology to transform your old photos into HD masterpieces” and that using its app you can “Turn your social media content into professional-grade images that engage your audience”.

By uploading 8 to 10 selfies (from different angles), the app offers generative AI so users can create hyper-realistic photos or alter ego versions of themselves or can enhance “ordinary” photos of themselves. The app lets users enhance the detail, and adjust the colour, face glow, background, and other details to create a more flawless look and improve photos, e.g. for use on social media profiles.

Why? 

With so much competition in the job market for young adults (among whom the AI photo trend is most popular), and with others having access to the same technology, it may seem that enhancing a photo (within reason) to get a competitive edge seems fair to many, particularly if it’s easy and cheap to do (as it can be with AI tools).

Also, research has shown that better profile photos can yield positive results in the labour market. For example, the results of a 2016 research study by Ghent University (Belgium) found that employment candidates with the most favourable Facebook profile picture received around 21 per cent more positive responses to their application than those with the least favourable profile picture, and that their chances of getting an immediate interview invitation differed by almost 40 per cent.

Psychology

In terms of human psychology, it’s known that people tend to form more favourable judgments of individuals who appear more attractive or have a better photographic representation of themselves due to a combination of psychological factors. These include:

– The psychology of first impressions. Grounded in our instinctual ability to quickly gauge and categorise new information, this trait that was historically essential for survival. Seeing an enhanced photo, within seconds, could potentially appeal to this trait and lead to an employer making a more positive judgement about trustworthiness, competence, and likability.

– The ‘Halo Effect,’ which is a cognitive bias that leads us to assume that individuals possessing one positive trait (e.g., physical attractiveness in a photo) must also possess other desirable qualities, even when no evidence supports these assumptions.

– Social Comparison Theory, which suggests that people tend to evaluate themselves by comparing themselves to others. This could mean that when a person’s photo exudes attractiveness, viewers may subconsciously compare themselves and feel admiration or envy, thereby influencing their judgments.

– Our human tendency of ‘confirmation bias’ means that we seek out and interpret information that aligns with our existing beliefs or stereotypes. In other words, if we believe that attractive people are more successful or competent, we may selectively notice and emphasise information in the photo that confirms this belief.

– Theories of ‘Psychological Attraction’ could also mean that a positive and happy looking profile photo could lead to an employer making a more favourable evaluation by associating the positive feelings with the person’s image.

– Other possible psychological influences that could result from an enhanced profile photo could potentially include evolutionary psychology. For instance, we may subconsciously favour those who appear more attractive as potential mates or allies, and cultural or social Influences. For example, cultural and societal norms play a significant role in shaping our perception of beauty, and a profile photo that displays popular beauty ideals could play to the biases of a potential employer looking at a profile photo.

Why Use Apps Like Remini? 

Apps such as Rimini offer many benefits for young adults (or anyone) looking to get a high quality, enhanced photo for a LinkedIn profile photo. For example:

– They’re cheap. Using an AI app (perhaps on a free trial basis) is less expensive than using professional photographic services, plus they don’t require any of the expensive equipment such as lighting, studio hire, etc.

– They’re fast, require minimal effort, and offer a better chance of satisfaction for the user. From just a few selfie uploads, with no need for any photographic knowledge or professional input or equipment, users can get great results in minutes with minimal difficulty.

– They produce high quality, professional looking results.

– They can be used on-demand and offer flexibility. For example, users can virtually try out different styles and looks that could even influence their own real look or could be used as a kind of split testing of response to their profile.

Other Apps Also Available 

It’s worth pointing out that Remini is not the only such AI photo/video enhancing app available. For example, others include Snapseed, iMyFone UltraRepair, VSCO, Pho.To, PicsArt, Photo Wonder, Pixlr, and many more.

Challenges

Obviously, choosing to present a photo that is not a true representation of yourself with the intention of using it to get a job could have its challenges. For example:

– LinkedIn and similar platforms are professional networks where credibility is essential. If you meet someone in person or on a video call and they realise you don’t look like your profile photo, it can set a negative first impression. They might question your authenticity in other areas if you’re willing to misrepresent your appearance.

– Integrity is paramount in professional settings and presenting picture that doesn’t genuinely represent you might be seen as a breach of trust or even deceptive. This perception could, of course, impact your relationships with potential employers, colleagues, or clients.

– Relying on an AI-enhanced image can also have psychological implications. It may suggest that you’re not confident in presenting your true self, which could translate to lower self-esteem or self-worth over time.

– Employers / employment agencies are likely to be more interested in experience and qualifications rather than appearance and also may be wise to the fact that candidates may be using AI-enhanced photos.

– AI-enhanced images, especially those overly refined, can sometimes be clearly identified as modified which could lead people to think you’re hiding something or are overly focused on superficial aspects.

– There could be cultural and ethical implications. For example, in some cultures or industries, authenticity and honesty are valued above all else. Misrepresenting yourself, even in something as seemingly trivial as a profile photo, could be deemed as unethical or unprofessional.

– While the intention behind using an enhanced photo might be to increase job opportunities, it might actually have the opposite effect. If employers or recruiters sense any deceit, they might choose not to engage with you.

– Using AI-enhancement tools, especially those online, could pose a risk to your privacy. There’s always a chance your photos might be used without your consent or knowledge.

What Does This Mean For Your Business?

Appearances are, of course, important in first impressions, in professional environments, and where there are certain expected or required appearance and dress codes to adhere to. Also, wanting a professional-looking photo that you can be happy with, that you think shows the best aspects of yourself as a candidate is understandable, as is thinking that it may help you overcome some known biases.

Having a low price/free way to obtain professional photos quickly is also an attractive aspect of these kinds of AI apps. However, a balance is needed to ensure that the photo is not too enhanced or too unlike what a potential employer may reasonably expect to see in front of them should they choose to invite you to interview. An overly enhanced photo could, therefore, prove to be counterproductive.

It should be understood, however, that for most employers and agencies, experience, qualifications, and suitability for the role are far more important than a photo in making fair and objective recruitment decisions. It’s also worth noting that even if a photo did contribute to getting an interview, the face-to-face, in-person interview is a challenge that AI can’t yet help with (yet). That said, many corporate employers are turning to AI to filter job applications, and young people may feel that with this and with other competing applicants potentially using AI to get an edge, so why shouldn’t they?

This story also highlights the challenges that businesses now face from generative AI being widely available, e.g. being used to write applications, emails, and more, as well as risks to security with deepfake based scams. Just as generative AI has helped businesses with productivity, it also presents them with a new set of threats and challenges, and may require them to use AI image-spotting tools as a means of filtering and protection in many aspects of the business, including recruiting, and may highlight why and when, even in a digital world, face-to-face meetings continue to be important in certain situations.

Sustainability-in-Tech : AI Energy Usage As Much As The Netherlands

A study by a PhD candidate at the VU Amsterdam School of Business and Economics, Alex De Vries, warns that the AI industry could be consuming as much energy as a country the size of the Netherlands by 2027.

The Impact Of AI 

De Vries, the founder of Digiconomist, a research company that focuses on unintended consequences of digital trends, and whose previous research has focused on the environmental impact of emerging technologies (e.g., blockchain), based the warning on the assumption that certain parameters remain unchanged.

For example, assuming that the rate of growth of AI, the availability of AI chips, and servers work at maximum output continuously, coupled with chip designer Nvidia supplying 95 per cent of the AI sectors processors, Mr De Vries has calculated that by 2027 the expected range for the energy consumption of AI computers will be of 85-134 terawatt-hours (TWh) of electricity each year.

The Same Amount Of Energy Used By A Small Country 

This figure approximately equates the amount of power used annually by a small country, such as the Netherlands, and half a per cent of the total global electricity consumption. The research didn’t include the energy required for cooling (e.g. using water).

Why? 

The large language models (LLMs) that power popular AI chatbots like ChatGPT and Google Bard, for example, require huge datacentres of specialist computers that have high energy requirements and have considerable cooling requirements. For example, whereas a standard data centre computer rack requires 4 kilowatts (kW) of power (the same as a family house), an AI rack requires 20 times the power (80kW), and a single data centre may contain thousands of AI racks.

Other reasons why large AI systems require so much energy also include:

– The scale of the models. For example, larger models with billions of parameters require more computations.

– The vast amounts training data processed increases energy usage.

– The hardware (powerful GPU or TPU clusters) is energy intensive.

– The multiple iterations of training and tuning uses more energy, as does the fine-tuning, i.e. the additional training on specific tasks or datasets.

– Popular services hosting multiple instances of the model in various geographical locations (model redundancy) increases energy consumption.

– Server overhead (infrastructure support), like cooling and networking, uses energy.

– Millions of user interactions accumulate energy costs, even if individual costs are low (the inference volume).

– Despite optimisation techniques, initial training and model size is energy-intensive, as are the frequent updates, i.e., the regular training of new models to stay state-of-the-art.

Huge Water Requirements Too – Which Also Requires Energy

Data centres typically require vast quantities of water for cooling, a situation that’s being exacerbated by the growth of AI. To give an idea how much water, back in 2019, before widescale availability of generative AI, it was reported (public records and online legal filings) that Google requested (and was granted) more than 2.3 billion gallons of water for data centres in three different US states. Also, a legal filing showed that in Red Oak, just south of Dallas, Google may have needed as much as 1.46 billion gallons of water a year for its data centre by 2021. This led to Google, Microsoft, and Facebook pledging ‘water stewardship’ targets to replenish more water than they consume.

Microsoft, which is investing heavily in AI development, revealed that its water consumption had jumped by 34 per cent between 2021 and 2022, to 6.4 million cubic metres, around the size of 2,500 Olympic swimming pools.

Energy is required to operate such vast water-cooling systems and recent ideas to supply adequate power supplies for the data centre racks and cooling have even includes directly connecting a data centre to its own 2.5-gigawatt nuclear power station (Cumulus Data – a subsidiary of Talen Energy).

Google In The Spotlight

The recent research by Alex De Vries also highlighted how much energy a company like Google would need (it already has the Bard chatbot and Duet, its answer to Copilot) if it alone switched its whole search business to AI. The research concluded that in this situation Google, a huge data centre operator, would need 29.3 terawatt-hours per year, which is the equivalent to the electricity consumption of Ireland!

What Does This Mean For Your Organisation? 

Data centres are not just a significant source of greenhouse gas emissions, but typically require large amount of energy for cooling, power, and network operations. With the increasing use of AI, this energy requirement has also been increasing dramatically and only looks set to rise.

AI, therefore, stands out as both an incredible opportunity and a significant challenge. Although businesses are only just getting to grips with the many benefits that the relatively new tool of generative AI has given them, the environmental impact of AI is also becoming increasingly evident. Major players like Google and Microsoft are already feeling the pressure, leading them to adopt eco-friendly initiatives. For organisations planning to further integrate AI, it may be crucial to consider its environmental implications and move towards sustainable practices.

It’s not all doom and gloom though because while the energy demands of AI are high, there are emerging solutions that may offer hope. Investments in alternative energy sources (such as nuclear fusion) although it’s still in its very early development (it’s only just able to generate slightly more power than it uses) could redefine how we power our tech in the future. Additionally, the idea of nuclear-powered data centres, like those proposed by Cumulus Data, suggest a future where technology can be both powerful and environmentally friendly.

Efficiency is also a key issue to be considered. As we continue to develop and deploy AI, there’s a growing emphasis on optimising energy use. Innovations in cooling technology, server virtualisation, and dynamic power management are making strides in ensuring that AI operations are as green as they can be, although they still aren’t tackling the massive energy requirement challenge.

Despite the challenges, however, there are significant opportunities too. The energy needs of AI have opened the door for economic growth and companies that can offer reliable, low-carbon energy solutions stand to benefit, potentially unlocking significant cost savings.

Interestingly, AI itself might be part of the solution. Its potential to speed up research or optimise energy use positions AI as a tool that can help, rather than hinder, the journey towards a more sustainable future.

It’s clear, therefore, that as we lean more into an AI-driven world, it’s crucial for organisations to strike a balance. Embracing the benefits of AI, while being mindful of its impact, will be essential. Adopting proactive strategies, investing in green technologies, and leveraging AI’s problem-solving capabilities will be key for businesses moving forward.

Tech-Trivia : Did You Know? This Week in Tech-History …

October 23, 2001 : “A Thousand Songs In Your Pocket”

Around this time 22 years ago on October 23 2001, Steve Jobs promised to give people “A Thousand Songs In Their Pocket”. His timing couldn’t have been better because at the time, Apple was primarily known for its computers and was struggling financially.

Arriving eight months following the Macintosh version of iTunes, and lasting 20 years, iPods were discontinued last year (2022) after around 450 million iPods had been sold worldwide. Not bad !

Steve had a canny knack of spotting gaps in the market then filling them with game-changing devices which appear so blindingly obvious in hindsight. He’s been quoted as saying that the digital music players at the time were “big and clunky or small and useless” with user interfaces that were “unbelievably awful”.

So he did something about it, in secret. In fact, the project was so secret that employees working on it couldn’t tell their families about it.

Inspired by the movie “2001: A Space Odyssey”, copywriter Vinnie Chieco proposed the name “iPod”. The phrase “Open the pod bay doors, HAL” from the film, along with the small, white ‘pods’ in the movie, were a reference to this film.

In the first month of 2007, Apple announced an unprecedented quarterly revenue of US$7.1 billion, with iPod sales accounting for almost 50% of that figure. Then, on April 9, 2007, the company reached a milestone by selling its one-hundred millionth iPod, securing its place as the most popular digital music player ever sold.

Some Business Lessons To Consider :

1 – Innovate by Addressing Pain Points: A primary reason for the iPod’s success was Steve Jobs’ ability to understand customer frustrations with existing products.
2 – Build Integrated Ecosystems: The iPod was a critical part of a larger ecosystem. The seamless integration with iTunes software and the iTunes Store made it incredibly easy for users to purchase, manage, and enjoy music.
3 – Joint-Venture With Strategic Partners. When Apple entered into a partnership with HP, it was a move to expand their market presence. At the time, Apple’s market share was predominantly within its loyal customer base, while HP had a broader reach in the PC market and strong relationships with big-box retailers. Apple was then able to tap into a wider demographic, extending its reach to consumers who might not have considered Apple products before.

Steve was brilliant at taking what was already out there and “re-thinking it” with incredible success, which can be modelled.

When will it be your turn to have your own “iPod moment” ?

Tech Tip – Create Shortcuts for Important WhatsApp Chat

If there’s a particular and important chat you access frequently, you can create a shortcut for it on your device’s home screen. Here’s how:

Long-press on the specific chat in the chat list until it’s selected.

Tap on the three dots (top right).

Choose “Add chat shortcut.”

Tap on “Add”.

This will create a shortcut icon on your device’s home screen, so you can save time by accessing the chat directly without opening WhatsApp first.

Featured Article : Safety Considerations Around ChatGPT Image Uploads

With one of ChatGPT’s latest features being the ability to upload images to help get answers to queries, here we look at why there have been security concerns about releasing the feature.

Update To ChatGPT 

The new ‘Image input’ which will soon be generally available to Plus users on all platforms, has just been announced along with a voice capability, enabling users to have a voice conversation with ChatGPT, and the ‘Browse’ feature that enables the chatbot to browse the internet to get current information.

ChatGPT and Other Chatbot Limitations and Concerns 

Prior to the latest concerns about the new ‘Image input’ feature, several concerns limitations about ChatGPT have been highlighted.

For example, ChatGPT’s CEO Sam Altman has long been clear about the possibility that the chatbot is capable of making things up in a kind of “hallucination” in reply to questions. Also, there’s a clear warning on the foot of the ChatGPT’s user account page confirming this saying: “ChatGPT may produce inaccurate information about people, places, or facts.”  

Also, back in March, the UK’s National Cyber Security Centre (NCSC) published warnings that LLMs (the language models powering AI chatbots) can:

– Get things wrong and ‘hallucinate’ incorrect facts.

– Display bias and be “gullible” (in responding to leading questions, for example).

– Be “coaxed into creating toxic content and are prone to injection attacks.” 

For these and other reasons, the NCSC recommends not including sensitive information in queries to public LLMs, and not to submit queries to public LLMs that would lead to issues (if they were they made public).

It’s within this context of the recognised and documented imperfections of chatbots that we look at the risks that a new image dimension could present.

Image Input 

The new ‘Image input’ feature for ChatGPT, which had already been introduced by Google’s Bard, is intended to facilitate the usage the contents of images to better explain their questions, help troubleshoot, or for instance get an explanation of complex graph, or to generate other helpful responses based on the picture. In fact, it’s intended to act in situations (just as in real life), where it may be quicker and more effective to show something as picture of something rather than try and explain it. ChatGPT’s powerful image recognition abilities means that it can describe what’s in the uploaded images, answer questions about them and, even recognise specific people’s faces.

ChatGPT’s ‘Image input’ feature owes much to a collaboration (in March) between OpenAI and the ‘Be My Eyes’ platform which led to the creation of ‘Be My AI’, a new tool to describe the visual world for people who are blind or have low vision. In essence, the Be My Eyes Platform seems to have provided an ideal testing area to inform how GPT-4V could be deployed responsibly.

How To Use It 

The new Image input feature allows users to tap on the photo button to capture or choose an image, and to show/upload one or more images to ChatGPT, and even to using a drawing tool in the mobile app to focus on a specific part of an image.

Concerns About Image Input 

Although it’s obvious to see how Image input could be helpful, it’s been reported that OpenAI was reluctant to release GPT-4V / GPT-4 with ‘vision’ because of privacy issues over its facial recognition abilities, and over what it may ‘say’ about peoples’ faces.

Testing 

Open AI says that before releasing Image input, its “Red teamers” tested it relation to how it performed on areas of concern. These areas for testing give a good idea of the kinds of concerns about how Image input, a totally new vector for ChatGPT, could provide the wrong response or be manipulated.

For example, OpenAI says its teams tested the new feature in areas including scientific proficiency, medical advice, stereotyping and ungrounded inferences, disinformation risks, hateful content, and visual vulnerabilities. It also looked at its performance in areas like sensitive trait attribution across demographics (images of people for gender, age, and race recognition), person identification, ungrounded inference evaluation (inferences that are not justified by the information the user has provided), jailbreak evaluations (prompts that circumvent the safety systems in place to prevent malicious misuse), advice or encouragement for self-harm behaviours, and graphic material, CAPTCHA breaking and geolocation.

Concerns 

Following its testing, some of the concerns highlighted about the ‘vision’ aspect of ChatGPT in tests by Open AI, as detailed in its own September 25 technical paper include:

– Where “Hateful content” in images is concerned, GPT-4V was found to refuse to answer questions about hate symbols and extremist content in some instances but not all. For example, it can’t always recognise lesser-known hate group symbols.

– It shouldn’t be relied upon for accurate identifications for issues such as medical, or scientific analysis.

– In relation to stereotyping and ungrounded inferences, using GPT-4V for some tasks could generate unwanted or harmful assumptions that are not grounded in the information provided to the model.

Other Security, Privacy, And Legal Concerns 

OpenAI’s own assessments aside, major concerns raised by tech and security commentors about ChatGPT’s facial recognition capabilities in relation to the Image input feature are that:

– It could be used as a facial recognition tool by malicious actors. For example, it could be used in some way in conjunction with WormGPT, the AI chatbot trained on malware and designed to extort victims or used generally in identity fraud scams.

– It could say things about faces that that provide unsafe assessments, e.g. about their gender or emotional state.

– Its LLM risks producing incorrect results in potentially risky areas, such as identifying illegal drugs or safe-to-eat mushrooms and plants.

– The GPT-4V model may (as with the text version) give responses (both text and images) that could be used by some bad-actors to spread disinformation at scale.

– In Europe (operating under GDPR) it could cause legal issues, i.e. citizen consent is required to use their biometric data.

What Does This Mean For Your Business? 

This could be a legal minefield for OpenAI and may even pose risks to users, as OpenAI’s many testing categories show. It us unsurprising that OpenAI held back on the release of GPT-4V (GPT-4 with vision) over safety and privacy issues, e.g. in its facial recognition capabilities.

Certainly, adding new modalities like image inputs into LLMs expands the impact of language-only systems with new interfaces and capabilities, enabling the solving of new tasks and providing novel experiences for users, yet it’s hard to ignore the risks of facial recognition being abused. OpenAI has, of course, ‘red teamed’, tested, and introduced refusals and blocks where it can but, as is publicly known and admitted by OpenAI and others, chatbots are imperfect, still in their early stages of development, and are certainly capable of producing wrong (and potentially damaging) responses, while there are legal matters like consent (facial images are personal data) to consider.

The fact that a malicious version of ChatGPT has already been produced and circulated by criminals has highlighted concerns about threats posed by the technology and how an image aspect could elevate this threat in some way. Biometric data is now being used as a verification for devices, services, and accounts, and with convincing deepfake technology already being used, we don’t yet know what inventive ways cyber criminals could use image inputs in chatbots as part of a new landscape of scams.

It’s a fast-moving competitive market, however, as the big tech companies race to make their own chatbots as popular as possible and despite OpenAI’s initial reluctance, in order to stay competitive, it may have felt some pressure to get its image input feature out there now. The functionalities introduced recently to ChatGPT (such as image input) illustrate the fact that to make chatbots more useful and competitive, some lines must be crossed however tentatively, even though this could increase risks to users and to companies like OpenAI.

Each week we bring you the latest tech news and tips that may relate to your business, re-written in an techy free style. 

Archives