Tech-Trivia : Did You Know? This Week in Tech-History …
October 10, 2023 : “A Thousand-Year Name Extension”
Around a thousand years before electronics, a monk called Poppo was asked to prove his faith, because Sweyn Forkbeard was having doubts about his baptism. Legend has it that Poppo proved his faith by holding a red-hot metal glove, yet he remained unharmed.
Sweyn Forkbeard’s father (King Harald Blátǫnn Gormsson) had already converted from paganism to Christianity, although his conversion wasn’t what he was famous for. His place in the history books had been assured by uniting Norway and Denmark in AD 958, quite possibly giving him cause to smile.
If you’d seen his smile, you might have noticed that he had an off-colour dead tooth which, the sagas say, was stained from eating blueberries, for Blátǫnn is Old Norse for “Blue-Tooth”.
A couple of weeks ago on 10th October, Denmark officially authorised the creators of Bluetooth technology to use the name and symbol of the aforementioned Danish King Harald “Bluetooth” Gormsson for a period of 1,000 years.
The modern technological version of Bluetooth was devised and named in 1996 when, in a spirit of collaboration, Intel, Ericsson and Nokia held a meeting to standardise short-range radio communications between electronic devices. The name was only supposed to be temporary until their marketing departments came up with another.
Intel’s Jim Kardach said “King Harald Bluetooth was famous for uniting Scandinavia just as we intended to unite the PC and cellular industries with a short-range wireless link.”
Three Business Take-Aways That Spring To Mind :
1 – Collaboration. Cooperation between related businesses means that, through synergy, the collective can be more productive than the sum of its parts.
2 – Vision. The inventors of Bluetooth had a clear goal : a wireless communication protocol that could connect devices across different industries, brands, and functionalities.
3 – Branding. If the name sticks, use it! In this case, they’re good for the next thousand years, so there’s no rush to change it now.
As an aside, the Bluetooth logo is derived from Viking runes and it’s a bind-rune merging “Hagall” (associated with the forces of nature and the universe, symbolizing disruption, change, and challenges) and “Bjarkan” (associated with growth, rebirth, and new beginnings). Both runes correspond to the initials of the 10th-century Danish king.
That’s something to think about next time you’re going around in circles, trying to connect devices and not going mad.
Security Stop Press : iPhone Permanent Lock-Out Threat
As featured in a recent Wall Street Journal report, iPhone thieves are exploiting a security setting called the ‘recovery key’ to permanently lock owners out of their own iPhones and gain access to their financial apps.
The method, however, hinges first upon ‘shoulder surfing’, i.e. looking over the iPhone user’s shoulder to get the passcode, or finding a way to make the device’s owner share their passcode. Once the passcode has been obtained, the thief uses it to change the Apple ID password, turn off “Find My iPhone”, and reset the 28-character recovery key (which was intended to be a security measure), thereby locking the owner out of their own device.
The advice to iPhone owners is to use Face ID or Touch ID when unlocking the phone in public, to set up an alphanumeric passcode that would be very difficult for thieves to figure out, to consider using the iPhone’s Screen Time setting to set up a secondary password, and to regularly back up the device via iCloud or iTunes.
Sustainability-in-Tech : Giant Solar Space Farm By 2035
Oxfordshire-based technology firm Space Solar says that giant solar panel farms could be in orbit and operational above the Earth by 2035.
The Challenge
There are significant energy and environmental challenges facing everyone, including the fact that global electricity demand is set to double by 2050. This, together with the requirement to replace fossil-fuel reliance with new affordable, continuous, sustainable, flexible, and green energy generation technologies, means that the world faces some major challenges in meeting the Net Zero goal.
Space-Based Solar
Space Solar believes that its space-based solar power idea is a credible answer to these challenges and should take the form of 2km-long farms of solar panels, orbiting the earth and sending energy to receivers on earth in a similar way to how satellite broadband operates.
Advantages Over Ground-Based Solar
Some of the main advantages of having the solar panels in space are that there’s no footprint/no space taken up on Earth (apart from the receivers). Additionally, there can be a constant (24/7) supply of clean solar power from space that is unaffected by the weather, seasons, or time of day. Furthermore, in space, solar panels could produce much more renewable energy than terrestrial equivalents for the reasons just given.
The lack of atmosphere and weather means that the sun’s rays are around ten times stronger in space than on earth. In fact, it’s been estimated that space-based solar would use half the land area of terrestrial solar farms (it would still need receivers), and one-tenth of the area of offshore wind farms but would produce 13 times more renewable energy.
European Space Agency (ESA) Plan
The idea of space-based solar has already received an endorsement in the form of the European Space Agency (ESA) unveiling its own plan for a space-based solar farm 36,000 km above the Earth. Announced last year, its SOLARIS proposal was intended as a way to test the feasibility of using giant solar panels to send solar energy (as supposedly ‘safe’ microwaves) to collecting ‘rectennas’ on Earth’s surface, so that Europe could make an informed decision in 2025 on whether to proceed with a space-based solar power programme in the future (and to ensure that Europe becomes a key player).
The UK government is also reported to be investing £5m in an international project called CASSIOPeiA, aimed at studying space-based solar power.
Viable Technology
Space Solar and the European Space Agency (ESA) both believe that the technology appears to be viable (as confirmed by independent government-led studies), and with the help of re-usable space launches could be economically viable too. The company says its goal is to be able to “deliver 20 per cent of Earth’s energy supply using 600 satellites”.
Just 12 Years
Space Solar believes that its space solar farms will be ready by 2035, saying on its website: “In 12 years, Space Solar will deliver an affordable, scalable and fully renewable new baseload energy technology”, adding that this will “create a safer and more secure world where clean energy is available to everyone, for the benefit of all life on earth”.
Isn’t It Getting Crowded Up There?
It’s estimated that there are over 3,300 operational satellites orbiting Earth at any one time, as well as 128 million pieces of debris smaller than 1 cm, around 900,000 pieces between 1 and 10 cm, and around 34,000 pieces larger than 10 cm. For large space infrastructures like orbiting solar farms, debris mitigation and protection measures would, therefore, be a crucial consideration.
What Does This Mean For Your Organisation?
The promise of viable nuclear fusion still appears many years away and the need to decarbonise our energy sources is becoming increasingly urgent.
Replacing fossil fuels with a sustainable and affordable clean alternative such as space-based solar must surely appeal as one of the cleanest ideas.
Plenty of room up there (provided space junk can be avoided), together with the promise of 24/7 supplies being conveniently beamed to Earth from solar farms said to produce 13 times more renewable energy than Earth-bound versions, does indeed sound attractive.
There seems to be some consensus that it is technically (and hopefully economically) viable and if, as Space Solar believes, it could be ready in 12 years, this could be one way to plug the gap in clean energy requirements before nuclear fusion reaches viability. Space-based solar must be as close to zero-carbon (apart from the rocket launches) as you can get and, if adopted at scale, could aid the electrification of countries around the world, change the energy industry, change fossil fuel industries, and potentially boost many of the world’s economies.
Space-based solar could, therefore, not only help us to take a step closer in the journey to meeting Net-Zero global targets but could provide the world with a safe and effective way to harness the natural energy of the sun like never before.
It’d probably be wise not to get in the path of the microwaves though.
Tech Tip – How To Get A Full Long Page Screen Capture In Chrome
If you’d like to capture long web pages in their entirety, e.g. for use in documentation, presentations, or competitor analysis, Google Chrome has a lesser-known but built-in way of doing this. Here’s how it works:
– Go to the web page you’d like to capture.
– Press Ctrl + Shift + I (or Cmd + Option + I on Mac) to open Developer Tools (usually docked on the right-hand side), then Ctrl + Shift + P (or Cmd + Shift + P on Mac) to open the Command Menu.
– In the search bar at the top (next to ‘Run >’) type “screenshot” and select “Capture full size screenshot”.
– The screenshot will be saved in your ‘Downloads’ folder as a PNG file.
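If you need to capture full-length pages regularly (e.g. for ongoing competitor analysis), the same capture can be scripted. Below is a minimal sketch using the Playwright library for Python, which drives Chromium’s full-page screenshot capability; the URL and output filename are placeholders, and you would first need to run pip install playwright followed by playwright install chromium.

```python
# Minimal sketch: automating a full-page screenshot with Playwright for Python.
# The URL and output filename below are placeholders for illustration.
from playwright.sync_api import sync_playwright

def capture_full_page(url: str, out_path: str) -> None:
    with sync_playwright() as p:
        browser = p.chromium.launch()                    # headless Chromium by default
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")         # wait for the page to finish loading
        page.screenshot(path=out_path, full_page=True)   # capture the entire scrollable page
        browser.close()

if __name__ == "__main__":
    capture_full_page("https://example.com", "full_page.png")
```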
Featured Article : Bots To Bots : Google Offers Protection From AI-Related Lawsuits
Google Cloud has announced in a blog post that if customers are challenged on copyright grounds through using its generative AI products (Duet AI), Google will offer limited indemnity and assume responsibility for the potential legal risks involved.
Why?
With the many different generative AI services (such as AI chatbots and image generators) being powered by back-end neural networks / Large Language Models (LLMs) that have been trained using content from many different sources (without consent or payment), businesses that use their outputs face risks. For example, content creators like artists and writers may take legal action and seek compensation where LLMs have used their work for training and, as a result, appear to copy their style in their output, raising potential issues of copyright, lost income, devaluation of their work, and more. Real examples include:
– In January this year, illustrators Sarah Andersen, Kelly McKernan, and Karla Ortiz filing a lawsuit against Midjourney Inc, DeviantArt Inc (DreamUp), and Stability AI, alleging that the text-to-image platforms have used their artworks, without consent or compensation, to train their algorithms.
– In February this year, Getty Images filing a lawsuit against Stability AI, alleging that it had copied 12 million images (without consent or permission) to train its AI model.
– Comedian Sarah Silverman joining lawsuits (in July 2023) accusing OpenAI and Meta of training their algorithms on her writing without permission.
– GitHub facing litigation over accusations that it has scraped artists’ work for its AI products.
– Microsoft, Microsoft’s GitHub, and OpenAI facing a lawsuit over alleged code copying by GitHub’s Copilot programming suggestion service.
Although all these relate to lawsuits against the AI companies themselves and not their customers, the AI companies realise that this is also a risky area for customers because of how their AI models have been trained and where they could get their outputs from.
What Are The AI Companies Saying In Their Defence?
Examples of the kinds of arguments that AI companies accused of copyright infringement are using in their defence include:
– Some AI companies argue that the data used to train their models falls under the principle of “fair use.” Fair use is a legal doctrine that promotes freedom of expression by allowing the unlicensed use of copyright-protected works in certain circumstances. The argument here is that the vast amount of data used to train models like GPT-4 (which powers ChatGPT) is processed in a transformative manner, meaning the output generated is distinct and not a direct reproduction of the original content.
– Another defence revolves around the idea that AI models, especially large ones, aggregate and anonymise data to such an extent that individual sources become indistinguishable in the final model. This could mean that, while a model might be trained on vast amounts of text, it doesn’t technically “remember” or “store” specific books, articles, or other content in a retrievable form.
– Yet another counter-argument from some AI companies is that, while an AI tool has the ‘potential’ for misuse, it is up to end-users to use it responsibly and ethically. Because these companies often provide guidelines and terms of service that outline acceptable uses of their technology, and actively try to discourage/prevent uses that could lead to copyright infringement, they can argue that they are (ostensibly) encouraging responsible use.
Google’s Generative AI Indemnification
Following Microsoft’s September announcement that it would defend its paying customers if they faced any copyright lawsuits for using Copilot, Google has just announced its own AI indemnification protection for its (pay-as-you-go) Google Cloud customers. Google says that since it has embedded the always-on ‘Duet AI’ across its products, it needs to put its customers first and, in the spirit of “shared fate”, it will “assume responsibility for the potential legal risks involved.”
A Two-Pronged Approach
Google says it will be taking a “two-pronged, industry-first approach” to this indemnification. This means that it will provide indemnity for both the training data used by Google for generative AI models, and for the generated output of its AI models – two layers of protection.
In relation to the training data, which has been a source of many lawsuits for AI companies and could be an area of risk for Google’s customers, Google says its indemnity will cover “any allegations that Google’s use of training data to create any of our generative models utilised by a generative AI service, infringes a third party’s intellectual property right.” For business users of Google Cloud and its Duet AI, this means they’ll be protected against third parties claiming copyright infringement as a result of Google’s use of training data.
In relation to Google’s generated output indemnity, Google says it will apply to Duet AI in Google Workspace and to a range of Google Cloud services which it names as:
– Duet AI in Workspace, including generated text in Google Docs and Gmail and generated images in Google Slides and Google Meet.
– Duet AI in Google Cloud including Duet AI for assisted application development.
– Vertex AI Search.
– Vertex AI Conversation.
– Vertex AI Text Embedding API / Multimodal Embeddings.
– Visual Captioning / Visual Q&A on Vertex AI.
– Codey APIs.
Google says the generated output indemnity will mean that customers will be covered when using the above-named products against third-party IP claims, including copyright.
One Caveat – Responsible Practices
The one caveat that Google gives is that it won’t be able to cover customers where they have intentionally created or used generated output to infringe the rights of others. In other words, customers can’t expect Google to cover them if they ask Duet AI to deliberately copy another person’s work/content.
The Difference
Google says the difference between its AI indemnity protection and that offered by others (e.g. Microsoft), is essentially that it covers the training data aspect and not just the output of its generative AI tools.
Bots Talking To Each Other?
Interestingly, another twist in the complex and emerging world of generative AI came last week with reports that companies are using “synthetic humans” (i.e. bots), each with characteristics drawn from ethnographic research on real people, in conversations with other bots and real people to help generate new product and marketing ideas.
For example, Fantasy, a company that creates the ‘synthetic humans’ for conversations, has reported that the benefits of using them include both the creation of novel ideas for clients and prompting the real humans included in their conversations to be more creative, i.e. stimulating more creative brainstorming. However, although it sounds useful, one aspect to consider is where the bots may get their ‘ideas’ from, since they’re not able to actually think. Could they potentially use another company’s ideas?
What Does This Mean For Your Business?
Since the big AI investors like Google and Microsoft have committed so fully to AI and introduced ‘always-on’ AI assistants to services for their paying business customers (thereby encouraging them to use the AI without being able to restrict all the ways it’s used), it seems right that they’d need to offer some kind of cover, e.g. for any inadvertent copyright issues.
This is also a way for Google and Microsoft to reduce the risks and worries of their business customers (customer retention). Google, Microsoft, and other AI companies have also realised that they can feel relatively safe in offering indemnity at the moment as they know that many of the legal aspects of generative AI’s outputs and the training of its models are very complex areas that are still developing.
They may also feel that taking responsibility in this way at least gives them a chance to get involved and have a say in the cases (particularly given their financial and legal might) that will set the precedents guiding the use of generative AI going forward. It’s also possible that many cases could take some time to be resolved due to the complexities of this new, developing, and often difficult frontier of the digital world.
Some may also say that many of the services Google is offering indemnity for could mostly be classed as internal-use services, whilst others may say that the company could be opening itself up to a potential tsunami of legal cases, given the list of services it covers and the fact that not all business users will be versed in the nuances of responsible use in what is a developing area. Google and Microsoft may ultimately need to build legal protection and guidance on acceptable use into the output of their generative AI tools.
As a footnote, it would be interesting to see whether ‘synthetic human’ bots could be used to discuss and sort out many of the complex legal areas around AI use (AI discussing the legal aspects of itself with people, perhaps with lawyers), and whether AI will be used in research for any legal cases over copyright.
Generative AI is clearly a fast developing and fascinating area with both benefits and challenges.
Tech Insight : How A Norwegian Company Is Tackling ‘AI Hallucinations’
Oslo-based startup Iris.ai has developed an AI Chat feature for its Researcher Workspace platform which it says can reduce ‘AI hallucinations’ to single-figure percentages.
What Are AI Hallucinations?
AI hallucinations, sometimes called ‘AI-generated illusions’, are where AI systems generate or disseminate information that is inaccurate, misleading, or simply false. The fact that the information appears convincing and authoritative despite lacking any factual basis means that it can create problems for companies that use the information without verifying it.
Examples
A couple of high-profile examples of when AI hallucinations have occurred are:
– When Facebook / Meta demonstrated its Galactica LLM (designed for science researchers and students) and asked it to draft a paper about creating avatars, the model cited a fake paper from a genuine author working on that subject.
– Back in February, when Google demonstrated its Bard chatbot in a promotional video, Bard gave incorrect information about which telescope first took pictures of a planet outside the Earth’s solar system. Although the mistake happened before a presentation by Google, it was widely reported, resulting in Alphabet Inc losing $100 billion in market value.
Why Do AI Hallucinations Occur?
There are a number of reasons why chatbots (e.g. ChatGPT) generate AI hallucinations, including:
– Generalisation issues. AI models generalise from their training data, and this can sometimes result in inaccuracies, such as predicting incorrect years due to over-generalisation.
– No ground truth. LLMs don’t have a set “correct” output during training, differing from supervised learning. As a result, they might produce answers that seem right but aren’t.
– Model limitations and optimisation targets. Despite advances, no model is perfect. They’re trained to predict likely next words based on statistics, not always ensuring factual accuracy. Also, there has to be a trade-off between a model’s size, the amount of data it’s been trained on, its speed, and its accuracy.
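To make that last point concrete, here is a deliberately over-simplified sketch (not any real model’s code) of greedy next-word prediction: whichever continuation has the highest probability is chosen, and truth is never checked. The prompt and probability table are invented purely for illustration.

```python
# Deliberately over-simplified, illustrative sketch of greedy next-word prediction.
# The probability table is invented; real LLMs learn probabilities over huge vocabularies.
toy_next_word_probs = {
    "The research paper was written by": {
        "Smith": 0.5,    # statistically most common continuation in the (imaginary) training data
        "Nguyen": 0.3,   # the actual author in this hypothetical case, but less likely statistically
        "Okafor": 0.2,
    }
}

def greedy_next_word(prompt: str) -> str:
    """Pick whichever continuation has the highest probability - factual accuracy is never checked."""
    candidates = toy_next_word_probs[prompt]
    return max(candidates, key=candidates.get)

print(greedy_next_word("The research paper was written by"))  # -> "Smith", regardless of the truth
```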
What Problems Can AI Hallucinations Cause?
Using the information from AI hallucinations can have many negative consequences for individuals and businesses. For example:
– Reputational damage and financial consequences (as in the case of Google and Bard’s mistake in the video).
– Potential harm to individuals or businesses, e.g. through taking and using incorrect medical, business, or legal advice (although ChatGPT passed the Bar Examination and business school exams early this year).
– Legal consequences, e.g. through publishing incorrect information obtained from an AI chatbot.
– Adding to time and workloads in research, i.e. through trying to verify information.
– Hampering trust in AI and AI’s value in research. For example, an Iris.ai survey of 500 corporate R&D workers showed that although 84 per cent of workers use ChatGPT as their primary AI research support tool, only 22 per cent of them said they trust it and systems like it.
Iris.ai’s Answer
Iris.ai has therefore attempted to address these factuality concerns by creating a new system built around an AI engine for understanding scientific text. The company developed it primarily for use in its Researcher Workspace platform (to which it’s been added as a chat feature) so that its (mainly large) clients, such as the Finnish Food Authority, can use it confidently in research.
Iris.ai has reported that the system accelerated research on a potential avian flu crisis and that it can essentially save 75 per cent of a researcher’s time (by removing the need to verify whether information is correct or made up).
How Does The Iris.ai System Reduce AI Hallucinations?
Iris.ai says its system is able to address the factuality concerns of AI using a “multi-pronged approach that intertwines technological innovation, ethical considerations, and ongoing learning.” This means using:
– Robust training data. Iris.ai says that it has meticulously curated training data from diverse, reputable sources to ensure accuracy and reduce the risk of spreading misinformation.
– Transparency and explainability. Iris.ai says using advanced NLP techniques, it can provide explainability for model outputs. Tools like the ‘Extract’ feature, for example, show confidence scores, allowing researchers to cross-check uncertain data points.
– The use of knowledge graphs. Iris.ai says it incorporates knowledge graphs from scientific texts, directing language models towards factual information and reducing the chance of hallucinations. The company says this is because this kind of guidance is more precise than merely predicting the next word based on probabilities.
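As a rough illustration of the knowledge-graph idea (and not Iris.ai’s actual implementation), the toy sketch below stores facts as (subject, relation, object) triples and only reports statements that the graph actually contains, rather than whatever wording is statistically most likely:

```python
# Toy sketch of knowledge-graph grounding: answers are restricted to facts the graph contains.
# This is illustrative only and not Iris.ai's implementation; the triples are examples.
knowledge_graph = {
    ("H5N1", "infects", "birds"),
    ("H5N1", "belongs_to", "the influenza A family"),
    ("avian influenza", "is_transmitted_by", "contact with infected birds"),
}

def supported_facts(subject: str) -> list[tuple[str, str, str]]:
    """Return only the triples the graph actually holds for a given subject."""
    return [t for t in knowledge_graph if t[0].lower() == subject.lower()]

for s, relation, o in supported_facts("H5N1"):
    print(f"{s} {relation.replace('_', ' ')} {o}")
```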
Improving Factual Accuracy
Iris.ai’s techniques for improving factual accuracy in AI outputs, therefore, hinge upon using:
– Knowledge mapping, i.e. Iris.ai maps key knowledge concepts expected in a correct answer, ensuring the AI’s response contains those facts from trustworthy sources.
– Comparison to ground truth. The AI outputs are compared to a verified “ground truth.” Using the WISDM metric, semantic similarity is assessed, including checks on topics, structure, and vital information.
– Coherence examination. Iris.ai’s new system reviews the output’s coherence, ensuring it includes relevant subjects, data, and sources pertinent to the question.
These combined techniques set a standard for factual accuracy and the company says its aim has been to create a system that generates responses that align closely with what a human expert would provide.
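To illustrate the general shape of this kind of check (this is not Iris.ai’s WISDM metric, just a simple bag-of-words approximation of the same idea), the sketch below flags a generated answer for human review if it misses expected key concepts or drifts too far from a verified reference answer:

```python
# Illustrative sketch only: a crude stand-in for knowledge mapping plus ground-truth comparison.
# Real systems would use semantic embeddings and curated sources rather than raw word counts.
import math
import re
from collections import Counter

def tokens(text: str) -> Counter:
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def factuality_check(answer: str, ground_truth: str, key_concepts: list[str],
                     min_similarity: float = 0.4) -> dict:
    """Flag answers that miss expected concepts or diverge too far from the reference answer."""
    missing = [c for c in key_concepts if c.lower() not in answer.lower()]
    similarity = cosine_similarity(tokens(answer), tokens(ground_truth))
    return {
        "missing_concepts": missing,
        "similarity_to_ground_truth": round(similarity, 2),
        "flag_for_review": bool(missing) or similarity < min_similarity,
    }

# Hypothetical usage; the reference answer and key concepts would come from trusted, curated sources.
print(factuality_check(
    answer="Avian flu spreads mainly through contact with infected birds.",
    ground_truth="Avian influenza spreads primarily via direct contact with infected birds "
                 "and contaminated environments.",
    key_concepts=["avian", "infected birds", "contact"],
))
```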
What Does This Mean For Your Business?
It’s widely accepted (and publicly admitted by AI companies themselves) that AI hallucinations are an issue that can be a threat to companies (and individuals) who use the output of generative AI chatbots without verification. Giving false but convincing information highlights both one of the strengths of AI chatbots, i.e. how convincingly they can present information, and one of their key weaknesses.
As Iris.ai’s own research shows, although most companies are now likely to be using AI chatbots in their R&D, they are aware that they may not be able to fully trust all outputs, thereby losing some of the potential time savings through having to verify them, as well as facing many potentially costly risks. Although Iris.ai’s new system was developed specifically for understanding scientific text, with a view to including it as a useful tool for researchers who use its own platform, the fact that it can reduce AI hallucinations to single-figure percentages is impressive. Its methodology may, therefore, have gone a long way toward solving one of the big drawbacks of generative AI chatbots and, were it not so difficult to scale up for popular LLMs, it might already have been more widely adopted.
As good as it appears to be, Iris.ai’s new system still can’t solve the issue of people simply misinterpreting the results they receive.
Looking ahead, some tech commentators have suggested that methods like using coding language rather than diverse natural-language data sources, and collaborating with LLM-makers to build larger datasets, may bring further reductions in AI hallucinations. For most businesses right now, it’s a case of finding a balance: using generative AI outputs to save time and increase productivity while being aware that those results can’t always be fully trusted, and conducting verification checks where appropriate and possible.