Tech News : Your AI Twin Might Save Your Life

A new study published in The Lancet shows how an AI tool called Foresight, which analyses patient health records over time and creates ‘digital twins’ of patients, could be used to predict future health outcomes.

What Is Foresight?

The Foresight tool is described by the researchers as a “generative transformer in temporal modelling of patient data, integrating both free text and structured formats.” In other words, it’s a sophisticated AI system that’s designed to analyse patient health records over time.

What Does It All Mean? 

A “generative transformer” is a type of machine learning model (the same family as large language models, or ‘LLMs’) that can generate new data based on patterns it has learned from previous data. A “transformer” is a specific kind of model that’s very good at handling sequences of data, such as sentences in a paragraph or a series of patient health records over time (hence “temporal”), i.e. a patient’s electronic health records (EHR).
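To make that concrete, here’s a minimal, hypothetical sketch of next-event prediction over a patient timeline, written in Python with PyTorch. The event vocabulary, model sizes, and architecture below are illustrative assumptions only, not Foresight’s actual code:

```python
# Minimal sketch of next-event prediction over a patient timeline.
# All event names, sizes, and the architecture are illustrative
# assumptions, not the Foresight model's real implementation.
import torch
import torch.nn as nn

# A toy vocabulary of medical "concept" tokens (hypothetical)
vocab = ["<pad>", "visit:GP", "finding:wheeze", "diagnosis:asthma",
         "rx:salbutamol", "visit:A&E"]
stoi = {tok: i for i, tok in enumerate(vocab)}

class NextEventModel(nn.Module):
    def __init__(self, vocab_size, d_model=32, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, token_ids):
        x = self.embed(token_ids)      # (batch, seq_len, d_model)
        x = self.encoder(x)            # contextualise the whole sequence
        return self.head(x[:, -1, :])  # scores for the *next* event
        # (a real system would also use causal masking during training)

# One (untrained) forward pass over a toy patient timeline
model = NextEventModel(len(vocab))
timeline = torch.tensor([[stoi["visit:GP"], stoi["finding:wheeze"],
                          stoi["diagnosis:asthma"]]])
next_event_scores = model(timeline)    # logits over the vocabulary
print(vocab[int(next_event_scores.argmax())])
```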

Unlike other health prediction models, Foresight can use a much wider range of data in different formats. For example, it can draw on everything from medical histories and diagnoses to treatment plans and outcomes, in both free-text formats, such as (unstructured) doctors’ notes or radiology reports, and more structured formats, such as database entries or spreadsheets with specific fields for patient age, diagnosis codes, or treatment dates.
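As a simple illustration of the two formats (the field names and values below are hypothetical examples, not the study’s data):

```python
# Hypothetical examples of the two data formats such a model must ingest.

# Free text: an unstructured clinical note
note = ("Pt presented with wheeze and shortness of breath. "
        "Asthma suspected; salbutamol prescribed.")

# Structured: a database/spreadsheet-style record with fixed fields
record = {
    "patient_age": 42,
    "diagnosis_code": "J45.9",   # ICD-10 code for asthma, unspecified
    "treatment": "salbutamol",
    "treatment_date": "2024-03-18",
}
```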

Why? 

The researchers say the study aimed to evaluate how effective Foresight is at modelling patient data and using it to predict a diverse array of future medical outcomes, such as disorders, substances (e.g. relating to medicines, allergies, or poisonings), procedures, and findings (e.g. relating to observations, judgements, or assessments).

The Foresight Difference 

The researchers say that whereas existing approaches to modelling a patient’s health trajectory focus mostly on structured data and a subset of single-domain outcomes, Foresight can take far more diverse types and formats of data into account.

Also, being an AI model, Foresight can easily scale to more patients, hospitals, or disorders with minimal or no modifications, and like other AI models that ‘learn,’ the more data it receives, the better it gets at using that data.

How Does It Work? (The Method) 

The method tested in the study involved Foresight working in several steps. The Foresight AI tool was tested across three different hospitals, covering both physical and mental health, and five clinicians performed an independent test by simulating patients and outcomes.

In the multistage process, the researchers trained the AI models on medical records and then fed Foresight new healthcare data to create virtual duplicates of patients, i.e. ‘digital twins’. These digital twins could then be used to forecast outcomes relating to a patient’s possible disease development and medication needs, i.e. educated guesses about future health issues, such as illnesses that might occur or treatments that might be needed.
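Conceptually, forecasting with a digital twin amounts to repeatedly asking a trained model “what happens next?” and appending the answer to the timeline. The sketch below, which continues the hypothetical model from earlier, illustrates that loop; it is not the study’s actual procedure:

```python
# Sketch: project a hypothetical future timeline by repeatedly taking
# the model's most likely next event (continues the NextEventModel
# sketch above; illustrative only, not the study's method).
import torch

def simulate_timeline(model, timeline, steps=5):
    for _ in range(steps):
        scores = model(timeline)                     # logits over vocab
        next_id = int(scores.argmax())               # most likely event
        timeline = torch.cat(
            [timeline, torch.tensor([[next_id]])], dim=1)
    return [vocab[i] for i in timeline[0].tolist()]

print(simulate_timeline(model, timeline))  # a projected event sequence
```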

The Findings 

The main findings of the research were that the Foresight AI tool and its digital twins can be used for real-world risk forecasting, virtual trials, and clinical research to study the progression of disorders, to simulate interventions and counterfactuals, and for educational purposes. The researchers said that, using this method, they demonstrated that Foresight can forecast multiple concepts into the future and generate whole patient timelines given just a short prompt.

What Does This Mean For Your Business? 

Using an AI tool that can take account of a wider range of patient health data than other methods, make a digital twin, produce simulations, and forecast possible future health issues and treatments (i.e. whole patient timelines) could have many advantages. For example, as noted by the researchers, it could help medical students to engage in interactive learning experiences by simulating medical case studies. This could help them to practise clinical reasoning and decision-making in a safe environment, as well as helping with ethical training by facilitating discussions on fairness and bias in medicine.

This kind of AI medical prediction could also be useful in helping doctors to alert patients to tests they may need, enabling better disease prevention, as well as helping with issues such as medical resource planning. However, as many AI companies themselves acknowledge, feeding personal and private details (medical records) into AI is not without risk in terms of privacy and data protection. Also, the researchers noted that more tests are needed to validate the model’s performance on long simulations. One other important point to remember is that Foresight is predicting things long into the future for patients and, as such, it’s not yet known how accurate those predictions are.

Following more testing, and as long as issues like security, consent, and privacy are adequately addressed, a fully developed method of AI-based health prediction could prove very valuable to medical professionals and patients, and could create new opportunities in health-related areas and sectors such as fitness, wellbeing, pharmaceuticals, insurance, and many more.

An Apple Byte : Serious Apple Chip Vulnerability Discovered

US researchers have reported discovering a hardware vulnerability inside Apple’s M1, M2, and M3 silicon chips. The unpatchable ‘GoFetch’ is a microarchitectural vulnerability and side-channel attack that reportedly affects a wide range of encryption algorithms, from standard 2,048-bit keys to newer schemes designed to withstand attacks from quantum computers.

This serious vulnerability renders the protection offered by constant-time programming (a coding technique used to mitigate side-channel attacks) useless. It means that applications using GoFetch can trick encryption software into placing sensitive data into the cache, from where it can be stolen.
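For context, ‘constant-time programming’ simply means writing code whose running time doesn’t depend on secret values, so an attacker can’t learn anything by measuring it. A minimal Python illustration of the idea (GoFetch targets much lower-level native code, so this is purely conceptual):

```python
import hmac

def naive_compare(a: bytes, b: bytes) -> bool:
    # Leaky: returns as soon as a byte differs, so the run time
    # reveals how much of the secret an attacker has guessed so far.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_compare(a: bytes, b: bytes) -> bool:
    # Safer: examines every byte regardless of where they differ.
    return hmac.compare_digest(a, b)

print(naive_compare(b"secret", b"seXret"))          # False (early exit)
print(constant_time_compare(b"secret", b"secret"))  # True (fixed time)
```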

Pending any fix advice from Apple, users are recommended to use the latest versions of software and to perform updates regularly. Also, developers of cryptographic libraries should set the DOIT and DIT bits (which disable the DMP on some CPUs) and use cryptographic input blinding. Users are also recommended to avoid hardware sharing to help maintain the security of cryptographic protocols.
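Input blinding means randomising the secret-dependent values before a computation and stripping the randomness out afterwards, so the hardware never operates directly on attacker-predictable data. As a hedged, toy-numbers illustration of RSA blinding (textbook parameters, emphatically not production code):

```python
# Toy illustration of RSA input blinding (textbook numbers, NOT secure).
# Idea: sign (m * r^e) instead of m, then strip the factor r afterwards.
p, q = 61, 53
n = p * q          # 3233
e = 17
d = 2753           # satisfies e*d ≡ 1 (mod φ(n)), φ(n) = 3120

m = 65             # message to sign
r = 7              # random blinding factor, coprime to n

blinded = (m * pow(r, e, n)) % n      # m * r^e  (mod n)
s_blinded = pow(blinded, d, n)        # (m * r^e)^d = m^d * r  (mod n)
s = (s_blinded * pow(r, -1, n)) % n   # remove the blinding factor

assert s == pow(m, d, n)              # same signature as unblinded RSA
print(s)
```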

Security Stop Press : Microsoft’s RSA Key Policy Change

Microsoft is making a security-focused policy change that will see RSA keys shorter than 2048 bits deprecated. RSA is an algorithm used for secure data encryption and decryption in digital communications, and RSA keys are used, for example, to encrypt data for secure communications over an enterprise network.
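For developers wanting to check they meet the new minimum, generating a 2048-bit key is straightforward; here’s a short sketch using the widely used Python ‘cryptography’ package (your own environment and tooling may differ):

```python
# Generate an RSA key pair at the 2048-bit minimum.
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric import rsa

private_key = rsa.generate_private_key(
    public_exponent=65537,   # the standard choice
    key_size=2048,           # shorter keys are being deprecated
)
print(private_key.key_size)  # 2048
```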

However, with shorter RSA encryption keys becoming vulnerable to advancing cryptographic techniques (driven by advances in compute power), Microsoft’s decision to deprecate them is being seen as a way to stop organisations from using what is now considered a weaker method of authentication.

The move by Microsoft will also help bring the industry in line with recommendations from internet standards and regulatory bodies, which banned the use of 1024-bit keys in 2013 and recommended that RSA keys have a length of 2048 bits or longer.

Sustainability-in-Tech : World’s First Bio-Circular Data Centre

French data centre company, Data4, says its new project will create a world-first way of reusing data centre heat and captured CO2 to grow algae, which can then be used to power other data centres and create bioproducts.

Why? 

The R&D project, which sees Data4 working with the University of Paris-Saclay, is an attempt to tackle the strategic challenge of how best to reuse, rather than waste, the large amount of heat produced by data centres. For example, even the better schemes, which use the heat to warm nearby homes, only manage to exploit around 20 per cent of the heat produced.

Also, the growth of digital technology, the IoT, and AI, and the amount of data stored in data centres (growing by 35 per cent per year worldwide), mean that the data centre industry must up its game to reduce its carbon footprint and meet environmental targets.

Re-Using Heat To Grow Algae 

Data4’s project seeks to reuse the excess data centre heat productively in a novel way. The plan is to use the heat, together with some captured CO2, to reproduce a natural photosynthesis mechanism and grow algae. The algae can then be recycled as biomass to develop new sources of circular energy and be reused in the manufacture of bioproducts for other industries (cosmetics, agri-food, etc.).

Super-Efficient 

Patrick Duvaut, Vice-President of the Université Paris-Saclay and President of the Fondation Paris-Saclay, has highlighted how a feasibility study of this new idea has shown that the efficiency of this carbon capture “can be 20 times greater than that of a tree (for an equivalent surface area)”.

Meets Two Major Challenges 

Linda Lescuyer, Innovation Manager at Data4, has highlighted how using the data centre heat in this unique way means: “This augmented biomass project meets two of the major challenges of our time: food security and the energy transition.” 

How Much? 

The project has been estimated to cost around €5 million ($5.4 million), and Data4’s partnership with the university for the project is expected to run for 4 years. Data4 says it hopes to have a first prototype to show in the next 24 months.

What Does This Mean For Your Organisation? 

Whereas other plans for tackling the challenge of excess heat from data centres have involved more singular visions, such as simply using the heat in nearby homes or experimenting with better ways of cooling servers, Data4’s project offers a more unusual, multi-benefit, circular perspective. The fact that it not only utilises the heat to grow algae, but that the algae makes a biomass that can be used to tackle two major world issues in a sustainable way – food security and the energy transition – makes it particularly promising. The method also offers spin-off benefits, e.g. through the manufacture of bioproducts for other industries, and can help the national economies where it’s operated, as well as the environment, by creating local employment and helping to develop the circular economy. Data4’s revolutionary industrial ecology project, therefore, looks as though it has the potential to offer a win/win for many different stakeholders, although there will be a two-year wait for a prototype.

Tech Tip – Use Task Scheduler to Automate Tasks in Windows

Automating routine tasks can save time and ensure that critical operations aren’t overlooked. The Windows Task Scheduler allows you to automate tasks such as daily backups, weekly disk cleanups, off-hours software updates, periodic service restarts, and sending reminder emails for events by setting them to occur at specific times or when certain events happen. Here’s how to use Task Scheduler (a scripted alternative follows the steps below):

– Search for Task Scheduler in the Windows search bar and open it.

– To create a new task, click on Create Basic Task or Create Task for more detailed options.

– Follow the wizard to specify when the task should run and what action it should perform, such as launching a program, sending an email, or displaying a message.

– Once you’ve set up your task, it will run automatically according to your specified schedule or event trigger.
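For the scripted alternative mentioned above, tasks can also be created programmatically by calling Windows’ built-in schtasks.exe; in this Python sketch, the task name and target program are illustrative examples only:

```python
# Create a daily 09:00 scheduled task via Windows' built-in schtasks.exe.
# The task name and target program below are illustrative examples.
import subprocess

subprocess.run(
    [
        "schtasks", "/Create",
        "/TN", "DailyBackup",             # task name (example)
        "/TR", r"C:\Scripts\backup.bat",  # program to run (example)
        "/SC", "DAILY",                   # schedule type
        "/ST", "09:00",                   # start time
        "/F",                             # overwrite if it already exists
    ],
    check=True,
)
```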

Featured Article : Don’t Ask Gemini About The Election

Google has outlined how it will restrict the kinds of election-related questions that its Gemini AI chatbot will return responses to.

Why? 

With 2024 being an election year for at least 64 countries (including the US, UK, India, and South Africa), the risk of AI being misused to spread misinformation has grown dramatically. The problem is compounded by a lack of trust from some countries’ governments (e.g. India) in AI’s reliability. There are also worries about how AI could be abused by adversaries of a country holding an election, e.g. to influence the outcome.

Recently, for example, Google’s AI made the news when its text-to-image tool was found to be overly ‘woke’ and had to be paused and corrected following “inaccuracies.” For example, when Google Gemini was asked to generate images of the Founding Fathers of the US, it returned images of a black George Washington. Also, in another reported test, when asked to generate images of a 1943 German (Nazi) soldier, Gemini’s image generator returned pictures of people of clearly diverse ethnicities (a black and an Asian woman) in Nazi uniforms.

Google also says that its restrictions on election-related responses are being applied out of caution and as part of the company’s commitment to supporting the election process by “surfacing high-quality information to voters, safeguarding our platforms from abuse, and helping people navigate AI-generated content.” 

What Happens If You Ask The ‘Wrong’ Question? 

It’s been reported that Gemini is already refusing to answer questions about the US presidential election, in which President Joe Biden and Donald Trump are the two contenders. If, for example, users ask Gemini a question that falls into its restricted election-related category, they can reportedly expect a response along the lines of: “I’m still learning how to answer this question. In the meantime, try Google Search.” 

India 

With India being the world’s largest democracy (about to undertake the world’s biggest election involving 970 million voters, taking 44 days), it’s not surprising that Google has addressed India’s AI concerns specifically in a recent blog post. Google says: “With millions of eligible voters in India heading to the polls for the General Election in the coming months, Google is committed to supporting the election process by surfacing high-quality information to voters, safeguarding our platforms from abuse and helping people navigate AI-generated content.” 

With its election due to start in April, the Indian government has already expressed its concerns and doubts about AI and has asked tech companies to seek its approval first before launching “unreliable” or “under-tested” generative AI models or tools. It has also warned tech companies that their AI products shouldn’t generate responses that could “threaten the integrity of the electoral process.” 

OpenAI Meeting 

It’s also been reported that representatives from ChatGPT’s developer, OpenAI, met with officials from the Election Commission of India (ECI) last month to look at how OpenAI’s ChatGPT tool could be used safely in the election.

OpenAI advisor and former India head at ‘X’/Twitter, Rishi Jaitly, is quoted (from an email to the ECI that was made public) as saying: “It goes without saying that we [OpenAI] want to ensure our platforms are not misused in the coming general elections”. 

Could Be Stifling 

However, critics in India have said that clamping down too hard on AI in this way could actually stifle innovation and leave the industry suffocated by over-regulation.

Protection 

Google has highlighted a number of measures that it will be using to keep its products safe from abuse and thereby protect the integrity of elections. These include enforcing its policies and using AI models to fight abuse at scale, enforcing policies and restrictions around who can run election-related advertising on its platforms, and working with the wider ecosystem to counter misinformation, for example by working with the Shakti India Election Fact-Checking Collective, a consortium of news publishers and fact-checkers in India.

What Does This Mean For Your Business? 

The combination of rapidly advancing and widely available generative AI tools, popular social media channels, and paid online advertising looks very likely to pose considerable challenges to the integrity of the large number of elections taking place around the world this year.

Most notably, with India about to host the world’s largest election, the government there has been clear about its fears over the possible negative influence of AI, e.g. through convincing deepfakes designed to spread misinformation, or AI simply proving to be inaccurate and/or making it much easier for bad actors to exert an influence.

The Indian government has even met with OpenAI to seek reassurance and help. AI companies such as Google (particularly since its embarrassment over its recent ‘woke’ inaccuracies, and perhaps after witnessing the accusations against Facebook after the last US election and the UK’s Brexit vote) are very keen to protect their reputations and to show what measures they’ll be taking to stop their AI and other products from being misused, with potentially serious results.

Although governments’ fears about AI deepfake interference may well be justified, some would say that following the recent ‘election’ in Russia, misusing AI is less worrying than more direct forms of influence. Also, although protection against AI misuse in elections is needed, a balance must be struck so that AI is not over-regulated to the point where innovation is stifled.

Each week we bring you the latest tech news and tips that may relate to your business, rewritten in a ‘techy-free’ style. 
