LinkedIn, the professional networking giant owned by Microsoft, is under fire as a new lawsuit alleges the platform disclosed the private messages of its Premium customers to train generative AI models without consent.

The lawsuit, filed in California on behalf of Alessandro De La Torre and millions of other Premium subscribers, accuses LinkedIn of breaching contractual promises and violating US privacy laws.

The controversy centres on LinkedIn’s 2024 policy changes, which allowed user data to be used for AI training. While LinkedIn exempted users in jurisdictions with stringent privacy regulations (such as the UK, EU, and Canada) from this practice, US users were automatically enrolled in the data-sharing programme unless they manually opted out. Crucially, the lawsuit alleges that LinkedIn extended this data-sharing to include the contents of private InMail messages, which often contain sensitive personal and professional information.

The lawsuit highlights the stakes for users, stating that these private messages could include “life-altering information about employment, intellectual property, compensation, and other personal matters.” This, the plaintiff argues, breaches the LinkedIn Subscription Agreement (LSA), which explicitly assures Premium customers that their confidential information will not be disclosed to third parties. The complaint further argues that disclosing the contents of private messages violates the US Stored Communications Act, and that LinkedIn’s failure to notify customers of these changes undermines user trust.

LinkedIn has denied the allegations, labelling them “false claims with no merit.” However, for many observers, the platform’s handling of the privacy concerns raised last year casts a shadow over its denials. In August 2024, LinkedIn introduced a setting allowing users to opt out of data-sharing for AI training, but data-sharing was switched on by default, raising questions about informed consent. The platform also quietly updated its privacy policy in September 2024 to cover the use of user data for AI training, with a notable caveat: opting out would not affect data already used to train models.

Some legal commentators have noted that the case could set a significant precedent for how social media platforms and tech companies handle user data in the age of AI. Rafey Balabanian, the plaintiff’s attorney, said: “This lawsuit underscores a growing tension between innovation and privacy,” adding that “LinkedIn’s actions, if proven, represent a serious breach of trust, particularly given the sensitive nature of the information involved.”

The potential fallout for LinkedIn could extend beyond the courtroom. Premium customers, who pay up to $169.99 per month for features such as InMail messaging and enhanced privacy, may reconsider their subscriptions if the allegations prove true. The case also draws attention to the broader question of how companies disclose and manage data for AI development, a concern that has already prompted regulatory scrutiny in regions like the UK and EU. Notably, the UK Information Commissioner’s Office (ICO) had earlier pressed LinkedIn to halt the use of UK user data for AI training, and LinkedIn agreed to do so.

For users, this lawsuit serves as a reminder to scrutinise privacy settings and policies. The plaintiffs seek damages, statutory penalties of $1,000 per affected user, and the deletion of any AI models trained using their data. With LinkedIn facing potential financial and reputational damage, the case could act as a catalyst for greater transparency and accountability in the tech industry. Whether LinkedIn’s alleged actions were an oversight or a deliberate strategy to accelerate AI development, the outcome will help shape the future of user privacy in the digital age.