LinkedIn has suspended the use of UK user data to train its artificial intelligence (AI) models after a regulator raised concerns.
The career-focused social networking site, owned by Microsoft, had quietly opted users around the world into having their data used to train its AI models.
But the Information Commissioner’s Office (ICO) said on Friday that it was “pleased” LinkedIn had confirmed it had paused the use of UK users’ information.
LinkedIn said it welcomed the chance to engage further with the ICO.
“We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its UK users,” said the ICO’s executive director, Stephen Almond.
Many big tech firms, including LinkedIn, are looking to user-generated content on their platforms as a fresh source of data for training AI tools.
“Generative” AI tools, such as chatbots like OpenAI’s ChatGPT or image generators like Midjourney, learn from huge volumes of text and image data.
But a LinkedIn spokesperson told BBC News that the company believes users should have control over their data.
As such, it has given UK users a way to opt out of having their data used to train its AI models.
“We’ve always used some form of automation in LinkedIn products, and we’ve always been clear that users have the choice about how their data is used,” they added.
Social platforms where users post about their lives, or jobs, can provide rich material to help tools sound more natural.
“The reality of where we’re at today is a lot of people are looking for help to get that first draft of that resume… to help craft messages to recruiters to get that next career opportunity,” LinkedIn’s spokesperson said.
“At the end of the day, people want that edge in their careers and what our gen-AI services do is help give them that assist.”
The company says in its global privacy policy that user data will be used to help develop its AI services, and states in a help article that data will also be processed when users interact with tools that offer post-writing suggestions, for example.
This will now not apply to users in the UK, or to those in the European Union (EU), European Economic Area and Switzerland.
Meta and X (formerly known as Twitter) are among platforms that, like LinkedIn, want to use content posted on their platforms to help develop their generative AI tools.
But they have faced regulatory hurdles in the UK and EU, with strict privacy rules placing limits on how and when personal data can be collected.
Meta halted its plans to use UK adults’ public posts, comments and images to train its AI tools in June, following criticism and concerns raised by the ICO.
The company recently began re-notifying UK users of Facebook and Instagram about its plans, and clarified its process for opting out after engaging with the data watchdog.
LinkedIn will now likely face a similar process before it can resume plans to train its tools with UK users’ data.
“In order to get the most out of generative AI and the opportunities it brings, it is crucial that the public can trust that their privacy rights will be respected from the outset,” said the ICO’s Mr Almond.
He said the regulator would “continue to monitor” developers such as Microsoft and LinkedIn to ensure they are protecting UK users’ data rights.