Slack Guide: "We will continue to feed AI your data, but don't worry"

Which matters more: personalization or privacy? It seems the company isn't telling you everything.

After users expressed outrage that Slack's privacy principles allowed their data to be used for AI training, the company amended its rules. Previously, the policy permitted analyzing customer data to train AI models unless customers opted out. However, the changes can hardly be described as aimed at protecting user privacy.

Slack claims that customer data never leaves the platform and is not used to train third-party models. According to the company, its machine learning models are used exclusively within the platform to improve channel recommendations, emoji suggestions, and search results. These models do not access the content of direct messages or of private and public channels, which rules out the possibility of storing or reproducing customer data.

In addition, the company uses third-party LLMs for its Slack AI product without storing customer data. These models are hosted on Slack's own AWS infrastructure, which provides additional protection for user data.

At the same time, the updated privacy principles emphasize that customer data is analyzed exclusively to develop non-generative AI models, such as those behind emoji and channel recommendations. A Slack spokesperson said the company's policy has not changed and that the update was intended only to clarify the wording.

Still, this use of data by default raises questions for regulators. To opt out, workspace owners must manually send a request to Slack support, and the processing time for such requests is not specified.

The company explains that using this data helps it better understand user queries, improve autocomplete, and suggest appropriate emojis. Such personalization is only possible by analyzing how users interact with the platform.

Aaron Mauer, one of Slack's engineers, said the company does not train LLMs on customer data. However, the service's terms of use indicate that this option still exists.

Matthew Hodgson, CEO of the Element messenger built on the Matrix protocol, called the use of customers' personal data to train AI "mind-blowing." He noted that the very idea of feeding unencrypted data into AI models with opaque policies is alarming.

It's worth noting that Slack isn't the only company using customer data to train AI models. For example, the social network Reddit recently announced a partnership with OpenAI, providing publicly available forum posts for use by ChatGPT. Still, users of secure enterprise platforms like Slack may be quite surprised to learn that their data is used to train models unless they opt out themselves.

This mini-scandal with Slack clearly shows that users today need to closely monitor how their data is used, especially given the technology industry's growing interest in artificial intelligence and its widespread adoption by companies across very different fields.