SoundCloud appears to have quietly changed its terms of use to allow the company to train AI on audio that users upload to its platform.
As spotted by tech ethicist Ed Newton-Rex, the latest version of SoundCloud’s terms includes a provision giving the platform permission to use uploaded content to “inform, train, [or] develop” AI.
“You explicitly agree that your Content may be used to inform, train, develop or serve as input to artificial intelligence or machine intelligence technologies or services as part of and for providing the services,” read the terms, which were last updated February 7.
The terms have a carve-out for content under “separate agreements” with third-party rightsholders, such as record labels. SoundCloud has a number of licensing agreements with indie labels as well as major music publishers, including Universal Music and Warner Music Group.
TechCrunch wasn’t able to find an explicit opt-out option in the platform’s settings menu on the web. SoundCloud didn’t immediately respond to a request for comment.
SoundCloud, like many large creator platforms, is increasingly embracing AI.
Last year, SoundCloud partnered with nearly a dozen vendors to bring AI-powered tools for remixing, generating vocals, and creating custom samples to its platform. In a blog post last fall, SoundCloud said that these partners would receive access to content ID solutions to “ensure rights holders [sic] receive proper credit and compensation,” and it pledged to “uphold ethical and transparent AI practices that respect creators’ rights.”
A number of content hosting and social media platforms have changed their policies in recent months to allow for first- and third-party AI training. In October, Elon Musk’s X updated its privacy policy to let outside companies train AI on user posts. Last September, LinkedIn amended its terms to allow it to scrape user data for training. And in December, YouTube began letting third parties train AI on user clips.
Many of these moves have prompted backlash from users who argue that AI training policies should be opt-in rather than opt-out, and that creators should be credited and paid for their contributions to AI training data sets.