EyeEm Sparking Controversy: Photos Now Fuel AI Training (Unless You Opt-Out)


Photo-sharing platform EyeEm is facing a user backlash after implementing a new policy that allows the company to license user photos for training artificial intelligence models. The move comes after EyeEm’s acquisition by Spanish company Freepik following its bankruptcy last year.

The new policy involves adding a clause to the Terms & Conditions, essentially granting EyeEm the right to use user-uploaded content to “train, develop, and improve software, algorithms, and machine-learning models.” While this may benefit the development of AI, many users are unhappy with the way they were informed and the limited options provided.

The biggest point of contention is the opt-out process. EyeEm gave users a mere 30-day window to opt out, and the only way to do so is to remove all of their content from the platform. This timeframe seems unreasonable considering the sheer volume of photos some users may have uploaded over the years. Additionally, EyeEm acknowledges that complete deletion from partner platforms can take up to 180 days, further limiting user control over their work.

This has led many to criticize the policy as an unfair and inconsiderate way to acquire vast amounts of training data. Photographers argue they should have more explicit opt-in choices, allowing them to decide if their work contributes to AI development.

The controversy surrounding EyeEm’s policy raises important questions about user rights and data ownership in the digital age. As AI continues to evolve, it’s crucial to find a balance between technological progress and fair treatment of those who contribute the data that fuels it.
