Meta Responds to ‘Made with AI’ Labels Annoying Instagram Users
Meta is reevaluating how it labels posts made with AI after users complained about excessive warnings filling their feeds.
Like other tech companies, Meta allows AI-generated posts on platforms like Instagram, typically accompanied by a label. The approach aims to inform users that the content may not be entirely real, while still letting creators use AI in their work.
Recently, concerns have arisen because even minor edits made with AI-assisted tools in apps like Photoshop, sometimes as small as changing a single pixel, trigger the “made with AI” label. This has frustrated photographers and others, who worry that legitimate content may be unfairly mistrusted because of these labels.
In response to that feedback, Meta has acknowledged the issue and says it may change its labeling requirements in the future.
“Our goal has always been to inform users when they encounter AI-generated content. We’re listening to recent feedback and reviewing our approach to ensure our labels accurately reflect the level of AI involvement in each image,” stated a Meta spokesperson.
Meta employs industry-standard indicators, including metadata written by editing apps like Adobe Photoshop, to detect AI edits. This data, while invisible to viewers, helps establish the authenticity and origin of edited images across its platforms. The company is actively collaborating with industry partners to refine these processes and align its labeling practices with their intended purpose.
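The article does not say exactly which metadata signals Meta checks, but one widely used industry scheme is the IPTC Digital Source Type vocabulary (and C2PA content-credential manifests), which editing apps can embed in an image's XMP metadata. As a rough, hypothetical sketch of how such provenance markers could be detected, the following scans a file's raw bytes for those marker strings; the marker list and function name are assumptions for illustration, not Meta's actual detection logic:

```python
# Hypothetical sketch: look for industry-standard AI-provenance markers
# in an image file's embedded metadata. Not Meta's actual implementation.
from pathlib import Path

# Assumed marker strings, drawn from the IPTC DigitalSourceType
# vocabulary and the C2PA manifest label.
AI_PROVENANCE_MARKERS = [
    b"trainedAlgorithmicMedia",                # IPTC: fully AI-generated media
    b"compositeWithTrainedAlgorithmicMedia",   # IPTC: AI-assisted composite/edit
    b"c2pa",                                   # C2PA content-credentials manifest
]

def find_ai_markers(image_path: str) -> list[str]:
    """Return any known AI-provenance marker strings found in the file."""
    data = Path(image_path).read_bytes()
    return [m.decode() for m in AI_PROVENANCE_MARKERS if m in data]
```

A real pipeline would parse the XMP/C2PA structures properly rather than byte-scan, but this illustrates why a small AI-assisted retouch can carry the same metadata flag as a fully generated image.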