The news: Elon Musk-owned platform X is introducing restrictions to its Grok AI model in response to backlash over the tool generating explicit deepfakes, including sexual images of children.
Grok’s past: Complaints about Grok’s lackluster content moderation are mounting. The tool stirred controversy in July when it generated antisemitic outputs, which led to its image-generation capabilities being briefly restricted. In the fallout, Turkey banned Grok and EU regulators pushed for stricter rules on AI chatbots.
The broader issue: Grok’s missteps have drawn intense scrutiny, but the latest scandal underscores a wider problem: AI tools make it easy for users to produce harmful and explicit content.
Implications for marketers: AI tools frequently lack the safety measures required to prevent harm. Ongoing volatility with tools like Grok could make marketers who are interested in piloting AI programs shy away unless they have clear information about a tool’s transparency and safety guardrails.
The reputational risk associated with deepfake content could outweigh Grok’s current appeal to marketers, such as its plans to integrate ads into responses. Marketers should remain vigilant and prioritize platforms whose AI tools come with clear safety and governance measures.