Elon Musk’s Grok AI Restricts Image Editing to Paid Users After Deepfake Abuse

Elon Musk’s artificial intelligence chatbot, Grok, has limited its image editing and generation features to paid subscribers only following widespread criticism over the creation of non-consensual deepfake images.

Grok, which operates on the social media platform X (formerly Twitter), was reportedly used to manipulate photos of individuals without their consent.

The controversy erupted after the AI tool generated sexualised images, including altered pictures of women and, in some reported cases, minors.

The images sparked outrage among privacy advocates, child-safety groups, and the general public, raising concerns about the misuse of artificial intelligence and inadequate content moderation safeguards.

In response to the backlash, X restricted access to Grok’s image editing tools, making them available only to paying users.

Although the company has not directly attributed the decision to deepfake abuse, the move is believed to be aimed at improving accountability, as paid users are tied to verified payment information.

The development has drawn the attention of regulators in Europe and the United Kingdom, with officials warning that some AI-generated content could breach existing laws on privacy, child protection, and the sharing of intimate images without consent.

Grok has since acknowledged gaps in its safety controls, as global debate intensifies over AI regulation and digital safety.