The European Commission is displeased that Grok, Elon Musk’s AI chatbot, makes it very easy to digitally strip men, women, and children down to their underwear in edited images. “This isn’t racy, it’s illegal, and has no place in Europe,” the Commission responded.
Grok has been the subject of criticism for some time, ever since users of Elon Musk’s xAI chatbot discovered they could use it to edit existing photos of people into more revealing or sexually suggestive images. The program can, for example, remove clothing or put people in bikinis.
Users can have photos edited without the permission of the people depicted in them. Sometimes the editing stays within Grok’s application, but in other cases the images end up on X (where Grok is integrated), where those ‘spicy’ images suddenly become publicly visible. Critics and activists call this digital sexual exploitation or abuse, especially because women are targeted without their consent.
In some cases, Grok has even generated sexualised AI images of minors at users’ request. Children have, for example, been depicted in their underwear, and the resulting images circulate on social media.
In one particularly disturbing case, the tool was found to be removing clothing from photos of recognisable minors, including a 14-year-old actress from the television series “Stranger Things.” Legal experts have criticised the feature as potentially violating both US and European laws on child abuse material.
European Commission demands action
Grok and xAI are aware of the issues. Elon Musk’s company previously acknowledged the shortcomings and said it was urgently working to address them.
Grok itself has even called the feature “illegal and prohibited,” saying it would block users who generate such photos and videos. For now, however, such racy content appears to remain very easy to generate.
The European Commission is “very aware” that X offers a so-called “spicy mode,” according to spokesman Thomas Regnier. “This is not spicy. This is illegal. This is shocking. This is disgusting. This is how we look at this, and this has no place in Europe,” he said sharply.
The British media watchdog is concerned
In the United Kingdom, media regulator Ofcom is also demanding an explanation from X. The regulator wants to know how Grok was able to produce images in which people are undressed and children are sexualised, and whether X has thereby breached its legal duty to protect users.
X has not yet responded to the statements from the European Commission or Ofcom. In an earlier response to Reuters, the company dismissed such reports as “lies by traditional media.” Elon Musk himself brushed off the criticism online by posting laughing emojis alongside edited images of celebrities shown in bikinis.
Ofcom says it is concerned about the seriousness of the situation. “We have contacted X and xAI as a matter of urgency to understand what steps they are taking to meet their legal obligations and protect UK users,” a spokesperson said.
In the UK, creating or distributing non-consensual intimate images and child sexual abuse material, even when generated by AI, is a criminal offence. Tech platforms are also required to prevent users from encountering illegal content and to remove such content as soon as they become aware of it.
