Editor's take: It didn't take long for users to generate controversial images using Grok, sparking a debate about how these AI-generated pictures might influence public perception of politicians or celebrities. With the potential for misinformation to impact elections, it is fair to question the responsibilities of developers and platforms in ensuring the integrity of information shared on their networks. Moreover, this initial wave of images could wind up being a cautionary tale if it is used to shape future regulations or guidelines for AI content creation tools.

With much fanfare and accompanied by great displays of imagination, Elon Musk's AI chatbot Grok has begun allowing users to create AI-generated images from text prompts and post them on X.

Developed by Musk's xAI, Grok is powered by the Flux.1 AI model from Black Forest Labs and is currently available to X's Premium subscribers.

Black Forest Labs is a Germany-based AI image and video startup that launched on August 1, and it appears to adhere to the same school of thought that is fueling Musk's vision for Grok as an "anti-woke chatbot."

Users have quickly taken advantage of Grok's features to create and disseminate fake images of political figures and celebrities, often placing them in disturbing or controversial scenarios.

This rapid proliferation of potentially misleading content has raised significant concerns, particularly given the upcoming US presidential election. Unlike other AI image generation tools, Grok seems to lack comprehensive safeguards or restrictions, which has sparked fears about the potential spread of misinformation.

In contrast, other major tech companies have implemented measures to curb the misuse of their AI tools. For instance, OpenAI, Meta, and Microsoft have developed technologies or labels to help identify AI-generated images. Additionally, platforms like YouTube and Instagram have taken steps to label such content. While X does have a policy against sharing misleading manipulated media, its enforcement remains unclear.

Although Grok claims to have some limitations, such as refusing to generate nude images, these restrictions appear to be inconsistently enforced. Further experiments by users on X have shown that Grok's limitations can be easily circumvented, leading to the creation of highly inappropriate and graphic content.

Despite its purported safeguards against producing violent or pornographic images, users have managed to generate disturbing images, including depictions of Elon Musk and Mickey Mouse involved in violent acts, or content that could be considered child exploitation when manipulated with specific prompts.

It is hard to imagine how this would fly on other AI image generation tools, many of which have been met with criticism for their various shortcomings. Google's Gemini AI chatbot paused its image generation of people after getting pushback for creating racially inaccurate portrayals. Similarly, Meta's AI image generator faced backlash due to difficulties in producing images of couples or friends from diverse racial backgrounds. And TikTok had to remove an AI video tool after it was revealed that users could create realistic videos of individuals making statements, including false claims about vaccines, without any identifying labels.

However, Musk, who has faced criticism for spreading election-related misinformation on X, is likely to remain unmoved when it comes to taking similar actions. He has praised Grok as "the most fun AI in the world," emphasizing its uncensored nature.

 
Humanity was always going to be the architect of its own undoing. AI is the beginning of the end. First it intrigued us. Now it serves us. Next it will learn us. And finally it will destroy us.
 
Not unless we get more electricity. China is building coal plants and we are building windmills. China will kill us before AI does.
 
You've been suspended in life for more than 300 million centuries; this is merely a rehash of a prior simulation... Rest easy, so to speak.
 
You must be more clear because I don't understand
 
Why do AI images of humans' skin always have a shiny plastic look to them? Will this get better over time?
 
Very good quality pool image. Gj mr Grok.
 
Humanity was always going to be the architect of its own undoing. AI is the beginning of the end. First it intrigued us. Now it serves us. Next it will learn us. And finally it will destroy us.
LOLOL, well I can easily tell the difference! Maybe they need to get better glasses???
 
Why do AI images of humans' skin always have a shiny plastic look to them? Will this get better over time?

I imagine it's due to many pictures online being 3D graphics or highly manipulated model photos with lots of makeup. Some of it may also be that realistic subsurface scattering for some reason isn't statistically likely within the models compared to base skin tones, removing realism in favour of the more 'statistically likely' pixel, i.e., one of base skin tone.
 
Whatever, as long as it's funny! Will the AI ever have a sense of humor?