In a bold move, Elon Musk’s X, formerly Twitter, has released Grok 2, a large language model and AI image generator with very few safeguards. This latest update to X’s chatbot allows premium users to generate nearly any image they can imagine – including deepfakes, copyrighted characters, and potentially offensive content.

Musk presents this as a win for free speech, but it is raising alarm bells for legal experts. Unlike other popular AI tools, Grok 2 lacks robust content moderation and copyright protections. Users can freely generate high-quality images of public figures such as Kamala Harris or Donald Trump in compromising positions, or of copyrighted characters like Mickey Mouse engaged in questionable activities.

This unrestricted approach sets Grok 2 apart from other major commercial AI services. While it gives users unprecedented creative freedom, it also opens the door to copyright infringement, defamation, and deepfakes, and facilitates the spread of misinformation. A surge in copyright and personality rights cases is likely to follow.

The release of Grok 2 also highlights the pressing need for a clearer legal framework around AI-generated content. As regulators scramble to catch up, X's approach to AI is already facing scrutiny: the European Commission is investigating X for potential violations of the Digital Services Act.

Employers are encouraged to review their Use of AI policies in light of Grok 2's capabilities, as its unrestricted image generation poses a significant risk of producing infringing derivatives of copyrighted works.