Artificial intelligence (AI) platforms like Midjourney, which generate images from text prompts, have implemented banned words lists to maintain community safety standards. Yet while safeguarding users is crucial, these policies also raise thorny questions about censorship and free speech.
Why AI Platforms Filter Certain Words
Implementing banned words lists enables platforms to automatically filter out potentially objectionable content. Midjourney currently prohibits words related to violence, adult content, harassment, and more. This approach aims to foster an inclusive community and prevent harm.
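To make this concrete, here is a minimal sketch of a naive banned-words filter in Python. The word list and function name are invented for illustration, not Midjourney's actual implementation, but the structure shows why such filters are blunt instruments: they match individual terms with no regard for context.

```python
import re

# Illustrative list only; a real platform's list is far larger
# and typically not fully public.
BANNED_WORDS = {"gore", "torture", "mutilation"}

def is_prompt_blocked(prompt: str) -> bool:
    """Return True if the prompt contains any banned word.

    Naive whole-word matching: context is ignored entirely,
    which is exactly the weakness critics point to.
    """
    tokens = re.findall(r"[a-z']+", prompt.lower())
    return any(token in BANNED_WORDS for token in tokens)

print(is_prompt_blocked("a quiet meadow at dawn"))  # False
print(is_prompt_blocked("a scene of torture"))      # True
```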
However, critics note that banned words policies inevitably censor certain types of expression. Marginalized groups may find their speech disproportionately blocked if algorithms fail to consider context: blanket-banning words like "queer" or "trans" would restrict how LGBTQ+ users can depict themselves.
Nuanced Content Moderation Alternatives
Rather than outright banning words, platforms might consider more nuanced content moderation approaches:
- Community flagging of harmful posts, with human review
- Algorithms detecting objectionable content from combinations of words rather than individual terms (sketched in the example after this list)
- Allowing certain words in identity-affirming contexts from underrepresented groups
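As a sketch of the second bullet, the toy moderator below scores co-occurring word combinations rather than single terms, and routes anything above a threshold to human review instead of auto-rejecting it. The word pairs, weights, and threshold are all invented for illustration; a production system would use learned classifiers rather than a hand-written table.

```python
import re

# Only certain co-occurring pairs raise a flag; no single word
# is blocked outright. Pairs and weights are invented examples.
RISKY_COMBINATIONS = {
    frozenset({"graphic", "violence"}): 0.9,
    frozenset({"child", "weapon"}): 0.8,
}
REVIEW_THRESHOLD = 0.7

def moderation_decision(prompt: str) -> str:
    """Return 'human_review' or 'allow' for a prompt.

    Flagged prompts go to a reviewer rather than being rejected,
    mirroring the community-flagging idea in the first bullet.
    """
    tokens = set(re.findall(r"[a-z']+", prompt.lower()))
    score = max(
        (weight for combo, weight in RISKY_COMBINATIONS.items()
         if combo <= tokens),  # combination fully present in prompt
        default=0.0,
    )
    # Identity terms like "queer" or "trans" never trigger on their
    # own, so identity-affirming prompts pass through untouched.
    return "human_review" if score >= REVIEW_THRESHOLD else "allow"

print(moderation_decision("a proud queer couple at a parade"))  # allow
print(moderation_decision("graphic violence on a battlefield")) # human_review
```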
No approach is perfect. But incorporating human judgment through community participation may balance safety and speech concerns better than pure automation.
Encouraging Open Dialogue on Complex Issues
Content moderation on AI platforms, as with social networks, involves complex tradeoffs. Banning problematic speech risks limiting marginalized expression, while allowing it may harm vulnerable users.
As generative AI models become more ubiquitous for both creators and consumers, promoting constructive discussion around these issues is vital. There are rarely easy universal answers, but seeking input from diverse constituencies and reconsidering decisions when harms outweigh benefits are positive first steps.
By encouraging ongoing, transparent dialogue between companies and affected communities, we leave room for growth and policy change when inevitable mistakes occur. In doing so, we build freer platforms that empower more voices.