
Content Moderation

As a platform, our goal is to foster an environment of openness and inclusivity while empowering users with tools to see the content that is most interesting and relevant to them. We do this through a combination of user controls, automated content labeling, and content moderation.

User Control

We prioritize user control and offer options to customize the content you see on our platform, including:

  • Tag-Based Filtering: Users can hide specific content based on tags. For instance, if a user doesn't want to view anime content, they can hide the anime tag.
  • Selective Hiding: Users can choose to hide specific images, models, or even all models created by a specific user. (A simplified sketch of how these filters might combine follows this list.)
  • Content Categories: Users can opt in to the categories of moderated content they wish to view, such as enabling nudity but not explicit nudity. Furthermore, moderated content remains hidden until a user registers an account and sets their personal content visibility preferences. (Please see our Terms of Service for more on our content moderation.)
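To illustrate how these preferences might combine, here is a minimal, hypothetical sketch of client-side filtering. The `ContentItem` and `UserPreferences` shapes and the `visibleItems` helper are illustrative assumptions, not the platform's actual data model:

```typescript
// Hypothetical shapes for illustration only; the real platform's data model may differ.
interface ContentItem {
  id: number;
  creatorId: number;
  tags: string[];                // e.g. ["anime", "landscape"]
}

interface UserPreferences {
  hiddenTags: Set<string>;       // tags the user chose to hide
  hiddenCreatorIds: Set<number>; // creators whose models/images are hidden
  hiddenItemIds: Set<number>;    // individually hidden images or models
}

// Keep only the items the user has not hidden by tag, by creator, or individually.
function visibleItems(items: ContentItem[], prefs: UserPreferences): ContentItem[] {
  return items.filter(
    (item) =>
      !prefs.hiddenItemIds.has(item.id) &&
      !prefs.hiddenCreatorIds.has(item.creatorId) &&
      !item.tags.some((tag) => prefs.hiddenTags.has(tag))
  );
}
```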

Automated Content Labeling and Moderation

We facilitate this high degree of user control through a layered combination of checks:

  • Automated Content Labeling: We use Amazon Rekognition as well as an open-source image tagging system to automatically apply content labels and screen images for moderated content (see the sketch after this list).
  • Community-Driven Moderation: Users can vote on the content labels applied to images, helping us refine our image classification system. We encourage users to report content violations and reward their efforts through a Guardian's leaderboard that highlights the most helpful community members.
  • Manual Reviews: Certain combinations of tags or resources trigger a manual review before the content becomes publicly visible.
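As a rough illustration of the automated labeling step, the sketch below calls Amazon Rekognition's moderation-label API via the AWS SDK for JavaScript. The region, confidence threshold, and the idea of mapping labels to platform categories are assumptions made for the example, not a description of our actual pipeline:

```typescript
import {
  RekognitionClient,
  DetectModerationLabelsCommand,
} from "@aws-sdk/client-rekognition";

// Region and confidence threshold are example values, not production settings.
const rekognition = new RekognitionClient({ region: "us-east-1" });

async function labelImage(imageBytes: Uint8Array): Promise<string[]> {
  const response = await rekognition.send(
    new DetectModerationLabelsCommand({
      Image: { Bytes: imageBytes },
      MinConfidence: 60,
    })
  );
  // Each moderation label has a Name (e.g. "Explicit Nudity") and a Confidence score;
  // a platform would then map these labels onto its own content categories.
  return (response.ModerationLabels ?? []).map((label) => label.Name ?? "");
}
```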

As the platform grows, we are committed to continuous improvement and maintaining a user-oriented and safe platform. As part of our community, we encourage you to contribute to these efforts by accurately tagging the models you create, participating in tagging others' models where possible, and promptly reporting any content that violates our Terms of Service. Together we can better navigate the development of new AI content while upholding the openness and inclusivity of this platform and community.