API Documentation
Welcome to Modergator's API documentation. Our simple yet powerful API enables you to integrate content moderation capabilities directly into your applications. With just a single API endpoint, you can moderate images and videos with enterprise-grade accuracy.
Getting Started
2. Get Your API Key
Generate an API key from your dashboard to start making requests.
3. Make Your First Request
Here are a few snippets to get you started!
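A first request might look like the sketch below. Note that the base URL, the `/v1/jobs` path, and the `mediaUrl` field name are illustrative placeholders, not confirmed values — substitute the endpoint and field names shown in your dashboard.

```bash
# Hypothetical endpoint and field name -- check your dashboard for the real values.
curl -X POST "https://api.modergator.com/v1/jobs" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"mediaUrl": "https://example.com/uploads/photo.jpg"}'
```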
4. Check Out Your Results
Check out the moderation results on your logs page.
Track your API usage and billing in your settings page.
API Reference
Authentication
Secure your API requests using your Modergator API key. Each request must include your API key in the Authorization header. You can generate API keys from your dashboard.
AUTHENTICATION HEADER
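Assuming a Bearer scheme (the exact format is an assumption — confirm it in your dashboard), the header looks like:

```
Authorization: Bearer YOUR_API_KEY
```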
POST Moderation Job
Create a job to moderate a piece of media.
ENDPOINT
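A plausible shape for the endpoint (the host and path here are placeholders, not confirmed values):

```
POST https://api.modergator.com/v1/jobs
```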
REQUEST EXAMPLE
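An illustrative request including the optional `metadata` object. The endpoint and the `mediaUrl` field name are assumptions; `userId` and `postId` are example metadata keys you might attach.

```bash
# Endpoint and mediaUrl field are hypothetical -- see your dashboard.
curl -X POST "https://api.modergator.com/v1/jobs" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "mediaUrl": "https://your-bucket.s3.amazonaws.com/uploads/photo.jpg",
        "metadata": { "userId": "user_123", "postId": "post_456" }
      }'
```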
Metadata
The `metadata` field is an optional object that can contain any additional information you want to associate with the moderation request. This could include identifiers like `userId`, `postId`, or any other contextual data that helps you track and manage your moderation requests. The metadata will be preserved and returned with the moderation results.
Blob Files
Currently, our API does not support direct blob uploads. You must provide a public URL (e.g. from your own S3 bucket) for the media you want to moderate. This may change in the future.
Response Format
After your POST, the API asynchronously starts moderating your content. The response contains a job object that you can use to check the moderation status once it completes.
RESPONSE EXAMPLE
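The response might resemble the following sketch. Only the `status` and `metadata` fields are described in these docs; the `id` field and the `PENDING` value are illustrative assumptions.

```json
{
  "job": {
    "id": "job_abc123",
    "status": "PENDING",
    "metadata": { "userId": "user_123", "postId": "post_456" }
  }
}
```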
GET Moderation Job
Retrieve a moderation job to see whether it's completed yet and, once it is, get the results of your content's moderation.
ENDPOINT
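A plausible shape (the host, path, and `jobId` parameter name are placeholders):

```
GET https://api.modergator.com/v1/jobs/{jobId}
```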
Each moderation request has two status fields: `status` and `moderation_status`. The `status` field indicates the overall request status, while the `moderation_status` field indicates the moderation result.
RESPONSE EXAMPLE
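An illustrative completed-job response. The `status` and `moderation_status` fields come from the description above, while the `id` field and the `COMPLETED` value are assumptions.

```json
{
  "job": {
    "id": "job_abc123",
    "status": "COMPLETED",
    "moderation_status": "UNSAFE",
    "metadata": { "userId": "user_123" }
  }
}
```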
To understand how the API categorizes moderation results, refer to the Moderation Details section below.
Getting More Detailed Results
We do not include extra moderation details in the default response. If you need more detailed moderation results, simply include the `includeDetails` query parameter in your request.
REQUEST EXAMPLE
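For example (hypothetical endpoint and job id, with the documented `includeDetails` query parameter):

```bash
curl "https://api.modergator.com/v1/jobs/job_abc123?includeDetails=true" \
  -H "Authorization: Bearer YOUR_API_KEY"
```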
DETAILED RESPONSE EXAMPLE
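A detailed response might look like this sketch. The `moderation_results` array and confidence levels are documented in these docs, but the exact shape of each entry (a category name plus a confidence value) is an assumption.

```json
{
  "job": {
    "id": "job_abc123",
    "status": "COMPLETED",
    "moderation_status": "FLAGGED",
    "moderation_results": [
      { "category": "Alcohol", "confidence": 87.4 }
    ]
  }
}
```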
Safe Moderation Status
If the moderation status is `SAFE`, the `moderation_results` field will be an empty array and no confidence level will be included. A result is marked as `SAFE` when the confidence level of every item detected in the content is below 55%.
Moderation Details
Our moderation API analyzes images and videos for inappropriate content, helping you maintain a safe and compliant platform. Our goal is to do as much of the thinking for you as possible - so you can focus on building great products and communities.
In support of that, the API is dead simple and our moderation results are slightly opinionated. In the future, we will allow you to configure the status tagging to better fit your needs, but for now, you can reference the following categories to understand how the API will automatically tag your content.
Generally, there are 4 high-level categories that our API will always mark as UNSAFE:
- Explicit - Sexual content or nudity.
- Violence - Physical harm or injury.
- Visually Disturbing - Graphic or gory imagery.
- Hate Symbols - Offensive or discriminatory symbols.
The remaining categories are marked with the lower-severity status of FLAGGED:
- Non-Explicit Nudity of Intimate parts and Kissing
- Swimwear or Underwear
- Drugs & Tobacco
- Alcohol
- Rude Gestures
- Gambling
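The three statuses above suggest a simple client-side dispatch. Here is a minimal sketch — the `handle_moderation` helper and the actions it returns are hypothetical; only the `SAFE`/`FLAGGED`/`UNSAFE` values come from these docs:

```python
def handle_moderation(job: dict) -> str:
    """Map a job's moderation_status to a hypothetical follow-up action."""
    status = job["moderation_status"]
    if status == "UNSAFE":
        # Explicit, Violence, Visually Disturbing, or Hate Symbols
        return "remove"
    if status == "FLAGGED":
        # Lower-severity categories such as Alcohol or Gambling
        return "queue_for_review"
    # SAFE: moderation_results is an empty array with no confidence level
    return "publish"

print(handle_moderation({"moderation_status": "FLAGGED"}))  # queue_for_review
```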