Filtering Videos

Available since 3.28.0

1. Filtering Videos

Filtering videos is easy with the new media filter.

1.1. Prerequisites

  • You have contacted CleanSpeak to have the MediaFilter enabled for your license

1.2. Considerations

  • You will be billed per operation:

    • Videos are billed as operations = number of filters enabled × (length of the video in seconds + 1); a worked example follows this list

    • Animated GIFs are billed as operations = number of filters enabled × number of frames in the GIF

  • All video filters are executed asynchronously and require that content storage is enabled in your application and that a webhook is set up in your integration
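
For example, a 45-second video with two filter types enabled is billed 2 × (45 + 1) = 92 operations, while an animated GIF with 30 frames and the same two filters enabled is billed 2 × 30 = 60 operations.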

1.3. How to configure CleanSpeak to use this feature

1.3.1. Set up a webhook

  1. If you already have a webhook configured for your application (with the ContentApproval event enabled), you can skip this section

  2. Navigate to the Webhooks section under Settings

  3. Create a new webhook

  4. Use a URL that your integration is listening on for webhooks (a minimal listener sketch follows these steps)

  5. Enable the ContentApproval event

  6. Click the Applications tab

  7. Select the application you intend to use with video moderation
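
The sketch below illustrates a minimal listener for the webhook URL used in step 4, written in Python with Flask. It is only a sketch under assumptions: the route path, port and event field names are placeholders, and the actual ContentApproval payload is described in the Webhooks documentation.

Example webhook listener (Python sketch)
# Minimal webhook listener sketch (assumes Flask is installed: pip install flask).
# The route path, port and event field names below are illustrative assumptions;
# see the CleanSpeak Webhooks documentation for the real ContentApproval payload.
from flask import Flask, request

app = Flask(__name__)

@app.route("/cleanspeak/webhook", methods=["POST"])
def cleanspeak_webhook():
    event = request.get_json(silent=True) or {}
    # Assumed field name: a "type" value identifying the event.
    if event.get("type") == "content-approval":
        # Apply the approve/reject decision to your own content store here.
        print("Received content approval event:", event)
    # Respond with 200 so CleanSpeak treats the delivery as successful.
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)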

1.3.2. Set up the video filter

  1. Navigate to your application configuration

  2. Enable Store Content

  3. Click the Media tab

  4. Enable the video filter (if the video filter is missing, your license does not have the media filter feature enabled and you will need to contact sales)

  5. Enable at least one filter type [Nudity, Scam, Offensive, Weapon/Alcohol/Drugs]

  6. Select at least one filter action for the enabled filter (if every action is set to Allow, the filter is treated as disabled)

  7. Click Save

1.3.3. Moderate some content

You can now submit content to this application via the moderate endpoint.

When you submit content to the moderate endpoint you will get a queuedForApproval response right away and the video will begin moderation. After moderation is complete, your application will receive an event through the webhook you set up, or the video will enter the moderation queue for manual review (depending on the filter action in the rule that the filter triggers).

1.4. Example usage of the filter

URI

POST /content/item/moderate

For persistent content you will need to provide a unique identifier (UUID) on the API request. Before using this API with persistent content, ensure your Application configuration is set to the Persistent Content Type; if it is not, the API request will fail with an error message indicating you have not yet configured your Application.

URI

POST /content/item/moderate/{contentItemId}

Table 1. Request Parameters

contentItemId [UUID] Required

The Id of the Content Item.

Table 2. Request Body

content [Object] Required

The piece of content being moderated.

content.applicationId [UUID] Required

The Id of the application that corresponds to the content source.

content.createInstant [Long] Required

The instant that the content was generated.

content.location [String] Optional

Specifies the location within the application that the content was generated. For example, you might use a chat room name, area Id for a game, or a thread Id for a forum. This parameter is used by CleanSpeak to display conversational views of content with the Context View feature.

content.parts [Array] Required

An array that contains one or more content parts. If your content has only a single part, such as a chat message, the array will contain a single text entry. Send multiple content parts within a single request when the moderation decision should affect every part.

For example, you may send in an image for moderation that has a title and a description field. You would send this to CleanSpeak as three content parts: the image and two text parts. If the image title is rejected, the entire piece of content (title, description and image) needs to be rejected as a whole. In this case it makes sense to treat all three parts as an atomic unit of content.

If individual content parts are not related and can be rejected or approved separately then you should send them to CleanSpeak as separate requests.

content.parts[x].content [String] Required

The content to be filtered. For videos, this is a fully qualified S3 URL of the video.

content.parts[x].name [String] Optional

The name of the content part. This value is optional and intended to better identify the context when you have more than one content part. For example, this could be Title, Message Body or Image.

content.parts[x].type [String] Required

The type of this content part. Use video for this filter.

Other possible values:

  • text

  • bbcode

  • html Available Since 3.19.0

  • attribute

  • hyperlink

  • image

  • video

  • audio

content.receiverDisplayName [String] Optional

The display name of the user that received the content. This parameter should only be set if this piece of content was a private message between two users.

content.receiverId [UUID] Optional

The Id of the user that received the content. This parameter should only be set if this piece of content was a private message between two users.

content.senderDisplayName [String] Optional

The display name (username or whatever is displayed to other users in the game/forum) that generated the content.

content.senderId [UUID] Required

The Id of the user that generated the content. This parameter is required so that CleanSpeak can analyze and associate the content with the user that generated it.

moderation [String] Optional

Tells CleanSpeak to forcibly put the content into a queue (Approval, User Alert or Content Alert). This overrides the configuration you have set up for the Application in the Management Interface. The possible values are:

  • requiresApproval

  • generatesAlert

  • generatesContentAlert

Example Request JSON
{
  "content": {
    "applicationId": "f81d4fae-7dec-11d0-a765-00a0c91e6bf6",
    "createInstant": 13405983821,
    "location": "page296",
    "parts": [
      {
        "content": "https://s3.us-east-2.amazonaws.com/public-resources-us-east.inversoft.com/video-moderation/sfw/rally.mp4",
        "name": "Body",
        "type": "video"
      }
    ],
    "senderDisplayName": "PapaSmurf",
    "senderId": "f6d3df91-ed4b-48ad-810f-05a367d328c2"
  }
}
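
The sketch below shows one way the example request above might be submitted from an integration, using Python and the requests library. The base URL and API key are placeholders for your own deployment values; this is an illustration, not the only way to call the API.

Example request submission (Python sketch)
# Sketch of submitting the example request above with the requests library
# (pip install requests). CLEANSPEAK_URL and API_KEY are placeholders.
import requests

CLEANSPEAK_URL = "http://localhost:8001"   # your CleanSpeak API address (placeholder)
API_KEY = "your-api-key"                   # an API key from the management interface (placeholder)

body = {
    "content": {
        "applicationId": "f81d4fae-7dec-11d0-a765-00a0c91e6bf6",
        "createInstant": 13405983821,
        "location": "page296",
        "parts": [
            {
                "content": "https://s3.us-east-2.amazonaws.com/public-resources-us-east.inversoft.com/video-moderation/sfw/rally.mp4",
                "name": "Body",
                "type": "video"
            }
        ],
        "senderDisplayName": "PapaSmurf",
        "senderId": "f6d3df91-ed4b-48ad-810f-05a367d328c2"
    }
}

response = requests.post(
    f"{CLEANSPEAK_URL}/content/item/moderate",
    json=body,
    headers={"Authorization": API_KEY},
)

# With the video filter enabled, expect contentAction to be queuedForApproval;
# the final decision arrives later through the webhook configured earlier.
print(response.status_code, response.json())

For persistent content, the same body would instead be POSTed to /content/item/moderate/{contentItemId} with a UUID you generate for the content item.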

1.5. Response

Table 3. Response Codes

200

The request was successful. The response will contain a JSON body.

400

The request was invalid and/or malformed. The response will contain an Errors JSON Object with the specific errors.

401

You did not supply a valid Authorization header. The header was omitted or your API key was not valid. The response will be empty. See Authentication.

402

Your license has expired. The response will be empty. Contact sales@cleanspeak.com for assistance.

500

There was an internal error. A stack trace is provided and logged in the CleanSpeak log files. The response will be empty.

Table 4. Response Body

content [Object]

The piece of content being moderated.

content.id [UUID]

The Id of the piece of content. This might have been passed in on the request or generated by CleanSpeak.

content.parts [Array]

The list of parts of content.

content.parts[x].mediaFilterResult.alcoholConfidence [Float]

The confidence that the image contains alcohol imagery.

content.parts[x].mediaFilterResult.drugsConfidence [Float]

The confidence that the image contains drug imagery.

content.parts[x].mediaFilterResult.offensiveResults[x].offensiveConfidence [Float]

The confidence that the image contains offensive content.

content.parts[x].mediaFilterResult.offensiveResults[x].offensiveTag [String]

The tag for the type of offensive content.

content.parts[x].mediaFilterResult.offensiveResults[x].position [Float]

The frame that the offensive content exists on.

content.parts[x].mediaFilterResult.partialNudityConfidence [Float]

The confidence that the image contains partial nudity.

content.parts[x].mediaFilterResult.partialNudityTag [String]

The associated tag for the type of partial nudity.

content.parts[x].mediaFilterResult.rawNudityConfidence [Float]

The confidence that the image contains raw nudity.

content.parts[x].mediaFilterResult.scamConfidence [Float]

The confidence that the image is a known scammer image.

content.parts[x].mediaFilterResult.weaponConfidence [Float]

The confidence that the image contains weapon imagery.

content.parts[x].name [String]

The name of the part of the content.

content.parts[x].replacement [String]

The replacement text generated by CleanSpeak after applying the filter rules to this part.

contentAction [String]

The action that the client should take with the content. Possible values:

Content Actions

allow

The content should be allowed.

authorOnly

The content should be allowed but only the author of the content should be allowed to view the content.

replace

The content should be allowed, but utilize the replacement text provided in the response.

queuedForApproval

The content has been queued for approval and should not be displayed to other users until it has been approved by a moderator.

reject

The content should be rejected.

This will always be queuedForApproval if the video filter is enabled. (You will receive the result in the form of a webhook event once moderation is complete.)
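
As an illustration of how an integration might act on these values, the Python sketch below maps each contentAction to a display decision. The function name and return strings are hypothetical and should be adapted to your integration.

Example contentAction handling (Python sketch)
# Illustrative handling of the contentAction value returned by the moderate endpoint.
# The function name and return values are hypothetical; adapt them to your integration.
def handle_content_action(content_action, replacement=None):
    if content_action == "allow":
        return "display the content"
    if content_action == "authorOnly":
        return "display the content to its author only"
    if content_action == "replace":
        return "display the replacement text: " + (replacement or "")
    if content_action == "queuedForApproval":
        # With the video filter enabled this is always the initial action;
        # hide the content until the webhook delivers the final decision.
        return "hide the content until it is approved"
    if content_action == "reject":
        return "do not display the content"
    raise ValueError("unknown contentAction: " + content_action)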

moderationAction [String]

The action that was taken on the content. Possible values:

  • requiresApproval

  • generatesAlert

  • generatesContentAlert

This will always be missing if the video filter is enabled. (A video may transition to requiresApproval after automatic moderation finishes and hits a Queue for Approval rule.)

stored [Boolean]

This value indicates if the content was stored by CleanSpeak. This will always be true for video moderation if the video filter is enabled.

Example Response JSON
{
  "content": {
    "id": "99f2c4e8-961a-4a34-b9b9-43fc3f3b43ec"
  },
  "contentAction": "allow",
  "stored": true
}