"Profanity" (Built-in Classifier)
Determine whether a given text contains profanity.
Classes
The possible classes are False and True.

Examples
Basic Examples (2)
Use the "Profanity" built-in classifier to return True if a text contains strong language and False otherwise:
Obtain the probabilities for the possible classes:
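A sketch of the same call with the "Probabilities" property, again on a placeholder string; it returns an association of class probabilities:

Classify["Profanity", "What a beautiful day.", "Probabilities"]
(* returns an association of the form <|False -> p1, True -> p2|> *)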
Obtain a ClassifierFunction for this classifier:
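A sketch: calling Classify with only the classifier name yields a ClassifierFunction, which can then be applied to texts (the example text is a placeholder):

cf = Classify["Profanity"];
cf["What a beautiful day."]   (* apply the classifier function to a text *)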
Scope (1)
Load the ClassifierFunction corresponding to the built-in classifier:
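A sketch of loading the classifier once and reusing it; the variable name cf is arbitrary:

cf = Classify["Profanity"]
(* cf can now be applied directly, e.g. cf[text] or cf[text, "Probabilities"] *)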
Options (3)
ClassPriors (1)
Use a custom ClassPriors to restrict the possible outputs:
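A sketch with ClassPriors, assuming the goal is to exclude one class; the input string is a placeholder, and a prior of 0 on False means only True can be returned:

Classify["Profanity", "What a beautiful day.", ClassPriors -> <|False -> 0, True -> 1|>]
(* the False class has zero prior, so the output is restricted to True *)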
IndeterminateThreshold (1)
Use a custom IndeterminateThreshold:
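A sketch with IndeterminateThreshold, assuming an illustrative threshold of 0.9 on a placeholder string; if no class probability reaches the threshold, Indeterminate is returned:

Classify["Profanity", "What a beautiful day.", IndeterminateThreshold -> 0.9]
(* returns Indeterminate when no class reaches probability 0.9; otherwise the most probable class *)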