"Profanity" (Built-in Classifier)

Determine whether a given text contains profanity.

Classes

    {False, True}

Details

  • This classifier detects whether a text contains offensive language.
  • The current version works for English text only.

Examples


Basic Examples  (2)

Use the "Profanity" built-in classifier to return True if a text contains strong language and False otherwise:

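A sketch of the call, using an illustrative input string (the original example text is not preserved on this page; the calling pattern Classify["Profanity", text] is the documented form):

```wolfram
(* illustrative input string; classifies a single text *)
Classify["Profanity", "What a lovely morning!"]
```

For benign text such as this, the classifier is expected to return False.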

Classify multiple examples:

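Passing a list of texts classifies each element; a sketch with illustrative inputs:

```wolfram
(* a list of texts yields a list of True/False results, one per text *)
Classify["Profanity", {"Have a nice day.", "You are wonderful."}]
```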

Obtain the probabilities for the possible classes:

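Class probabilities are requested with the "Probabilities" property; a sketch with an illustrative input:

```wolfram
(* returns an Association mapping each class (False, True) to its probability *)
Classify["Profanity", "Have a nice day.", "Probabilities"]
```

The probabilities for the two classes sum to 1.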

Obtain a ClassifierFunction for this classifier:

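Calling Classify with only the classifier name returns the underlying ClassifierFunction:

```wolfram
(* with no input text, Classify returns a ClassifierFunction object *)
cf = Classify["Profanity"]
```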

Apply the classifier to a list of texts:

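The ClassifierFunction can then be applied directly to a list of texts; a sketch with illustrative inputs:

```wolfram
cf = Classify["Profanity"];
(* applying the ClassifierFunction to a list classifies each text *)
cf[{"Good morning!", "See you later."}]
```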

Scope  (1)

Options  (3)

See Also

Classify  NetModel  TextCases  TextSentences  WolframLanguageData  NaiveBayes

Related Models