AssessmentFunction[key]

represents a tool for assessing whether answers are correct according to the key.


AssessmentFunction[key,"method"]

uses the specified answer comparison method.


AssessmentFunction[key,f]

uses the function f to compare answers with the key.


AssessmentFunction[key,comp]

performs assessment using the custom assessment defined in the Association comp.


AssessmentFunction[obj]

represents an assessment function that performs assessment using the CloudObject obj.


assesses the specified question within the CloudObject obj.

AssessmentFunction[…][answer]

gives an AssessmentResultObject representing the correctness of answer.

Details and Options

  • AssessmentFunction is commonly used within QuestionObject to define how to assess answers to a question.
  • The key accepts the following forms:
  • ans    answer matches the pattern ans
    {ans1,ans2,…}    answer is any of the ansi
    {{a1,a2,…}}    answer is the list {a1,a2,…}
  • Each possible answer ansi can have the following forms:
  • patt    pattern matching all correct responses
    pattscore    pattern and corresponding score to be awarded
    pattansspec    Association containing a complete answer specification
  • Using pattscore is equivalent to patt<|"Score"score|>.
  • The score should have either Boolean or numeric values. True and positive numeric scores denote correct answers, while False, zero and negative scores are incorrect.
  • The patti can be exact answer values or patterns against which the values of answer are compared.
  • In AssessmentFunction[{patt1,patt2,…}], when no scores are provided, all the patti are treated as correct. If a single patti is set to True or a positive score, all other patti are treated as incorrect answers.
  • The full answer specification ansspec accepts the following keys:
  • "Score" (required)    award given for matching answers
    "AnswerCorrect"    whether the ans is considered correct
    "Category"    category corresponding to the answer for sorting questions
    "Explanation"    text to be provided to the user
  • Answer comparison methods supported in AssessmentFunction[key,"method"] include the following "method" values and use the corresponding distance functions to compare answers and solutions against the Tolerance, where None represents requiring an exact match.
  • "Number"    Norm[#1-#2]&    scalar numeric values
    "Item"    None    any expression
    "HeldExpression"    None    expressions held without evaluation
    "GeoPosition"    GeoDistance    geographic locations
    "Point"    Norm[#1-#2]&    geometric Point values
    "List"    None    an ordered list
    "UnorderedList"    None    list where ordering is not important
    "Color"    ColorDistance    color values
    "Quantity"    Norm[#1-#2]&    quantity with magnitude and unit
  • In AssessmentFunction[key,comp], comp is an Association. The accepted keys are:
  • "Comparator"    function to compare provided answer to each pattern in key
    "Selector"    function to select matching pattern for provided answer
    "ElementwiseAssessment"    whether list answers should be assessed elementwise with partial credit
    "ScoreCombiner"    function to combine elementwise "Score" values
    "AnswerCorrectCombiner"    function to combine elementwise "AnswerCorrect" values
  • AssessmentFunction[key,f] for a function f is equivalent to AssessmentFunction[key,<|"Comparator"f|>].
  • Only one of "Comparator" or "Selector" should be provided. Using "Comparator"compf computes compf[answer,patt] for each ans in the key in order and chooses the first ans that gives True. Common comparators include MatchQ, Greater, StringMatchQ and SameQ.
  • A custom comparator that takes only the user's answer as input can be used without specifying a key. In this case, Automatic is accepted as a key.
  • Using "Selector"selectf computes selectf[{patt1,patt2,…},answer] and returns the patt corresponding to the selected ans. Common selectors include SelectFirst, Composition[First,Nearest] and Composition[First,TakeLargestBy].
  • By default, the answer is assessed as a single complete answer. Using "ElementwiseAssessment"True assesses list answers one element at a time. The "Score" and "AnswerCorrect" results for each element are combined using the "ScoreCombiner" and "AnswerCorrectCombiner" functions, respectively; these combiners are applied only when "ElementwiseAssessment" is True.
  • AssessmentFunction accepts the following options:
  • DistanceFunction    Automatic    distance metric to use
    Tolerance    Automatic    distance to accept when matching answers
    MaxItems    Infinity    limit on number of elements in elementwise assessment
  • AssessmentFunction[key] is equivalent to AssessmentFunction[key,Automatic] and infers an answer comparison type from key.
  • Each answer comparison type corresponds to a predefined comparator or selector function. When no built-in notion of distance exists for the comparison type, a "Comparator" of MatchQ is typically used.
  • When a notion of distance does exist for a comparison type, AssessmentFunction uses a "Selector" of First@*Nearest and accepts Tolerance and DistanceFunction options.
  • For elementwise assessment, "AnswerCorrectCombiner" should take a list of Booleans representing the correctness of each element and return a single Boolean for the overall correctness of the answer. The default depends on the comparison method. The most common default value is AllTrue[#,TrueQ]&.
  • For elementwise assessment, "ScoreCombiner" should take a list of numeric values representing the score of each element and return a total numeric score for the answer. The default depends on the comparison method. The most common default value is Total.
  • When performing elementwise assessment, if the number of elements given is greater than the value of MaxItems, AssessmentFunction gives a Failure.
  • Information works on AssessmentFunction and accepts the following prop values:
  • "AnswerComparisonMethod"    expected type for the values (e.g. "Number", "GeoPosition")
    "Key"    key used to assess answers
  • Information[AssessmentFunction[],"Properties"] provides a full list of available prop values.
  • AssessmentFunction[CloudObject[…]] performs the assessment remotely within the specified CloudObject. This prevents the user providing the answers from modifying the assessment.



Basic Examples  (2)

Create an assessment function that will check for the answer "Dog":
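A sketch of such a function (the answer value comes from the prompt; any non-matching answer is incorrect):

```wolfram
(* a single literal answer as the key *)
f = AssessmentFunction["Dog"];

f["Dog"]  (* matches the key: correct *)
f["Cat"]  (* not in the key: incorrect *)
```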

Define an assessment function that gives 10 points for any answer over 100:
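One way to express this, pairing a pattern with a score (the threshold 100 comes from the prompt):

```wolfram
(* award 10 points to any answer matching the pattern *)
f = AssessmentFunction[_?(# > 100 &) -> 10];

f[150]  (* matches, so 10 points are awarded *)
f[50]   (* does not match: incorrect *)
```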

Scope  (17)

Create an assessment function that awards two points for an even number:

Apply the assessment to an answer:
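A sketch combining the two steps above:

```wolfram
(* two points when the answer matches the even-number pattern *)
f = AssessmentFunction[_?EvenQ -> 2];

(* apply the assessment to an answer *)
f[4]  (* even: correct, two points *)
f[3]  (* odd: incorrect *)
```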

Make an assessment function that checks if a user's code is equivalent to the answer key:

Code transformation rules attempt to determine if the code is equivalent:

Answers that are not equivalent are incorrect:

Create an assessment function that accepts any of a list of values:

Apply the assessment function:

See the results:
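A sketch of the three steps above, with illustrative answer values:

```wolfram
(* any of the listed values counts as a correct answer *)
f = AssessmentFunction[{"red", "green", "blue"}];

(* apply the assessment function *)
result = f["green"];

(* query the AssessmentResultObject for a property *)
result["AnswerCorrect"]
```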

Assign scores for each available answer:

Negative scores are considered incorrect:
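For example, with illustrative answers and scores (a negative score marks an incorrect answer):

```wolfram
(* each candidate answer carries its own score *)
f = AssessmentFunction[{"gold" -> 3, "silver" -> 2, "bronze" -> 1, "lead" -> -1}];

f["silver"]  (* correct, two points *)
f["lead"]    (* negative score: incorrect *)
```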

Specify answer information using an Association:

See the full assessment information:
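A sketch using the ansspec keys described in Details ("Score", "Explanation"); the answers are illustrative:

```wolfram
(* a full answer specification as an Association *)
f = AssessmentFunction[{
    "Paris" -> <|"Score" -> 1, "Explanation" -> "Paris is the capital of France."|>,
    "Lyon" -> <|"Score" -> 0, "Explanation" -> "Lyon is not the capital."|>
  }];

r = f["Lyon"];
r["Explanation"]  (* explanation text recorded for the matched answer *)
```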

Create an assessment that checks for a list of values as a single item:

Elements of the list are incorrect answers:

Only the full list will match:

Create an assessment function for a sorting question:

The provided answer and category must match to get a correct assessment:

Create an assessment for a question like "Name some Pokémon":

Any valid subset gives a correct answer with a score of one:

Use elementwise assessment to award points for each correct answer:

The score represents the number of correct answers:

Specify an answer comparison method:
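For instance, with an illustrative numeric key:

```wolfram
(* the second argument names one of the built-in comparison methods *)
f = AssessmentFunction[3.5, "Number"];

f[3.5]  (* compared numerically rather than by exact pattern match *)
```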

Specify a "HeldExpression" comparison method using HoldPattern to define the answer key values:

Check an expression held by Hold. The expression does not evaluate:

Specify a comparator function to create a spelling test:
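A sketch with illustrative words; StringMatchQ[answer, patt] is applied to each key entry in turn:

```wolfram
f = AssessmentFunction[{"accommodate", "necessary"},
   <|"Comparator" -> StringMatchQ|>];

f["accommodate"]  (* correct spelling *)
f["acommodate"]   (* misspelled: no key entry matches *)
```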

Specify a comparator function to create an assessment for geo locations near cities:

Assess a location and see the full assessment:

Define an assessment that depends only on the answer by giving Automatic as the key:
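A sketch in which the comparator (illustrative here) takes only the user's answer:

```wolfram
(* no key needed; the comparator alone decides correctness *)
f = AssessmentFunction[Automatic, <|"Comparator" -> (PrimeQ[#] &)|>];

f[7]  (* PrimeQ[7] is True: correct *)
f[8]  (* not prime: incorrect *)
```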

Define a selector function for the question "Name a city with a population closer to that of St. Louis than that of Chicago or Indianapolis":

Use elementwise assessment to assign partial credit:

Apply the assessment function to a list of values to assess each element:

The assessment contains information on each element:
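A sketch of the steps above, with an illustrative key and answer list:

```wolfram
(* each element of the answer list is assessed separately *)
f = AssessmentFunction[{1, 2, 3}, <|"ElementwiseAssessment" -> True|>];

(* two of the three elements match the key, earning partial credit *)
r = f[{1, 2, 5}];

r["Score"]  (* elementwise scores combined by the default "ScoreCombiner" *)
```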

Define custom combining functions for elementwise assessment:

Apply the assessment function to a list of values to assess each element:

The assessment contains information on each element:

Options  (3)

DistanceFunction  (1)

Define an assessment function for a question about the distance of the Sun:

The tolerance is applied linearly:

Specify a DistanceFunction to apply the tolerance logarithmically:
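One possible form, with an illustrative key value and tolerance; the custom distance compares orders of magnitude:

```wolfram
f = AssessmentFunction[1.496*10^8, "Number",
   DistanceFunction -> (Abs[Log10[#1] - Log10[#2]] &),
   Tolerance -> 0.05];

f[1.4*10^8]  (* close to the key value on a logarithmic scale *)
```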

MaxItems  (1)

Elementwise assessment computes the assessment for each element in the answer; this can be slow for some comparators:

Limit the number of elements to assess with MaxItems:
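For example, with an illustrative key and limit:

```wolfram
(* refuse to assess answers with more than five elements *)
f = AssessmentFunction[Range[10],
   <|"ElementwiseAssessment" -> True|>, MaxItems -> 5];

f[{1, 2, 3}]           (* assessed normally *)
f[{1, 2, 3, 4, 5, 6}]  (* more elements than MaxItems: gives a Failure *)
```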

Tolerance  (1)

Create an assessment function asking to name the value of Pi:

Approximate answers are marked as incorrect:

Use the Tolerance option to allow approximate answers:

The answer is marked as correct:
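A sketch of the steps above (the tolerance value is illustrative):

```wolfram
(* accept any numeric answer within 0.01 of the key value *)
f = AssessmentFunction[N[Pi], "Number", Tolerance -> 0.01];

f[3.14]  (* within tolerance: correct *)
f[3.5]   (* outside tolerance: incorrect *)
```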

Properties & Relations  (3)

Answers not included in the key are considered incorrect:

Zero points are awarded:
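For example, with an illustrative key:

```wolfram
f = AssessmentFunction["Dog"];

r = f["Fish"];  (* not in the key: incorrect *)
r["Score"]      (* zero points are awarded *)
```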

Answer keys support patterns:

See the assessment results:

Extract information about an AssessmentFunction using Information:

Retrieve specific values:

Wolfram Research (2020), AssessmentFunction, Wolfram Language function,




Wolfram Language. 2020. "AssessmentFunction." Wolfram Language & System Documentation Center. Wolfram Research.


Wolfram Language. (2020). AssessmentFunction. Wolfram Language & System Documentation Center. Retrieved from


@misc{reference.wolfram_2022_assessmentfunction, author="Wolfram Research", title="{AssessmentFunction}", year="2020", howpublished="\url{}", note=[Accessed: 31-May-2023 ]}


@online{reference.wolfram_2022_assessmentfunction, organization={Wolfram Research}, title={AssessmentFunction}, year={2020}, url={}, note=[Accessed: 31-May-2023 ]}