AssessmentFunction

AssessmentFunction[key]

represents a tool for assessing whether answers are correct according to the key.

AssessmentFunction[key,method]

uses the specified answer comparison method.

AssessmentFunction[key,f]

uses the function f to compare answers with the key.

AssessmentFunction[key,comp]

performs assessment using the custom assessment defined in the Association comp.

AssessmentFunction[obj]

represents an assessment function that performs assessment using the CloudObject obj.

AssessmentFunction[{obj,id}]

assesses the specified question within the CloudObject obj.

AssessmentFunction[] [answer]

gives an AssessmentResultObject representing the correctness of answer.

Details and Options

  • AssessmentFunction is commonly used within QuestionObject to define how to assess answers to a question.
  • The key accepts the following forms:
  • ans | answer matches the pattern ans
    {ans1,ans2,…} | answer is any of the ansi
    {{a1,a2,…}} | answer is the list {a1,a2,…}
  • Each possible answer ansi can have the following forms:
  • patt | pattern matching all correct responses
    patt→score | pattern and corresponding score to be awarded
    patt→ansspec | Association containing a complete answer specification
  • Using patt→score is equivalent to patt→<|"Score"→score|>.
  • The score should have either Boolean or numeric values. True and positive numeric scores denote correct answers, while False, zero and negative scores are incorrect.
  • The patti can be exact answer values or patterns against which the values of answer are compared.
  • In AssessmentFunction[{patt1,patt2,…}], when no scores are provided, all the patti are treated as correct. If a single patti is set to True or given a positive score, all other patti are treated as incorrect answers.
  • The full answer specification ansspec accepts the following keys:
  • "Score" (required) | award given for matching answers
    "AnswerCorrect" | whether the answer is considered correct
    "Category" | category corresponding to the answer, for sorting questions
    "Explanation" | text to be provided to the user
  • Answer comparison methods supported in AssessmentFunction[key,"method"] include the following "method" values, each with a corresponding distance function used to compare answers against the key within the Tolerance; None indicates that an exact match is required.
  • "Number" | Norm[#1-#2]& | scalar numeric values
    "String" | EditDistance | strings
    "Item" | None | any expression
    "HeldExpression" | None | expressions held without evaluation
    "ArithmeticResult" | None | answers to arithmetic exercises
    "PolynomialResult" | None | answers to polynomial exercises
    "CalculusResult" | None | answers to calculus exercises
    "CodeEquivalence" | None | code
    "Date" | DateDifference | dates
    "GeoPosition" | GeoDistance | geographic locations
    "Point" | Norm[#1-#2]& | geometric Point values
    "List" | None | an ordered list
    "UnorderedList" | None | a list where ordering is not important
    "Vector" | Norm[#1-#2]& | vectors
    "Color" | ColorDistance | color values
    "Quantity" | Norm[#1-#2]& | quantities with magnitude and unit
  • In AssessmentFunction[key,comp], comp is an Association. The accepted keys are:
  • "Comparator" | function to compare the provided answer to each pattern in the key
    "Selector" | function to select the matching pattern for the provided answer
    "ListAssessment" | method for assessing listed answers
    "ScoreCombiner" | function to combine elementwise "Score" values
    "AnswerCorrectCombiner" | function to combine elementwise "AnswerCorrect" values
  • AssessmentFunction[key,f] for a function f is equivalent to AssessmentFunction[key,<|"Comparator"→f|>].
  • Only one of "Comparator" or "Selector" should be provided. Using "Comparator"→compf computes compf[answer,patt] for each patt in the key, in order, and chooses the first patt for which the result is True. Common comparators include MatchQ, Greater, StringMatchQ and SameQ.
  • A custom comparator that takes only the user's answer as input can be used without specifying a key. In this case, Automatic is accepted as a key.
  • Using "Selector"→selectf computes selectf[{patt1,patt2,…},answer] and returns the patt corresponding to the selected answer. Common selectors include SelectFirst, Composition[First,Nearest] and Composition[First,TakeLargestBy].
  • When assessing listed answers AssessmentFunction[key,<|…,"ListAssessment"→method,…|>][{elem1,elem2,…}], the following values are supported by method:
  • "SeparatelyScoreElements" | assess each element of the answer against the key separately and combine the results
    "AllElementsOrdered" | check whether the elements of answer match the elements of key, in matching order
    "AllElementsOrderless" | check whether the elements of answer match the elements of key, in any order
    "WholeList" (default) | assess as an ordinary expression, applying the comparison to the full list {elem1,elem2,…}
  • For "SeparatelyScoreElements", each patt in the key should correspond to an individual element of the answer. This allows assigning scores for each element as described below. For all other "ListAssessment" methods, each patt in the key should contain a list.
  • Using "ListAssessment"→"SeparatelyScoreElements" assesses listed answers one element at a time. The "Score" and "AnswerCorrect" results for each element are combined using the "ScoreCombiner" and "AnswerCorrectCombiner" functions, respectively. These combiner functions are applied only when "SeparatelyScoreElements" is used.
  • AssessmentFunction accepts the following options:
  • DistanceFunction | Automatic | distance metric to use
    Tolerance | Automatic | distance within which to accept matching answers
    MaxItems | Infinity | limit on the number of elements in elementwise assessment
  • AssessmentFunction[key] is equivalent to AssessmentFunction[key,Automatic] and infers an answer comparison type from key.
  • Each answer comparison type corresponds to a predefined comparator or selector function. Usually, when no built-in notion of distance exists for the comparison type a "Comparator" of MatchQ is used.
  • When a notion of distance does exist for a comparison type, AssessmentFunction uses a "Selector" of First@*Nearest and accepts Tolerance and DistanceFunction options.
  • For separately scored elements, "AnswerCorrectCombiner" should take a list of Booleans representing the correctness of each element and return a single Boolean for the overall correctness of the answer. The default depends on the comparison method. The most common default value is AllTrue[#,TrueQ]&.
  • For separately scored elements, "ScoreCombiner" should take a list of numeric values representing the score of each element and return a total numeric score for the answer. The default depends on the comparison method. The most common default value is Total.
  • When separately scoring elements, if the number of elements given is greater than the value of MaxItems, AssessmentFunction gives a Failure.
  • Information works on AssessmentFunction and accepts the following prop values:
  • "DefaultQuestionInterface" | user interface implied by the key (e.g. "MultipleChoice", "ShortAnswer")
    "AnswerComparisonMethod" | expected type for the values (e.g. "Number", "GeoPosition")
    "Key" | key used to assess answers
  • Information[AssessmentFunction[],"Properties"] provides a full list of available prop values.
  • AssessmentFunction[CloudObject[]] performs the assessment remotely within the specified CloudObject. This prevents the modification of the assessment by the user providing the answers.

Examples


Basic Examples  (3)

Create an assessment function that will check for the answer "Dog":
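The original input cells are not reproduced on this page; a minimal sketch of the call being described might look like:

```wolfram
af = AssessmentFunction["Dog"];
af["Dog"]  (* returns an AssessmentResultObject marking the answer correct *)
af["Cat"]  (* marked incorrect *)
```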

Define an assessment function that gives 10 points for any answer over 100:
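A plausible reconstruction, using a condition pattern with a score rule (the exact pattern in the original notebook is an assumption):

```wolfram
af = AssessmentFunction[{x_ /; x > 100 -> 10}];
af[150]  (* matches the pattern, awarding a score of 10 *)
```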

Check the answer to a polynomial math question:

The factored form is marked incorrect:

An equivalent polynomial with reordered terms is correct:

Scope  (18)

Create an assessment function that awards two points for an even number:
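One way to write this, assuming a pattern-test key (the original input is not shown):

```wolfram
af = AssessmentFunction[{_?EvenQ -> 2}];
af[4]  (* even, so correct with a score of 2 *)
```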

Apply the assessment to an answer:

Define a grader for a calculus problem:

Code transformation rules attempt to determine if the code is equivalent:

Equivalent representations are also correct:

Attempting to give the unevaluated question as an answer gives incorrect:

Make an assessment function that checks if a user's code is equivalent to the answer key:

Code transformation rules attempt to determine if the code is equivalent:

Answers that are not equivalent are incorrect:

Create an assessment function that accepts any of a list of values:
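For instance, with hypothetical answer values (the originals are not shown):

```wolfram
af = AssessmentFunction[{"red", "green", "blue"}];
af["green"]  (* any listed value is assessed as correct *)
```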

Apply the assessment function:

See the results:

Assign scores for each available answer:
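Following the patt→score form described in the details, this might look like (the answer values and scores here are illustrative):

```wolfram
af = AssessmentFunction[{"cat" -> 2, "dog" -> 1, "eel" -> -1}];
af["dog"]  (* correct, with a score of 1 *)
```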

Negative scores are considered incorrect:

Specify answer information using an Association:

See the full assessment information:

Create an assessment that checks for a list of values as a single item:
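Following the {{a1,a2,…}} key form, a sketch with assumed values:

```wolfram
af = AssessmentFunction[{{1, 2, 3}}];
af[{1, 2, 3}]  (* only the full list matches *)
af[1]          (* an individual element is incorrect *)
```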

Elements of the list are incorrect answers:

Only the full list will match:

Create an assessment function for a sorting question:

The provided answer and category must match to get a correct assessment:

Create an assessment for a question like "Name some Pokémon":

Any valid subset gives a correct answer with a score of one:

Award points for each correct answer by specifying a list assessment method:

The score represents the number of correct answers:

Specify an answer comparison method:

Specify a "HeldExpression" comparison method using HoldPattern to define the answer key values:
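A sketch of such a key, with an assumed held expression:

```wolfram
af = AssessmentFunction[{HoldPattern[2 + 2]}, "HeldExpression"];
af[Hold[2 + 2]]  (* compared without evaluating either side *)
```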

Check an expression held by Hold. The expression does not evaluate:

Specify a comparator function to create a spelling test:

Specify a comparator function to create an assessment for geo locations near cities:

Assess a location and see the full assessment:

Define an assessment that depends only on the answer by giving Automatic as the key:
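For example, with a hypothetical answer-only comparator:

```wolfram
af = AssessmentFunction[Automatic, PrimeQ];
af[7]  (* the comparator is applied directly to the answer *)
```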

Define a selector function for the question "Name a city with a population closer to that of St. Louis than that of Chicago or Indianapolis":

Use elementwise assessment to assign partial credit:

Apply the assessment function to a list of values to assess each element:

The assessment contains information on each element:

Define custom combining functions for combining the assessments of each element:

Apply the assessment function to a list of values to assess each element:

The assessment contains information on each element:

Options  (6)

DistanceFunction  (1)

Define an assessment function for a question about the distance of the Sun:

The tolerance is applied linearly:

Specify a DistanceFunction to apply the tolerance logarithmically:

MaxItems  (4)

Using "ListAssessment"→"SeparatelyScoreElements" computes the assessment for each element in the answer; this can be slow for some comparators:

Limit the number of elements to assess with MaxItems:

Create an assessment function that scores each element separately. Note that the key contains scores for each possible value of the elements:

Assess a listed answer:

See the full result. Note that "ElementInformation" contains assessments for each element and an overall "Score" and "AnswerCorrect" value are computed for the full answer:

Create an assessment function that assesses a listed answer by comparing each element of the answer to the corresponding element of the key:

Assess an answer. Note that the tolerance is applied to each element:

Changing the order of the elements gives an incorrect result:

Create an assessment function that assesses a listed answer by comparing each element of the answer to any element of the key:

Assess answers with different element orders. The answer is correct as long as each element in the answer matches a distinct element in the key:

Tolerance  (1)

Create an assessment function asking to name the value of Pi:

Approximate answers are marked as incorrect:

Use the Tolerance option to allow approximate answers:
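A sketch of the option usage, assuming the "Number" comparison method:

```wolfram
af = AssessmentFunction[{N[Pi]}, "Number", Tolerance -> 0.01];
af[3.14]  (* within the tolerance, so marked correct *)
```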

The answer is marked as correct:

Applications  (1)

Create a QuestionObject for a polynomial exercise:

Properties & Relations  (6)

Answers not included in the key are considered incorrect:

Zero points are awarded:

Specify a multiple-choice question with one correct answer using a concise syntax:
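As described in the details, marking a single choice as True treats the others as incorrect; with hypothetical choices:

```wolfram
af = AssessmentFunction[{"Paris" -> True, "London", "Berlin"}];
af["Paris"]  (* the one correct choice *)
```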

Answer keys support patterns:

See the assessment results:

Extract information about an AssessmentFunction using Information:

Retrieve specific values:

See the comparisons that occur at the element level when using "ListAssessment"→"AllElementsOrdered". Comparisons are made between the elements of the answer and the elements of the key values:

MatchQ[1,_Integer]
True
MatchQ["hello",_String]
True
MatchQ[3,_Integer]
True

When using "ListAssessment"→"AllElementsOrderless", more comparisons are performed:

MatchQ[1,_Integer]
True
MatchQ[1,_String]
False
MatchQ[1,_Integer]
True
MatchQ["hello",_Integer]
False
MatchQ["hello",_String]
True
MatchQ["hello",_Integer]
False
MatchQ[3,_Integer]
True
MatchQ[3,_String]
False
MatchQ[3,_Integer]
True

When using "ListAssessment"→"SeparatelyScoreElements", comparisons are made to each value of the key instead of their elements:

MatchQ[1,_?OddQ]
True
MatchQ["hello",_?OddQ]
False
MatchQ["hello",_String]
True
MatchQ[6,_?OddQ]
False
MatchQ[6,_String]
False
MatchQ[6,_?EvenQ]
True

Create an "AllElementsOrderless" assessment function with overlapping values in the key:

If each element of the answer does not match a distinct element of the key, it is incorrect:

When a distinct element of the key can match each element of the answer, it is correct:

Text

Wolfram Research (2020), AssessmentFunction, Wolfram Language function, https://reference.wolfram.com/language/ref/AssessmentFunction.html (updated 2023).

CMS

Wolfram Language. 2020. "AssessmentFunction." Wolfram Language & System Documentation Center. Wolfram Research. Last Modified 2023. https://reference.wolfram.com/language/ref/AssessmentFunction.html.

APA

Wolfram Language. (2020). AssessmentFunction. Wolfram Language & System Documentation Center. Retrieved from https://reference.wolfram.com/language/ref/AssessmentFunction.html

BibTeX

@misc{reference.wolfram_2023_assessmentfunction, author="Wolfram Research", title="{AssessmentFunction}", year="2023", howpublished="\url{https://reference.wolfram.com/language/ref/AssessmentFunction.html}", note="Accessed: 28-March-2024"}

BibLaTeX

@online{reference.wolfram_2023_assessmentfunction, organization={Wolfram Research}, title={AssessmentFunction}, year={2023}, url={https://reference.wolfram.com/language/ref/AssessmentFunction.html}, note={Accessed: 28-March-2024}}