Individual results


View in-depth performance of a single language model on a single test suite.

Region-by-region surprisal
Sample item for Filler-Gap Dependencies (extraction from prepositional phrase)
Item | Condition  | prefix | comp | np1       | verb    | np2      | prep        | np3        | end
1    | what_nogap | I know | who  | our uncle | grabbed | the food | in front of | the guests | at the holiday party
1    | that_nogap | I know | that | our uncle | grabbed | the food | in front of | the guests | at the holiday party
1    | what_gap   | I know | who  | our uncle | grabbed | the food | in front of |            | at the holiday party
1    | that_gap   | I know | that | our uncle | grabbed | the food | in front of |            | at the holiday party
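A region's surprisal is the sum of the surprisals of the tokens it contains. Below is a minimal sketch of that aggregation; the helper name, the region word counts, and all surprisal values are hypothetical, and in practice the token-level surprisals would come from the language model being evaluated.

```python
# Hypothetical helper: given token-level surprisals and an ordered region
# segmentation, compute each region's surprisal by summing the surprisals
# of the tokens that fall inside it.
def region_surprisals(tokens, surprisals, regions):
    """tokens: list of word strings; surprisals: parallel list of values;
    regions: ordered dict mapping region name -> number of words."""
    out, i = {}, 0
    for name, n in regions.items():
        out[name] = sum(surprisals[i:i + n])
        i += n
    return out

# Toy example: the what_gap sentence split into the suite's regions.
tokens = ("I know who our uncle grabbed the food "
          "in front of at the holiday party").split()
# Made-up surprisal values, purely for illustration.
surps = [2.1, 3.0, 4.2, 5.1, 6.3, 4.0, 3.2, 4.8,
         2.5, 1.9, 3.3, 7.0, 2.0, 5.5, 4.4]
regions = {"prefix": 2, "comp": 1, "np1": 2, "verb": 1, "np2": 2,
           "prep": 3, "np3": 0, "end": 4}  # np3 is empty in gap conditions
print(region_surprisals(tokens, surps, regions))
```

Note that in the gap conditions the np3 region contains no words, so its surprisal is simply zero.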
Prediction performance for GPT-2 XL on Filler-Gap Dependencies (extraction from prepositional phrase)
Accuracy | Prediction | Description
91.67% | what_nogap.np3 > that_nogap.np3 | We expect surprisal at the NP3 region to be lower in the that_nogap condition than in the what_nogap condition, because an upstream wh-word should set up an expectation for a gap.
91.67% | what_gap.end < that_gap.end | We expect surprisal at the end region to be lower in the what_gap condition than in the that_gap condition, because gaps must be licensed by upstream wh-words (such as "what").
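The accuracy column can be read as the fraction of items on which the prediction's inequality holds. A minimal sketch under that assumption, with made-up surprisal values (the item dicts and helper below are illustrative, not the evaluator's actual data format):

```python
# Hypothetical sketch: evaluate a prediction such as
# "what_nogap.np3 > that_nogap.np3" on every item and report the
# fraction of items on which the inequality holds.
def accuracy(items, prediction):
    """items: list of dicts mapping 'condition.region' -> surprisal;
    prediction: function from one item dict to bool."""
    hits = sum(1 for it in items if prediction(it))
    return hits / len(items)

# Made-up region surprisals for three items (values are illustrative only).
items = [
    {"what_nogap.np3": 9.1, "that_nogap.np3": 6.2},
    {"what_nogap.np3": 8.4, "that_nogap.np3": 7.9},
    {"what_nogap.np3": 5.0, "that_nogap.np3": 6.5},  # prediction fails here
]
pred = lambda it: it["what_nogap.np3"] > it["that_nogap.np3"]
print(f"{accuracy(items, pred):.2%}")  # → 66.67%
```

With 24 items per suite, an accuracy of 91.67% corresponds to the prediction holding on 22 of the 24 items.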