
Evaluating Language Models as Risk Scores

2024

Conference Paper



Current question-answering benchmarks predominantly focus on accuracy in realizable prediction tasks. Conditioned on a question and answer key, does the most likely token match the ground truth? Such benchmarks necessarily fail to evaluate language models' ability to quantify outcome uncertainty. In this work, we focus on the use of language models as risk scores for unrealizable prediction tasks. We introduce folktexts, a software package to systematically generate risk scores using large language models, and evaluate them against benchmark prediction tasks. Specifically, the package derives natural language tasks from US Census data products, inspired by popular tabular data benchmarks. A flexible API allows for any task to be constructed out of 28 census features whose values are mapped to prompt-completion pairs. We demonstrate the utility of folktexts through a sweep of empirical insights on 16 recent large language models, inspecting risk scores, calibration curves, and diverse evaluation metrics. We find that zero-shot risk scores have high predictive signal while being widely miscalibrated: base models overestimate outcome uncertainty, while instruction-tuned models underestimate uncertainty and generate over-confident risk scores.

Author(s): Cruz, André F and Hardt, Moritz and Mendler-Dünner, Celestine
Book Title: arXiv preprint arXiv:2407.14614
Year: 2024
Month: September

Department(s): Social Foundations of Computation
Bibtex Type: Conference Paper (conference)

State: Accepted

Links: arXiv (https://arxiv.org/abs/2407.14614)

BibTeX

@conference{cruz2024evaluating,
  title = {Evaluating Language Models as Risk Scores},
  author = {Cruz, Andr{\'e} F and Hardt, Moritz and Mendler-D{\"u}nner, Celestine},
  booktitle = {arXiv preprint arXiv:2407.14614},
  month = sep,
  year = {2024},
  month_numeric = {9}
}