Benchmarks

Definition

Benchmarks are standardized datasets and evaluation protocols (e.g. GLUE and SuperGLUE for NLP; MMLU for broad knowledge; HumanEval for code). They enable comparison across models and over time.

They depend on evaluation metrics and fixed splits so that results are comparable. Overfitting to benchmarks is a known problem; supplement them with out-of-distribution and human evaluation when deploying LLMs or production systems.
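As a minimal illustration of why fixed splits matter (the hash-based assignment rule below is an assumption for the sketch, not any specific benchmark's protocol): when split membership is deterministic, every evaluator reconstructs exactly the same test set.

```python
import hashlib

def in_test_split(example_id: str, test_fraction: float = 0.2) -> bool:
    """Deterministically assign an example to the test split by hashing its ID.

    The assignment depends only on the ID, so anyone re-running the
    evaluation gets the same split, which keeps results comparable.
    """
    digest = hashlib.sha256(example_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % 1000 < int(test_fraction * 1000)
```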

How it works

A model is run on a benchmark dataset (fixed prompts or inputs, standard split). Metrics (e.g. accuracy, pass@k) are computed per task and often averaged; results are reported on a leaderboard or in papers. Protocols define which inputs to use, how to parse outputs, and which metrics to report. Reusing the same benchmark over time lets the community track progress. Care is needed: models can overfit to benchmark quirks, and benchmarks may not reflect real-world quality, so use them as one signal among others.
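As a rough sketch of this loop in Python (the toy `benchmark` dict, the exact-match parsing rule, and the lookup-table `model` are illustrative assumptions, not any real benchmark's protocol), together with the standard unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021):

```python
import math
from statistics import mean

def evaluate(model, benchmark):
    """Run `model` on each task's fixed examples; macro-average accuracy.

    `benchmark` maps task name -> list of (input, expected) pairs, standing
    in for a standard split; `model` is any callable from input to output.
    """
    per_task = {}
    for task, examples in benchmark.items():
        correct = sum(
            model(prompt).strip() == expected  # fixed parsing rule: exact match
            for prompt, expected in examples
        )
        per_task[task] = correct / len(examples)
    return per_task, mean(per_task.values())  # macro average across tasks

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples passes,
    given that c of n generated samples passed (Chen et al., 2021)."""
    if n - c < k:
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# Toy example: two fixed tasks and a lookup-table "model".
benchmark = {
    "arithmetic": [("2+2=", "4"), ("3*3=", "9")],
    "capitals": [("Capital of France?", "Paris")],
}
model = lambda prompt: {"2+2=": "4", "3*3=": "9"}.get(prompt, "Paris")
print(evaluate(model, benchmark))  # per-task accuracy and macro average
print(pass_at_k(n=20, c=5, k=10))  # pass@10 when 5 of 20 samples passed
```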

Use cases

Benchmarks give a common yardstick to compare models and methods; use them together with task-specific and human evaluation.

  • Comparing NLP models (e.g. GLUE, SuperGLUE, MMLU)
  • Evaluating code generation (e.g. HumanEval) or reasoning
  • Tracking model and method progress over time

External documentation

See also