trec_eval
Description:
Evaluation software used in the Text Retrieval Conference
Type: Formula  |  Tracked Since: Dec 28, 2025
Links: Homepage  |  formulae.brew.sh
Category: Developer tools
Tags: information-retrieval evaluation benchmarking search research
Install: brew install trec_eval
About:
trec_eval is the standard evaluation tool for information retrieval systems, developed for the Text Retrieval Conference (TREC). It computes a wide range of performance metrics, including precision, recall, and mean average precision, by comparing a system's ranked results against human-judged relevance data (qrels). It is widely used for benchmarking search algorithms and for rigorous, reproducible evaluation in research and production settings.
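As a minimal sketch of the inputs (the query ID, document IDs, and file names here are illustrative): trec_eval reads a qrels file of human judgments and a run file of ranked results, both whitespace-separated TREC-format text files, and prints summary metrics to standard output.
    qrels.txt  (query_id  iteration  doc_id  relevance):
      301 0 FBIS3-10082 1
      301 0 FBIS3-10169 0
    run.txt  (query_id  Q0  doc_id  rank  score  run_tag):
      301 Q0 FBIS3-10082 1 12.38 myrun
      301 Q0 FBIS3-10169 2 11.07 myrun
    trec_eval qrels.txt run.txt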
Key Features:
  • Computes standard IR metrics (Precision, Recall, MAP, NDCG)
  • Handles multi-level relevance judgments
  • Produces per-query and summary output for detailed analysis (see the usage sketch after this list)
  • Supports multi-query evaluation
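A hedged usage sketch of the relevant flags (trec_eval 9.x; file names as above): -q prints per-query results in addition to the summary, -m selects individual measures such as map or ndcg, and -c averages over every query in the qrels rather than only those present in the run.
    trec_eval -q -c -m map -m ndcg qrels.txt run.txt
With -c, a run that omits a judged query is scored zero for it, which keeps averages comparable across systems.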
Use Cases:
  • Benchmarking search engine ranking algorithms
  • Academic research in information retrieval
  • Validating ML models for search relevance
Alternatives:
  • pytrec_eval – Python binding for trec_eval; faster for batch evaluation in Python pipelines
  • ranx – Modern Python library for ranking evaluation and fusion, with Numba-accelerated metrics
Version History
Detected | Version | Rev | Change | Commit