MLCommons Science Benchmark Catalog
We are pleased to announce the launch of the MLCommons Science Benchmark Catalog. This resource provides a comprehensive, filterable, searchable, and sortable index of scientific benchmarks.
The catalog is designed to support the scientific community in documenting and classifying AI-for-science benchmarks, making it easier for researchers to discover existing benchmarks and contribute new ones. By providing a standardized way to track benchmark specifications and datasets, we aim to improve the reproducibility and comparability of AI models across scientific domains.
You can explore the catalog here: https://mlcommons-science.github.io/benchmark/