GIFT-Eval: A Benchmark for General Time Series Forecasting