---
configs:
- config_name: default
  data_files:
  - split: test
    path: data/**/*.parquet
- config_name: person
  data_files:
  - split: test
    path: data/person/*.parquet
- config_name: sports
  data_files:
  - split: test
    path: data/sports/*.parquet
- config_name: animal
  data_files:
  - split: test
    path: data/animal/*.parquet
- config_name: misc
  data_files:
  - split: test
    path: data/misc/*.parquet
- config_name: dance
  data_files:
  - split: test
    path: data/dance/*.parquet
license: odc-by
---

# Molmo2-VideoTrackEval

Molmo2-VideoTrackEval is an evaluation benchmark for video point tracking, containing human-annotated ground-truth expressions. It includes segmentation masks for evaluating whether predicted points fall within the correct object regions.

Currently, there are five categories for evaluation:
- animal
- dance
- sports
- person
- misc

This benchmark is part of the [Molmo2 dataset collection](https://huggingface.co/collections/allenai/molmo2-data) and is used to evaluate the [Molmo2 family of models](https://huggingface.co/collections/allenai/molmo2) on video object tracking via point trajectories.

Quick links:
- 📃 [Paper](https://allenai.org/papers/molmo2)
- 🎥 [Blog with Videos](https://allenai.org/blog/molmo2)

## Usage

```python
from datasets import load_dataset

# Load the entire evaluation dataset
ds = load_dataset("allenai/Molmo2-VideoTrackEval", split="test")

# Load a specific benchmark subset by config name
animal = load_dataset("allenai/Molmo2-VideoTrackEval", "animal", split="test")
dance = load_dataset("allenai/Molmo2-VideoTrackEval", "dance", split="test")
sports = load_dataset("allenai/Molmo2-VideoTrackEval", "sports", split="test")
person = load_dataset("allenai/Molmo2-VideoTrackEval", "person", split="test")
misc = load_dataset("allenai/Molmo2-VideoTrackEval", "misc", split="test")
```

## Available Configs

| Config | Dataset | Description |
|--------|---------|-------------|
| `default` | All | All evaluation data combined |
| `animal` | APTv2 | Animal tracking benchmark |
| `dance` | dancetrack | Dancer tracking benchmark |
| `sports` | sportsmot | Sports player tracking benchmark |
| `person` | personpath22 | Person tracking benchmark |
| `misc` | sav | Miscellaneous video benchmark |

## Data Format

Each row contains tracking annotations for one or more objects in a video clip:

| Field | Description |
|-------|-------------|
| `id` | Unique identifier for this annotation |
| `video` | Video filename |
| `clip` | Trimmed clip ID |
| `video_dataset` | Source dataset name (e.g., 'dancetrack', 'sportsmot') |
| `video_source` | Video directory path (can be ignored) |
| `exp` | Text expression describing the tracked object(s) |
| `obj_id` | List of object IDs per video |
| `mask_id` | List of mask IDs corresponding to the tracked objects, starting from 0 |
| `masks` | List of segmentation masks per object for evaluation. Each entry contains `object_id` and `masks` (used to verify whether predicted points fall within the ground-truth object region) |
| `points` | List of point trajectories per object. Each entry contains `object_id` and `points` (a list of [x, y] coordinates per frame) |
| `segments` | List of segment annotations per object. Each entry contains `object_id` and `segments` |
| `start_frame` | Starting frame index for this clip |
| `end_frame` | Ending frame index for this clip |
| `w` | Video width |
| `h` | Video height |
| `n_frames` | Number of frames in the clip |
| `fps` | Frames per second |

**Important:** `start_frame` and `end_frame` indicate which portion of the source video to use. You need to trim the video to this range; the annotations correspond to frames within `[start_frame, end_frame]`, not the entire video.

### Evaluation with Masks

The `masks` field contains ground-truth segmentation masks that can be used to evaluate tracking predictions. A predicted point is considered correct if it falls within the segmentation mask of the target object for that frame.
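For reference, below is a minimal sketch of this point-in-mask check. It is not the released evaluation code: it assumes the per-frame masks have already been decoded into binary NumPy arrays of shape `(h, w)` (the decoding step depends on how the masks are stored in the parquet files and is not shown), and the helper names `point_in_mask` and `point_accuracy` are illustrative.

```python
import numpy as np

def point_in_mask(point, mask):
    """Return True if an (x, y) point falls inside a binary mask of shape (h, w)."""
    x, y = point
    h, w = mask.shape
    xi, yi = int(round(x)), int(round(y))
    if not (0 <= xi < w and 0 <= yi < h):
        return False  # a point outside the frame counts as a miss
    return bool(mask[yi, xi])

def point_accuracy(pred_points, gt_masks):
    """Fraction of predicted points that land inside the matching per-frame masks.

    pred_points: list of (x, y) predictions, one per frame in [start_frame, end_frame]
    gt_masks:    list of binary (h, w) arrays for the same object over the same frames
    """
    hits = sum(point_in_mask(p, m) for p, m in zip(pred_points, gt_masks))
    return hits / max(len(gt_masks), 1)

# Toy usage: a 4x4 mask whose object occupies the top-left 2x2 block
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
print(point_in_mask((1.0, 1.0), mask))                 # True
print(point_accuracy([(1, 1), (3, 3)], [mask, mask]))  # 0.5
```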
## Folder Structure

```
Molmo2-VideoTrackEval/
├── README.md
└── data/
    ├── animal/
    │   └── APTv2_point_tracks_with_masks.parquet
    ├── dance/
    │   └── dancetrack_point_tracks_with_masks.parquet
    ├── sports/
    │   └── sportsmot_point_tracks_with_masks.parquet
    ├── person/
    │   └── personpath22_point_tracks_with_masks.parquet
    └── misc/
        └── sav_point_tracks_with_masks.parquet
```

## Video Sources

The table below lists the sources of the third-party datasets used or referenced in curating the benchmark data for Molmo2-VideoTrackEval. We do not provide video files or share the original raw data from datasets with restrictions on use and distribution according to the source license.

| Dataset | Category | Download | Dataset License |
|---------|----------|----------|-----------------|
| APTv2 | Animals | [APTv2](https://github.com/ViTAE-Transformer/APTv2) | Apache 2.0 |
| dancetrack | Dancers | [DanceTrack](https://github.com/DanceTrack/DanceTrack?tab=readme-ov-file#dataset) | Non-commercial research use only |
| sportsmot | Sports | [SportsMOT](https://codalab.lisn.upsaclay.fr/competitions/12424#participate) | CC BY-NC 4.0 |
| personpath22 | Person | [PersonPath22](https://amazon-science.github.io/tracking-dataset/personpath22.html) | CC BY-NC 4.0 |
| sav | Misc | [SA-V](https://ai.meta.com/datasets/segment-anything-video/) (frames sampled at 6 fps from 24 fps video) | CC BY 4.0 |

## License

This dataset is licensed under ODC-BY-1.0. It is intended for research and educational use in accordance with Ai2's [Responsible Use Guidelines](https://allenai.org/responsible-use). Please refer to the Video Sources section for the original datasets that provide the videos used to generate the segmentations and point tracks for this dataset. All use of the videos and original data from these datasets is subject to the licenses and terms of use provided by the sources. Please check the sources to determine whether they are appropriate for your use case.