SAGE-MM-Qwen2.5-VL-7B-SFT_RL-GGUF
SAGE-MM-Qwen2.5-VL-7B-SFT_RL from allenai is a 7B-parameter vision-language model refined with reinforcement learning (RL). It is post-trained from SAGE-MM-Qwen2.5-VL-7B-SFT (itself fine-tuned from Qwen/Qwen2.5-VL-7B-Instruct) and serves as the core decision-maker in the SAGE (Smart Any-Horizon Agent) system for long-video reasoning, improving on the SFT baseline. The model operates in two stages: Stage 1 inspects the initially sampled frames and metadata and routes a query as either single-turn (answered directly) or multi-turn (tools required); Stage 2 loops JSON-formatted tool calls for web search, timestamped ASR transcription, event grounding, video frame/subclip extraction, and visual analysis, iteratively building context until the query is resolved. It is tailored for Q&A over videos of arbitrary length, spanning sports, narratives, events, and timelines beyond fixed horizons, and requires the SAGE GitHub runtime for tool parsing, execution, and observation feedback. The model delivers state-of-the-art results on benchmarks such as MINERVA and is released under Apache 2.0 for research and educational use per Ai2 guidelines; the GGUF quantizations provided here enable efficient local deployment.
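As a rough illustration of the Stage-2 loop, a tool call emitted by the model might look like the JSON object in the sketch below. This is a hypothetical example: the tool name, argument fields, and dispatch logic are defined by the SAGE runtime, not by this card.

```python
import json

# Hypothetical sketch of a Stage-2 tool call the policy could emit.
# The tool name ("extract_subclip") and argument fields are illustrative only;
# the real schema lives in the SAGE GitHub runtime.
tool_call = {
    "tool": "extract_subclip",  # e.g. web_search, asr_transcribe, ground_event, analyze_frames
    "arguments": {"start_s": 1830.0, "end_s": 1860.0},
}

# The runtime parses the JSON, executes the tool, and feeds the observation
# back to the model as context for the next turn.
print(json.dumps(tool_call, indent=2))
```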
SAGE-MM-Qwen2.5-VL-7B-SFT_RL [GGUF]
| File Name | Quant Type | File Size | File Link |
|---|---|---|---|
| SAGE-MM-Qwen2.5-VL-7B-SFT_RL.IQ4_XS.gguf | IQ4_XS | 4.25 GB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT_RL.Q2_K.gguf | Q2_K | 3.02 GB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT_RL.Q3_K_L.gguf | Q3_K_L | 4.09 GB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT_RL.Q3_K_M.gguf | Q3_K_M | 3.81 GB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT_RL.Q3_K_S.gguf | Q3_K_S | 3.49 GB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT_RL.Q4_K_M.gguf | Q4_K_M | 4.68 GB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT_RL.Q4_K_S.gguf | Q4_K_S | 4.46 GB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT_RL.Q5_K_M.gguf | Q5_K_M | 5.44 GB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT_RL.Q5_K_S.gguf | Q5_K_S | 5.32 GB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT_RL.Q6_K.gguf | Q6_K | 6.25 GB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT_RL.Q8_0.gguf | Q8_0 | 8.1 GB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT_RL.f16.gguf | F16 | 15.2 GB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT_RL.mmproj-Q8_0.gguf | mmproj-Q8_0 | 856 MB | Download |
| SAGE-MM-Qwen2.5-VL-7B-SFT_RL.mmproj-f16.gguf | mmproj-f16 | 1.35 GB | Download |
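To fetch a single quant plus the multimodal projector without cloning the whole repository, `huggingface_hub`'s `hf_hub_download` can be used; the repo ID and file names below are taken from the table above, and the choice of Q4_K_M is just one common quality/size trade-off.

```python
from huggingface_hub import hf_hub_download

repo = "prithivMLmods/SAGE-MM-Qwen2.5-VL-7B-SFT_RL-GGUF"

# Download the Q4_K_M quant of the language model weights ...
model_path = hf_hub_download(
    repo_id=repo,
    filename="SAGE-MM-Qwen2.5-VL-7B-SFT_RL.Q4_K_M.gguf",
)

# ... and the vision projector, which GGUF runtimes load alongside the LLM weights.
mmproj_path = hf_hub_download(
    repo_id=repo,
    filename="SAGE-MM-Qwen2.5-VL-7B-SFT_RL.mmproj-Q8_0.gguf",
)

print(model_path)
print(mmproj_path)
```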
Quants Usage
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
ikawrakow has published a handy graph comparing some lower-quality quant types (lower is better).
Model tree for prithivMLmods/SAGE-MM-Qwen2.5-VL-7B-SFT_RL-GGUF
Base model: Qwen/Qwen2.5-VL-7B-Instruct