About

static quants of https://huggingface.co/internlm/CapRL-Qwen3VL-2B

weighted/imatrix quants are available at https://huggingface.co/internlm/CapRL-Qwen3VL-2B-GGUF

Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files.
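
For a quick local test, a minimal Python sketch along these lines should work, assuming `huggingface_hub` and `llama-cpp-python` are installed; the quant filename below is a guess at the repo's naming scheme, not a confirmed file name (check the repo's file listing):

```python
# Minimal sketch: download one quant from the repo and run a text completion.
# Requires: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="internlm/CapRL-Qwen3VL-2B-GGUF",
    filename="CapRL-Qwen3VL-2B.Q4_K_M.gguf",  # hypothetical filename; verify in the repo
)

llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Describe what a dense image caption should contain.", max_tokens=64)
print(out["choices"][0]["text"])
```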

Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| GGUF | mmproj-Q8_0 | 0.5 | multi-modal supplement |
| GGUF | mmproj-f16 | 0.9 | multi-modal supplement |
| GGUF | Q2_K | 0.8 | |
| GGUF | Q4_K_S | 1.2 | fast, recommended |
| GGUF | Q4_K_M | 1.2 | fast, recommended |
| GGUF | Q6_K | 1.6 | very good quality |
| GGUF | Q8_0 | 2.2 | fast, best quality |
| GGUF | f16 | 4.1 | 16 bpw, overkill |
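
For image captioning, a text quant from the table needs to be paired with one of the mmproj files (the multi-modal supplement) in a llama.cpp build with multimodal support for this architecture. A hedged sketch of fetching both files follows; the filenames are assumptions based on the table entries, not confirmed repo contents:

```python
# Download a text quant plus the multi-modal projector (mmproj) file.
# Both filenames are hypothetical; verify them against the repo's file listing.
from huggingface_hub import hf_hub_download

repo = "internlm/CapRL-Qwen3VL-2B-GGUF"
model_path = hf_hub_download(repo_id=repo, filename="CapRL-Qwen3VL-2B.Q4_K_M.gguf")
mmproj_path = hf_hub_download(repo_id=repo, filename="CapRL-Qwen3VL-2B.mmproj-f16.gguf")

# The two paths are then handed to a multimodal-capable llama.cpp runtime:
# the quant as the main model, the mmproj file as the vision projector.
print(model_path, mmproj_path)
```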

Citation

If you find this project useful, please cite:

@article{xing2025caprl,
  title={{CapRL}: Stimulating Dense Image Caption Capabilities via Reinforcement Learning},
  author={Xing, Long and Dong, Xiaoyi and Zang, Yuhang and Cao, Yuhang and Liang, Jianze and Huang, Qidong and Wang, Jiaqi and Wu, Feng and Lin, Dahua},
  journal={arXiv preprint arXiv:2509.22647},
  year={2025}
}