QTuneVL1.5-3B, developed by the Reconova AI Lab (Leader: Jia Baozhi; Team members: Wang Hanchao, Chen Mingmu, Lin Bingqi, et al.) and BDAA-Lab

Introduction

We are pleased to introduce QTuneVL1.5-3B, the latest addition to Reconova AI Lab's series of multimodal large language models. Built upon Qwen2.5-VL-3B, the model has been further enhanced through RLVR (reinforcement learning with verifiable rewards) training using the recent GSPO algorithm.
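For context, GSPO (Group Sequence Policy Optimization) replaces the token-level importance ratios of GRPO-style methods with a single sequence-level ratio. The following is a sketch of the GSPO objective as published by its authors; it is not a specification of the exact rewards or hyperparameters used in our training:

$$
s_i(\theta) = \left(\frac{\pi_\theta(y_i \mid x)}{\pi_{\theta_\mathrm{old}}(y_i \mid x)}\right)^{1/|y_i|},
\qquad
\hat{A}_i = \frac{r_i - \operatorname{mean}(\{r_j\}_{j=1}^{G})}{\operatorname{std}(\{r_j\}_{j=1}^{G})}
$$

$$
\mathcal{J}_\mathrm{GSPO}(\theta) = \mathbb{E}\left[\frac{1}{G}\sum_{i=1}^{G} \min\!\Big(s_i(\theta)\,\hat{A}_i,\ \operatorname{clip}\big(s_i(\theta),\, 1-\varepsilon,\, 1+\varepsilon\big)\,\hat{A}_i\Big)\right]
$$

where $G$ is the group size, $r_i$ is the verifiable reward for response $y_i$ to query $x$, and $\varepsilon$ is the clipping range.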

The model is trained mainly on reasoning datasets, but it still maintains proficiency across a range of general tasks, achieving overall performance superior to that of the base model.
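Since QTuneVL1.5-3B keeps the Qwen2.5-VL architecture, it should load through the standard Qwen2.5-VL interface in Hugging Face Transformers. A minimal inference sketch, assuming the checkpoint ships a stock Qwen2.5-VL config and processor (the image path and prompt below are placeholders):

```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load QTuneVL1.5-3B with the stock Qwen2.5-VL classes.
model_id = "hanchaow/QTuneVL1_5-3B"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# Chat-style prompt with one image; "demo.jpg" is a placeholder path.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "demo.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

# Generate and decode only the newly produced tokens.
generated_ids = model.generate(**inputs, max_new_tokens=128)
trimmed = [
    out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)
]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```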

Architecture:

  • ViT: QwenViT
  • Projector: 2-layer MLP
  • LLM: Qwen2.5-3B
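As a rough illustration of how these pieces connect, the projector maps ViT patch features into the LLM embedding space before they are interleaved with text tokens. A minimal sketch of a 2-layer MLP projector; the hidden sizes and activation are illustrative assumptions, not the exact released configuration:

```python
import torch
import torch.nn as nn

class VisionProjector(nn.Module):
    """2-layer MLP mapping ViT patch features into the LLM embedding space.

    The dimensions below (1280 -> 2048) are placeholders for illustration,
    not the exact sizes used in QTuneVL1.5-3B.
    """

    def __init__(self, vit_dim: int = 1280, llm_dim: int = 2048):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(vit_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, num_patches, vit_dim)
        return self.mlp(patch_features)  # (batch, num_patches, llm_dim)

# Example: project 256 image patches into the LLM embedding space.
projector = VisionProjector()
visual_tokens = projector(torch.randn(1, 256, 1280))
print(visual_tokens.shape)  # torch.Size([1, 256, 2048])
```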

Evaluation

We evaluate on the eight benchmarks specified by the OpenCompass leaderboard, using VLMEvalKit:

MMBench_TEST_EN/CN_V11, MMStar, MMMU_VAL, MathVista_MINI, HallusionBench, AI2D_TEST, OCRBench, MMVet. The results are shown below:

| Model | Avg | MMBench v1.1 | MMStar | MMMU | MathVista | HallusionBench | AI2D | OCRBench | MMVet |
|---|---|---|---|---|---|---|---|---|---|
| Qwen2.5-VL-3B | 64.8 | 77.1 | 55.3 | 51.2 | 60.1 | 48.6 | 81.5 | 83.2 | 61.4 |
| QTuneVL1.5-3B | 66.1 (+1.3) | 77.3 (+0.2) | 57.3 (+2.0) | 53.6 (+2.4) | 63.7 (+3.6) | 49.4 (+0.8) | 81.3 (-0.2) | 83.8 (+0.6) | 62.5 (+1.1) |

The reported results are based on our local implementations and may slightly differ from the official ones.
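The Avg column appears to be the unweighted mean of the eight benchmark scores, and the table values are consistent with that; the check below simply reuses the numbers from the table above:

```python
# Reproduce the Avg column as the unweighted mean of the eight benchmark scores.
scores = {
    "Qwen2.5-VL-3B": [77.1, 55.3, 51.2, 60.1, 48.6, 81.5, 83.2, 61.4],
    "QTuneVL1.5-3B": [77.3, 57.3, 53.6, 63.7, 49.4, 81.3, 83.8, 62.5],
}

for name, vals in scores.items():
    print(f"{name}: {sum(vals) / len(vals):.1f}")
# Qwen2.5-VL-3B: 64.8
# QTuneVL1.5-3B: 66.1
```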

Contact

We welcome suggestions to help us improve QTuneVL. For any queries, please contact Hanchao Wang at wanghanchao@reconova.com. If you find anything interesting, please also feel free to share it with us by email or by opening an issue.
