Instructions for using aliangdw/libero_ablation_prog_only_lora_ft_4frames with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use aliangdw/libero_ablation_prog_only_lora_ft_4frames with Transformers:
```python
# Load model directly
from transformers import AutoProcessor, RFM

processor = AutoProcessor.from_pretrained("aliangdw/libero_ablation_prog_only_lora_ft_4frames")
model = RFM.from_pretrained("aliangdw/libero_ablation_prog_only_lora_ft_4frames")
```
- Notebooks
- Google Colab
- Kaggle
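The loading snippet above stops short of running inference. Below is a minimal sketch of assembling a 4-frame chat input, assuming the model follows the standard Qwen3-VL processor/chat-template interface (the base model is Qwen/Qwen3-VL-4B-Instruct); the `build_messages` helper, frame paths, and prompt are hypothetical placeholders, and the generation call is shown commented since `RFM`'s exact interface isn't documented here:

```python
# Hedged sketch: assumes the model accepts the standard Qwen-VL style chat
# message format (a content list mixing image and text entries).

def build_messages(frame_paths, prompt):
    """Assemble one user turn containing several image frames plus a text prompt."""
    content = [{"type": "image", "image": p} for p in frame_paths]
    content.append({"type": "text", "text": prompt})
    return [{"role": "user", "content": content}]

# The model name suggests it was trained on 4-frame inputs.
messages = build_messages(
    [f"frame_{i}.png" for i in range(4)],
    "Estimate task progress from these frames.",
)

# With processor and model loaded as in the snippet above, generation would
# look roughly like this (untested, assuming the transformers generate API):
# inputs = processor.apply_chat_template(
#     messages, add_generation_prompt=True, tokenize=True,
#     return_dict=True, return_tensors="pt",
# )
# output_ids = model.generate(**inputs, max_new_tokens=64)
# print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```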
aliangdw/libero_ablation_prog_only_lora_ft_4frames
Model Details
- Base Model: Qwen/Qwen3-VL-4B-Instruct
- Model Type: qwen3_vl
Training Run
- Wandb Run: libero_ablation_prog_only_lora_ft_4frames_2000steps
- Wandb ID: fvost6gl
- Project: rfm
- Notes: LIBERO, progress-only, LoRA fine-tuning