arxiv:2603.14456

PARSA-Bench: A Comprehensive Persian Audio-Language Model Benchmark

Published on Mar 15
· Submitted by
Mohammad Ranjbar
on Mar 20
Abstract

PARSA-Bench presents the first benchmark for evaluating large audio-language models on Persian language and culture, featuring 16 tasks with over 8,000 samples covering speech understanding, paralinguistic analysis, and cultural audio comprehension.

AI-generated summary

Persian poses unique audio understanding challenges through its classical poetry, traditional music, and pervasive code-switching, none of which is captured by existing benchmarks. We introduce PARSA-Bench (Persian Audio Reasoning and Speech Assessment Benchmark), the first benchmark for evaluating large audio-language models on Persian language and culture, comprising 16 tasks and over 8,000 samples across speech understanding, paralinguistic analysis, and cultural audio understanding. Ten tasks are newly introduced, including poetry meter and style detection, traditional Persian music understanding, and code-switching detection. Text-only baselines consistently outperform their audio counterparts, suggesting that models may not leverage audio-specific information beyond what transcription alone provides. Culturally grounded tasks expose a qualitatively distinct failure mode: all models perform near random chance on vazn (poetic meter) detection regardless of scale, suggesting that prosodic perception remains beyond the reach of current models. The dataset is publicly available at https://huggingface.co/datasets/MohammadJRanjbar/PARSA-Bench

Community

From the paper author and submitter:

We introduce PARSA-Bench, the first benchmark for evaluating Large Audio-Language Models (LALMs) on Persian language and culture: 16 tasks and 8,000+ samples across speech understanding, paralinguistic analysis, and Persian cultural audio understanding.

Key findings:
🔇 Audio processing is the dominant bottleneck: text-only baselines consistently outperform audio counterparts across all tasks
🎼 Poetry meter detection (vazn) is effectively unsolved at any scale: all models score near random chance, including GPT-4o and Gemini-2.5-Flash
🎙️ Poetry style (sabk) is the only task where audio beats text: vocal recitation carries a genuine style-discriminative signal
📉 Proprietary scale offers no advantage on cultural tasks
🌍 Cultural audio understanding for non-Western languages like Persian remains a wide-open problem: current models lack the prosodic and cultural grounding needed, and no amount of scale fixes this

10 of the 16 tasks are newly introduced, with no prior equivalent in any language, including Persian poetry meter and style detection, Dastgah music classification, and code-switching detection.

Dataset: https://huggingface.co/datasets/MohammadJRanjbar/PARSA-Bench


