Improve dataset card: Add description, links, task categories, tags, abstract, and sample usage

#2 opened by nielsr (HF Staff)
---
license: apache-2.0
task_categories:
- text-generation
- image-to-text
language:
- en
tags:
- benchmark
- llm-evaluation
- spatial-reasoning
- multimodal
---

# LTD-Bench: Evaluating Large Language Models by Letting Them Draw

This repository contains **LTD-Bench**, a benchmark presented in the paper [LTD-Bench: Evaluating Large Language Models by Letting Them Draw](https://huggingface.co/papers/2511.02347). LTD-Bench transforms LLM evaluation from abstract scores into directly observable visual outputs by requiring models to generate drawings through dot matrices or executable code, making spatial reasoning limitations immediately apparent.

LTD-Bench pairs generation tasks (testing spatial imagination) with recognition tasks (assessing spatial perception) across three progressively challenging difficulty levels, evaluating both directions of the critical language-spatial mapping.
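
To make the dot-matrix idea concrete, here is a hypothetical illustration (not the benchmark's actual prompt or answer format, which is defined in the LTD-Bench repository): a model's binary-grid answer can be rendered as ASCII art directly in the shell.

```bash
# Hypothetical 5x5 dot-matrix answer for the concept "plus sign";
# the real benchmark format is defined in the LTD-Bench repo.
matrix="00100
00100
11111
00100
00100"
# Map 0 -> space and 1 -> '#' to make the drawing visible.
echo "$matrix" | tr '01' ' #'
```

Rendering the grid this way makes a spatially wrong answer obvious at a glance, which is the point of the benchmark's visual outputs.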

- **Paper:** [LTD-Bench: Evaluating Large Language Models by Letting Them Draw](https://huggingface.co/papers/2511.02347)
- **Code:** https://github.com/walktaster/LTD-Bench

## Abstract

Current evaluation paradigms for large language models (LLMs) represent a critical blind spot in AI research--relying on opaque numerical metrics that conceal fundamental limitations in spatial reasoning while providing no intuitive understanding of model capabilities. This deficiency creates a dangerous disconnect between reported performance and practical abilities, particularly for applications requiring physical world understanding. We introduce LTD-Bench, a breakthrough benchmark that transforms LLM evaluation from abstract scores to directly observable visual outputs by requiring models to generate drawings through dot matrices or executable code. This approach makes spatial reasoning limitations immediately apparent even to non-experts, bridging the fundamental gap between statistical performance and intuitive assessment. LTD-Bench implements a comprehensive methodology with complementary generation tasks (testing spatial imagination) and recognition tasks (assessing spatial perception) across three progressively challenging difficulty levels, methodically evaluating both directions of the critical language-spatial mapping. Our extensive experiments with state-of-the-art models expose an alarming capability gap: even LLMs achieving impressive results on traditional benchmarks demonstrate profound deficiencies in establishing bidirectional mappings between language and spatial concepts--a fundamental limitation that undermines their potential as genuine world models. Furthermore, LTD-Bench's visual outputs enable powerful diagnostic analysis, offering a potential approach to investigate model similarity.

## Sample Usage

To get started with LTD-Bench, follow the steps below to set up the environment and run the benchmark.

### Setup
Before running LTD-Bench, make sure that Xvfb is installed in your Linux environment, as it may be required for the Hard-level generation tasks.

You can install it, together with Ghostscript, using one of the following commands:
```bash
apt-get install xvfb
apt-get install ghostscript
```
or
```bash
yum install xorg-x11-server-Xvfb
yum install ghostscript
```

Then start Xvfb:
```bash
Xvfb :1 -screen 0 800x600x24 &
```
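
X clients find the virtual display through the `DISPLAY` environment variable. Exporting it explicitly is an assumption about your shell setup (the repo's scripts may set it themselves); the sanity check below is likewise just an optional convenience.

```bash
# Point X clients at the virtual display started above (:1).
export DISPLAY=:1
# Sanity check: report whether an Xvfb process is currently running.
if pgrep -x Xvfb >/dev/null; then
  echo "Xvfb is running"
else
  echo "Xvfb is not running -- start it before the Hard-level tasks"
fi
```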

Set up your Python environment:
```bash
pip install -r requirements.txt
```
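
Optionally, you can install the dependencies into a dedicated virtual environment first (the environment name `ltdbench-env` below is arbitrary, not something the repo prescribes):

```bash
# Create and activate an isolated environment (name is arbitrary).
python3 -m venv ltdbench-env
. ltdbench-env/bin/activate
# Then install the repo's dependencies inside it:
# pip install -r requirements.txt
```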

### Run
Set up the model configuration in the `run.sh` file, including your `model_id`, `API_BASE_URL`, and `API_KEY`.
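
The exact variable names and layout are defined in the repo's `run.sh`; as a sketch, the configuration might look like this (all values below are hypothetical placeholders):

```bash
# Hypothetical configuration values -- replace with your own.
model_id="your-model-name"                  # placeholder model identifier
API_BASE_URL="https://api.example.com/v1"   # placeholder endpoint
API_KEY="your-api-key"                      # placeholder, not a real key
```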

Then you can start running model inference:
```bash
sh run.sh
```

### Evaluation
Set up your GPT-4.1 configuration in the `run_eval.sh` file, including your `OPENAI_BASE_URL` and `OPENAI_KEY`.

Then you can run the GPT-4.1 automatic evaluation:
```bash
sh run_eval.sh
```