Training dataset: semiotic/SynQL-Spider-Train
How to use semiotic/T5-3B-SynQL-Spider-Train-Run-00 with Transformers:

```python
# Use a pipeline as a high-level helper (T5 is a seq2seq model,
# so the task is text2text-generation)
from transformers import pipeline

pipe = pipeline("text2text-generation", model="semiotic/T5-3B-SynQL-Spider-Train-Run-00")
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("semiotic/T5-3B-SynQL-Spider-Train-Run-00")
model = AutoModelForSeq2SeqLM.from_pretrained("semiotic/T5-3B-SynQL-Spider-Train-Run-00")
```

How to use semiotic/T5-3B-SynQL-Spider-Train-Run-00 with vLLM:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "semiotic/T5-3B-SynQL-Spider-Train-Run-00"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "semiotic/T5-3B-SynQL-Spider-Train-Run-00",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```
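The same completions endpoint can be called from Python. The sketch below mirrors the curl payload using only the standard library; `SEND` is a hypothetical guard flag added here so the payload can be inspected without a running server — set it to `True` once `vllm serve` is up on localhost:8000.

```python
import json
import urllib.request

SEND = False  # hypothetical guard: flip to True when the vLLM server is running

# Same payload as the curl example above
payload = {
    "model": "semiotic/T5-3B-SynQL-Spider-Train-Run-00",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}

if SEND:
    req = urllib.request.Request(
        "http://localhost:8000/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["text"])
```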
How to use semiotic/T5-3B-SynQL-Spider-Train-Run-00 with SGLang:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "semiotic/T5-3B-SynQL-Spider-Train-Run-00" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "semiotic/T5-3B-SynQL-Spider-Train-Run-00",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

Alternatively, start the SGLang server with Docker:

```shell
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "semiotic/T5-3B-SynQL-Spider-Train-Run-00" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "semiotic/T5-3B-SynQL-Spider-Train-Run-00",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
```

How to use semiotic/T5-3B-SynQL-Spider-Train-Run-00 with Docker Model Runner:
```shell
docker model run hf.co/semiotic/T5-3B-SynQL-Spider-Train-Run-00
```
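Putting the pieces together, here is a hedged end-to-end inference sketch with Transformers, using the schema-serialized prompt format shown in the metadata example below. The model load is guarded behind a hypothetical `LOAD_MODEL` flag because the T5-3B checkpoint is roughly 11 GB; the generate call and `max_new_tokens` value are illustrative, not a documented SynQL recipe.

```python
LOAD_MODEL = False  # hypothetical guard: flip to True to download and run the 3B checkpoint

# Prompt in the "context" format from the metadata example: question | db_id | schema
prompt = (
    "How many singers do we have? | concert_singer | "
    "stadium : stadium_id, location, name, capacity, highest, lowest, average | "
    "singer : singer_id, name, country, song_name, song_release_year, age, is_male | "
    "concert : concert_id, concert_name, theme, stadium_id, year | "
    "singer_in_concert : concert_id, singer_id"
)

if LOAD_MODEL:
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    tokenizer = AutoTokenizer.from_pretrained("semiotic/T5-3B-SynQL-Spider-Train-Run-00")
    model = AutoModelForSeq2SeqLM.from_pretrained("semiotic/T5-3B-SynQL-Spider-Train-Run-00")
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=128)
    # Decode the predicted SQL string
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```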
Example metadata is shown below; "context" is the prompt presented to the model. Database schemas follow the encoding method proposed by Shaw et al. (2020).

```json
{
    "query": "SELECT count(*) FROM singer",
    "question": "How many singers do we have?",
    "context": "How many singers do we have? | concert_singer | stadium : stadium_id, location, name, capacity, highest, lowest, average | singer : singer_id, name, country, song_name, song_release_year, age, is_male | concert : concert_id, concert_name, theme, stadium_id, year | singer_in_concert : concert_id, singer_id",
    "db_id": "concert_singer"
}
```
Evaluation set: Spider/dev
Evaluation metrics: Execution Accuracy, Test-Suite Execution Accuracy
| Model | Data | Run | Execution Accuracy | Test-Suite Execution Accuracy |
|---|---|---|---|---|
| T5-3B | semiotic/SynQL-Spider-Train | 00 | 0.7021 | 0.5996 |
| T5-3B | semiotic/SynQL-Spider-Train | 01 | 0.6992 | 0.5464 |
| T5-3B | semiotic/SynQL-Spider-Train | 02 | 0.7002 | 0.5861 |
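To make the Execution Accuracy column concrete: a predicted query counts as correct when executing it returns the same result as the gold query on the target database. The sketch below illustrates this with a tiny in-memory SQLite database and one deliberately wrong prediction, both invented for the demo; the real evaluation runs over the full Spider/dev databases, and test-suite execution additionally checks agreement across multiple database instances.

```python
import sqlite3

# Toy stand-in for the concert_singer database (data invented for the demo)
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE singer (singer_id INTEGER, name TEXT, country TEXT);
    INSERT INTO singer VALUES (1, 'A', 'US'), (2, 'B', 'FR'), (3, 'C', 'JP');
""")

def execution_match(pred_sql: str, gold_sql: str) -> bool:
    """True when predicted and gold queries return identical result sets."""
    return conn.execute(pred_sql).fetchall() == conn.execute(gold_sql).fetchall()

gold = "SELECT count(*) FROM singer"
preds = [
    "SELECT count(*) FROM singer",                          # correct
    "SELECT count(*) FROM singer WHERE country = 'US'",     # wrong result
]
accuracy = sum(execution_match(p, gold) for p in preds) / len(preds)
```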
Base model: google-t5/t5-3b