# TAPS: Throat and Acoustic Paired Speech Dataset

## 1. Dataset Summary
The Throat and Acoustic Paired Speech (TAPS) dataset is a standardized corpus designed for deep learning-based speech enhancement, specifically targeting throat microphone recordings. Throat microphones effectively suppress background noise but suffer from high-frequency attenuation due to the low-pass filtering effect of the skin and tissue. The dataset provides paired recordings from 60 native Korean speakers, captured simultaneously using a throat microphone (accelerometer-based) and an acoustic microphone. This dataset facilitates speech enhancement research by enabling the development of models that recover lost high-frequency components and improve intelligibility. Additionally, we introduce a mismatch correction technique to align signals from the two microphones, which enhances model training.
## 2. Dataset Usage
To use the TAPS dataset, follow the steps below:
### 2.1 Loading the Dataset
You can load the dataset from Hugging Face as follows:
```python
from datasets import load_dataset

dataset = load_dataset("yskim3271/Throat_and_Acoustic_Pairing_Speech_Dataset")
print(dataset)
```
Example output:
```
DatasetDict({
    train: Dataset({
        features: ['gender', 'speaker_id', 'sentence_id', 'text', 'duration', 'audio.throat_microphone', 'audio.acoustic_microphone'],
        num_rows: 4000
    })
    dev: Dataset({
        features: ['gender', 'speaker_id', 'sentence_id', 'text', 'duration', 'audio.throat_microphone', 'audio.acoustic_microphone'],
        num_rows: 1000
    })
    test: Dataset({
        features: ['gender', 'speaker_id', 'sentence_id', 'text', 'duration', 'audio.throat_microphone', 'audio.acoustic_microphone'],
        num_rows: 1000
    })
})
```
### 2.2 Accessing a Sample
Each dataset entry consists of metadata and paired audio recordings. You can access a sample as follows:
```python
sample = dataset["train"][0]  # Get the first sample

print(f"Gender: {sample['gender']}")
print(f"Speaker ID: {sample['speaker_id']}")
print(f"Sentence ID: {sample['sentence_id']}")
print(f"Text: {sample['text']}")
print(f"Duration: {sample['duration']} sec")
print(f"Throat Microphone Audio Path: {sample['audio.throat_microphone']['path']}")
print(f"Acoustic Microphone Audio Path: {sample['audio.acoustic_microphone']['path']}")
```
### 2.3 Using the with_normalized_text Configuration
The dataset provides an additional configuration that includes normalized text transcriptions for the test set.
Loading the configuration:
```python
from datasets import load_dataset

# Load the dataset with the normalized-text configuration
dataset = load_dataset(
    "yskim3271/Throat_and_Acoustic_Pairing_Speech_Dataset",
    name="with_normalized_text",
)
```
Example output:
```
DatasetDict({
    train: Dataset({
        features: ['gender', 'speaker_id', 'sentence_id', 'text', 'duration', 'audio.throat_microphone', 'audio.acoustic_microphone', 'normalized_text'],
        num_rows: 4000
    })
    dev: Dataset({
        features: ['gender', 'speaker_id', 'sentence_id', 'text', 'duration', 'audio.throat_microphone', 'audio.acoustic_microphone', 'normalized_text'],
        num_rows: 1000
    })
    test: Dataset({
        features: ['gender', 'speaker_id', 'sentence_id', 'text', 'duration', 'audio.throat_microphone', 'audio.acoustic_microphone', 'normalized_text'],
        num_rows: 1000
    })
})
```
Normalized text details:
- Test set: contains normalized Korean text with numbers spelled out (e.g., "7일" → "칠 일", "120명" → "백 이십 명") and punctuation normalized (quotation marks removed).
- Train/dev sets: contain empty strings ("") for schema consistency.
Example usage:
```python
sample = dataset["test"][0]
print(f"Original text: {sample['text']}")
print(f"Normalized text: {sample['normalized_text']}")
# Output:
# Original text: 부산시는 7일 오후 시민들이 안전하게 대중교통을 이용하기 위해 '대중교통 혼잡도 안전관리 대책회의'를 열고 정책 점검에 나섰다
# Normalized text: 부산시는 칠 일 오후 시민들이 안전하게 대중교통을 이용하기 위해 대중교통 혼잡도 안전관리 대책회의를 열고 정책 점검에 나섰다
```
## 3. Links and Details
- Project website: Link
- Point of contact: Yunsik Kim ([email protected])
- Collected by: Intelligent Semiconductor and Wearable Devices (ISWD) of the Pohang University of Science and Technology (POSTECH)
- Language: Korean
- Download size: 7.03 GB
- Total audio duration: 15.3 hours
- Number of speech utterances: 6,000
## 4. Citation
The BibTeX entry for the dataset is currently being prepared.
## 5. Dataset Structure & Statistics
- Training Set (40 speakers, 4,000 utterances, 10.2 hours)
- Development Set (10 speakers, 1,000 utterances, 2.5 hours)
- Test Set (10 speakers, 1,000 utterances, 2.6 hours)
- Each set is gender-balanced (50% male, 50% female).
- No speaker overlap across train/dev/test sets.
| Statistic | Train | Dev | Test |
|---|---|---|---|
| Number of speakers | 40 | 10 | 10 |
| Number of male speakers | 20 | 5 | 5 |
| Mean / standard deviation of the speaker age | 28.5 / 7.3 | 25.6 / 3.0 | 26.2 / 1.4 |
| Number of utterances | 4,000 | 1,000 | 1,000 |
| Total length of utterances (hours) | 10.2 | 2.5 | 2.6 |
| Max / average / min length of utterances (s) | 26.3 / 9.1 / 3.2 | 17.9 / 9.0 / 3.3 | 16.6 / 9.3 / 4.2 |
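The table above can be sanity-checked directly from the `duration` field; a minimal sketch:

```python
from datasets import load_dataset

dataset = load_dataset("yskim3271/Throat_and_Acoustic_Pairing_Speech_Dataset")

# Recompute the utterance-count and length statistics per split.
for split in ["train", "dev", "test"]:
    durations = dataset[split]["duration"]
    print(f"{split}: {len(durations)} utterances, {sum(durations) / 3600:.1f} h, "
          f"max/avg/min = {max(durations):.1f} / {sum(durations) / len(durations):.1f} / {min(durations):.1f} s")
```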
## 6. Data Fields
### 6.1 Default Configuration
Each dataset entry contains:
- gender: Speaker's gender (male/female).
- speaker_id: Unique speaker identifier (e.g., "p01").
- sentence_id: Utterance index (e.g., "u30").
- text: Transcription (provided only for the test set).
- duration: Length of the audio sample in seconds.
- audio.throat_microphone: Throat microphone signal.
- audio.acoustic_microphone: Acoustic microphone signal.
### 6.2 with_normalized_text Configuration
In addition to all fields from the default configuration, this configuration includes:
- normalized_text: Normalized transcription with:
  - Numbers spelled out in Korean (e.g., "7" → "칠", "120" → "백 이십")
  - Punctuation normalized (quotation marks removed)
  - Available only for the test set (1,000 utterances)
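Because digits and quotation marks are removed, normalized_text is a convenient scoring reference for ASR experiments on the test set. Below is a minimal sketch, assuming the jiwer package and a hypothetical transcribe() function standing in for an actual ASR model:

```python
from datasets import load_dataset
from jiwer import cer  # assumed dependency: pip install jiwer

dataset = load_dataset(
    "yskim3271/Throat_and_Acoustic_Pairing_Speech_Dataset",
    name="with_normalized_text",
)

sample = dataset["test"][0]
reference = sample["normalized_text"]
hypothesis = transcribe(sample["audio.throat_microphone"]["array"])  # hypothetical ASR function

# Character error rate is the usual metric for Korean; jiwer.wer also works.
print(f"CER: {cer(reference, hypothesis):.3f}")
```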
## 7. Dataset Creation
### 7.1 Hardware System for Audio Data Collection
The hardware system simultaneously records signals from a throat microphone and an acoustic microphone, ensuring synchronization.
- Throat microphone: The TDK IIM-42652 MEMS accelerometer captures neck surface vibrations (8 kHz, 16-bit resolution).
- Acoustic microphone: The CUI Devices CMM-4030D-261 MEMS microphone records audio (16 kHz, 24-bit resolution) and is integrated into a peripheral board.
- MCU and data transmission: The STM32F301C8T6 MCU processes signals via SPI (throat microphone) and I²S (acoustic microphone). Data is transmitted to a laptop in real time over UART.
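Note that the two channels are captured at different rates (8 kHz throat, 16 kHz acoustic). If the released audio preserves these rates, paired modeling needs a common rate first; a minimal sketch using scipy, with the 8 kHz assumption checked at runtime:

```python
from scipy.signal import resample_poly

# `dataset` loaded as in section 2.1.
sample = dataset["train"][0]
throat = sample["audio.throat_microphone"]

# Upsample 8 kHz -> 16 kHz only if the throat channel was not already resampled.
if throat["sampling_rate"] == 8000:
    throat_16k = resample_poly(throat["array"], up=2, down=1)
else:
    throat_16k = throat["array"]
```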
### 7.2 Sensor Positioning and Recording Environment
- Throat microphone placement: Attached to the supraglottic area of the neck.
- Acoustic microphone position: 30 cm in front of the speaker.
- Recording conditions: Conducted in a controlled, semi-soundproof environment to minimize ambient noise.
- Head rest: A headrest was used to maintain a consistent head position during recording.
- Nylon filter: A nylon pop filter was placed between the speaker and the acoustic microphone to minimize plosive sounds.
- Scripts for utterances: Sentences were displayed on a screen for participants to read.
### 7.3 Python-Based Software for Data Recording
The custom-built software facilitates real-time data recording, monitoring, and synchronization of throat and acoustic microphone signals.
- Interface overview: The software displays live waveforms, spectrograms, and synchronization metrics (e.g., SNR, shift values).
- Shift analysis: Visualizations include a shift plot to monitor synchronization between the microphones and a shift histogram for statistical analysis.
- Recording control: Users can manage recordings using controls for file navigation (e.g., Prev, Next, Skip).
- Real-time feedback: Signal quality is monitored with metrics like SNR and synchronization shifts.
### 7.4 Recorded Audio Data Post-Processing
- Noise reduction: Applied the Demucs model to suppress background noise in the acoustic microphone recordings.
- Mismatch correction: Aligned the throat and acoustic microphone signals using cross-correlation (see the sketch below).
- Silent segment trimming: Removed leading and trailing silence.
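The released recordings have already been through this correction, so the following is only illustrative: a minimal numpy sketch of cross-correlation alignment, assuming both signals share one sampling rate and the mismatch is a pure delay:

```python
import numpy as np

def align_by_xcorr(throat: np.ndarray, acoustic: np.ndarray, max_lag: int = 1600):
    """Estimate the throat channel's delay relative to the acoustic channel
    (within +/- max_lag samples) and shift the throat signal to compensate."""
    # Zero-mean both signals so the correlation peak reflects waveform shape.
    xcorr = np.correlate(throat - throat.mean(), acoustic - acoustic.mean(), mode="full")
    center = len(acoustic) - 1                    # index of zero lag in 'full' output
    window = xcorr[center - max_lag : center + max_lag + 1]
    lag = int(np.argmax(window)) - max_lag        # positive: throat lags acoustic
    return np.roll(throat, -lag), lag
```

For long signals, `scipy.signal.correlate(..., method="fft")` computes the same correlation much faster than `np.correlate`.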
### 7.5 Personal and Sensitive Information
- No personally identifiable information is included.
- Ethical approval: Institutional Review Board (IRB) approval from POSTECH.
- Consent: All participants provided written informed consent.
## 8. Potential Applications of the Dataset
The TAPS dataset enables various speech processing tasks, including:
- Speech enhancement: Improving the intelligibility and quality of throat microphone recordings by recovering high-frequency components.
- Automatic speech recognition (ASR): Enhancing throat microphone speech for better transcription accuracy in noisy environments.
- Speaker verification: Exploring the effectiveness of throat microphone recordings for identity verification in challenging acoustic environments.