We release the Expressive Anechoic Recordings of Speech (EARS) dataset.
If you use the dataset or any derivative of it, please cite our paper:

```bibtex
@inproceedings{richter2024ears,
  title={{EARS}: An Anechoic Fullband Speech Dataset Benchmarked for Speech Enhancement and Dereverberation},
  author={Richter, Julius and Wu, Yi-Chiao and Krenn, Steven and Welker, Simon and Lay, Bunlong and Watanabe, Shinji and Richard, Alexander and Gerkmann, Timo},
  booktitle={Interspeech},
  year={2024}
}
```
For audio samples or scripts to generate the speech enhancement benchmarks, please visit the project page.
## Highlights
- 100 h of speech data from 107 speakers
- high-quality recordings at 48 kHz in an anechoic chamber
- high speaker diversity, with speakers from different ethnicities and ages ranging from 18 to 75 years
- full dynamic range of human speech, from whispering to yelling
- 18 minutes of freeform monologues per speaker
- sentence reading in 7 different reading styles (regular, loud, whisper, high pitch, low pitch, fast, slow)
- emotional reading and freeform tasks covering 22 different emotions for each speaker
## Download EARS Dataset
Using bash:

```bash
# Download, extract, and remove the archive for each of the 107 speakers.
for X in $(seq -w 001 107); do
  curl -L https://github.com/facebookresearch/ears_dataset/releases/download/dataset/p${X}.zip -o p${X}.zip
  unzip p${X}.zip
  rm p${X}.zip
done
```
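If you prefer to fetch the archives with a download manager (or resume interrupted transfers), a minimal sketch that writes the same 107 release-asset URLs to a file is shown below. The filename `ears_urls.txt` is just an illustrative choice, not part of the dataset.

```shell
#!/bin/sh
# Sketch: list the release-asset URLs used by the loop above, one per line,
# so they can be fed to a downloader of your choice instead of curl.
base="https://github.com/facebookresearch/ears_dataset/releases/download/dataset"
for X in $(seq -w 001 107); do
  echo "${base}/p${X}.zip"
done > ears_urls.txt

wc -l < ears_urls.txt  # should report 107 URLs
```

Each line of `ears_urls.txt` can then be passed to any HTTP client that accepts a URL list.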