Hebrew Speech Recognition Leaderboard
Welcome to the Hebrew Speech Recognition Leaderboard! This is a community-driven effort to track and compare the performance of various speech recognition models on Hebrew language tasks.
This leaderboard is maintained by ivrit.ai, a project dedicated to advancing Hebrew language AI technologies. You can find our work on GitHub and Hugging Face.
Motivation
Hebrew presents unique challenges for speech recognition due to its rich morphology, absence of written vowels, and diverse dialectal variations. This leaderboard aims to:
- Provide standardized benchmarks for Hebrew ASR evaluation
- Track progress in Hebrew speech recognition technology
- Foster collaboration in the Hebrew NLP community
- Make Hebrew speech technology more accessible
Benchmarks
The following datasets are used in our evaluation:
ivrit-ai/eval-d1
- Size: 2 hours
- Domain: Manual transcription of a single podcast episode featuring an informal conversation between two speakers (male and female). Audio is segmented into approximately 5-minute chunks.
- Source: Part of the ivrit.ai corpus. The selected episode was manually transcribed to gold-standard quality to serve as a high-quality evaluation benchmark.
SASpeech
- Size: 4 hours (manually corrected portion of the corpus)
- Domain: Economic and political podcast content, containing both read speech and conversational segments. Segments are several seconds in length.
- Source: Derived from the Robo-Shaul project and published in the paper "SASPEECH: A Hebrew Single Speaker Dataset for Text To Speech and Voice Conversion" (Sharoni, O., Shenberg, R., and Cooper, E., Proc. INTERSPEECH 2023).
google/fleurs/he
- Size: 2 hours (test set of the corpus)
- Domain: Read speech covering common topics and phrases in Hebrew
- Source: Created as part of Google's FLEURS project, designed for multilingual speech tasks and evaluation. Data collected through crowdsourcing from Hebrew speakers.
mozilla-foundation/common_voice_17_0/he
- Size: 2 hours (validated set of the corpus)
- Domain: Read sentences in Hebrew from various texts.
- Source: Collected through Mozilla's Common Voice initiative, where volunteers contribute recordings and validate other speakers' contributions
imvladikon/hebrew_speech_kan
- Size: 1.7 hours (validation set of the corpus)
- Domain: Varied content types from the YouTube channel of Kan (the Israeli Public Broadcasting Corporation).
- Source: Published by Vladimir Gurevich; audio and subtitle data scraped from the "כאן" (Kan) YouTube channel.
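For orientation, the splits above can be loaded with the Hugging Face datasets library. This is a minimal sketch under stated assumptions: the config and split names follow the descriptions above, Common Voice is gated on the Hub and requires accepting its terms, and the text column name varies per dataset.

```python
# Sketch: loading the evaluation splits with the `datasets` library.
# Config/split names follow the descriptions above; Common Voice is
# gated on the Hub, and text column names vary per dataset.
from datasets import load_dataset

fleurs = load_dataset("google/fleurs", "he_il", split="test")
common_voice = load_dataset("mozilla-foundation/common_voice_17_0", "he", split="validated")
kan = load_dataset("imvladikon/hebrew_speech_kan", split="validation")

# Each example pairs an audio waveform with its reference transcript,
# e.g. the "transcription" column in FLEURS.
sample = fleurs[0]
print(sample["transcription"], sample["audio"]["sampling_rate"])
```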
How it works
Models are evaluated using Word Error Rate (WER) on each benchmark dataset. The final score is the average WER across all benchmarks, with lower scores indicating better performance.
Specifically, evaluation is done using the jiwer library. Source code for the evaluation can be found here.
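To illustrate the scoring scheme, here is a short sketch using jiwer directly. The transcript strings and per-benchmark numbers are hypothetical, and the actual evaluation code may apply additional text normalization.

```python
# A sketch of the scoring scheme, not the actual evaluation code:
# per-benchmark WER via jiwer, then a plain mean across benchmarks.
import jiwer

# WER for one benchmark: jiwer compares reference transcripts
# against model hypotheses (example strings are hypothetical).
references = ["שלום עולם", "מה שלומך"]
hypotheses = ["שלום עולם", "מה שלומך היום"]
print(jiwer.wer(references, hypotheses))  # 0.25 (1 insertion / 4 reference words)

# Hypothetical per-benchmark WERs; the final score is their average.
scores = {"eval-d1": 0.07, "saspeech": 0.09, "fleurs": 0.11}
final_score = sum(scores.values()) / len(scores)
print(f"final score: {final_score:.3f}")  # 0.090, lower is better
```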
Reproducibility
To evaluate your model on these benchmarks, you can use our evaluation script as follows:
```bash
./evaluate_model.py --engine <engine> --model <model> --dataset <dataset:split:column> [--name <name>] [--workers <num_workers>]
```
For example, here's how to evaluate ivrit-ai/faster-whisper-v2-d4 on the google/fleurs/he dataset:
```bash
./evaluate_model.py --engine faster-whisper --model ivrit-ai/faster-whisper-v2-d4 --name he_il --dataset google/fleurs:test:transcription --workers 1
```
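For a sense of what such a run does under the hood, here is a rough, unofficial sketch of the same evaluation performed directly with faster-whisper, datasets, and jiwer. It assumes FLEURS audio is 16 kHz mono and that the reference text lives in the transcription column, as in the command above; it is not the project's evaluation script.

```python
# Unofficial sketch of the evaluation pipeline: transcribe the FLEURS
# Hebrew test split with faster-whisper and score it with jiwer.
import jiwer
from datasets import load_dataset
from faster_whisper import WhisperModel

dataset = load_dataset("google/fleurs", "he_il", split="test")
model = WhisperModel("ivrit-ai/faster-whisper-v2-d4")

references, hypotheses = [], []
for example in dataset:
    audio = example["audio"]["array"].astype("float32")  # 16 kHz mono waveform
    segments, _ = model.transcribe(audio, language="he")
    hypotheses.append(" ".join(seg.text.strip() for seg in segments))
    references.append(example["transcription"])

print(f"WER: {jiwer.wer(references, hypotheses):.3f}")
```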