Disfluency detection models now approach high accuracy on English text. However, little work has explored reducing model size and inference time. At the same time, automatic speech recognition (ASR) models are moving from server-side inference to local, on-device inference. Supporting models in the transcription pipeline (like disfluency detection) must follow suit. In this work we concentrate on the disfluency detection task, focusing on small, fast, on-device models based on the BERT architecture. We demonstrate that it is possible to train disfluency detection models as small as 1.3 MiB while retaining high performance. We build on previous work that showed the benefit of data augmentation approaches such as self-training. We then evaluate the effect of domain mismatch between conversational and written text on model performance. We find that domain adaptation and data augmentation strategies have a more pronounced effect on these smaller models than on conventional BERT models.

arxiv.org/pdf/2104.10769.pdf
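As a concrete illustration, disfluency detection is typically framed as token-level tagging: each word is marked as fluent or as part of a disfluency (a filler, repetition, or repair). The sketch below shows this framing with a tiny public BERT checkpoint from the Hugging Face hub. The checkpoint, the binary tag set, and the example sentence are illustrative assumptions, not the paper's released models, and the classification head here is randomly initialized, so fine-tuning on a disfluency-annotated corpus such as Switchboard would be needed for meaningful predictions.

```python
# Minimal sketch: disfluency detection as binary token tagging.
# BERT-Tiny is a stand-in for the paper's small on-device models.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL = "google/bert_uncased_L-2_H-128_A-2"  # BERT-Tiny; assumed stand-in checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForTokenClassification.from_pretrained(
    MODEL, num_labels=2  # 0 = fluent, 1 = disfluent (assumed tag set)
)

sentence = "I want a flight to Boston um I mean to Denver"
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, seq_len, 2)
pred = logits.argmax(dim=-1)[0]              # per-token disfluency tags

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for tok, tag in zip(tokens, pred.tolist()):
    print(f"{tok:12s} {'DISFLUENT' if tag == 1 else 'fluent'}")
```

The self-training the abstract refers to can likewise be sketched, under assumed interfaces: a larger teacher model pseudo-labels unlabeled transcripts, and the small student trains on the union of gold and confidently pseudo-labeled data. The `predict` and `fit` methods below are hypothetical placeholders, not the paper's code.

```python
# Self-training loop sketch with hypothetical model interfaces.
def self_train(teacher, student, gold_data, unlabeled_sentences, confidence=0.9):
    pseudo = []
    for sent in unlabeled_sentences:
        tags, probs = teacher.predict(sent)   # hypothetical: per-token tags + confidences
        if min(probs) >= confidence:          # keep only confidently labeled sentences
            pseudo.append((sent, tags))
    student.fit(gold_data + pseudo)           # hypothetical: train on gold + pseudo labels
    return student
```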

