Similarity based smoothing in language modeling

  • Zoltán Szamonek
  • István Biró

Abstract

In this paper, we improve our previously proposed Similarity Based Smoothing (SBS) algorithm. The idea of SBS is to map words or parts of sentences into a Euclidean space and to approximate the language model in that space. The bottleneck of the original algorithm was training a regularized logistic regression model, which could not cope with real-world data. We replace the logistic regression with regularized maximum entropy estimation and a Gaussian mixture approach to model the language in the Euclidean space, demonstrating other ways to apply the main idea of SBS. We show that the regularized maximum entropy model is flexible enough to handle conditional probability density estimation, thus enabling parallel computation with significantly fewer iteration steps. The experimental results demonstrate the success of our method: we achieve a 14% improvement on a real-world corpus.
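
To make the general idea concrete, below is a minimal sketch of similarity-based smoothing, not the authors' SBS algorithm itself: words are placed in a Euclidean space, a Gaussian mixture is fitted over the word vectors (here with scikit-learn's GaussianMixture), and bigram estimates are smoothed by pooling counts from context words that fall in the same mixture component. The toy corpus, random placeholder embeddings, mixture size, and the interpolation weight lam are illustrative assumptions only.

import numpy as np
from collections import Counter, defaultdict
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))

# Placeholder word vectors; in practice these would come from the mapping
# of words into the Euclidean space that the method assumes.
embeddings = {w: rng.normal(size=4) for w in vocab}

# Fit a small Gaussian mixture over the word vectors.
gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0)
gmm.fit(np.stack([embeddings[w] for w in vocab]))
component = {w: int(gmm.predict(embeddings[w][None, :])[0]) for w in vocab}

# Bigram and unigram counts from the toy corpus.
bigram = Counter(zip(corpus[:-1], corpus[1:]))
unigram = Counter(corpus)

# Pool successor counts over all contexts sharing a mixture component.
pooled = defaultdict(Counter)
for (u, v), c in bigram.items():
    pooled[component[u]][v] += c

def smoothed_prob(u, v, lam=0.7):
    # Interpolate the raw bigram estimate with the component-pooled estimate;
    # lam is an arbitrary illustrative interpolation weight.
    p_ml = bigram[(u, v)] / unigram[u] if unigram[u] else 0.0
    k = component[u]
    total = sum(pooled[k].values())
    p_class = pooled[k][v] / total if total else 0.0
    return lam * p_ml + (1.0 - lam) * p_class

print(smoothed_prob("the", "cat"))

The pooling step is what gives unseen or rare bigrams non-zero mass whenever a geometrically similar context has been observed, which is the intuition behind smoothing the model in the Euclidean space.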

Published
2007-01-01
How to Cite
Szamonek, Z., & Biró, I. (2007). Similarity based smoothing in language modeling. Acta Cybernetica, 18(2), 303-314. Retrieved from https://cyber.bibl.u-szeged.hu/index.php/actcybern/article/view/3721
Section
Regular articles