Title: Learning Mixture Models with the Regularized Latent Maximum Entropy Principle
Authors: Schuurmans, D.; Peng, F.; Zhao, Y.; Wang, Shaojun
Year: 2004
URL: http://knoesis.org/node/1458

Abstract: We present a new approach to estimating mixture models based on a new inference principle we have proposed: the latent maximum entropy principle (LME). LME differs both from Jaynes' maximum entropy principle and from standard maximum likelihood estimation. We demonstrate the LME principle by deriving new algorithms for mixture model estimation, and show how robust new variants of the EM algorithm can be developed. Our experiments show that estimation based on LME generally yields better results than maximum likelihood estimation, particularly when inferring latent variable models from small amounts of data.
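The abstract contrasts LME with standard maximum likelihood estimation via EM. As background, here is a minimal sketch of that baseline: ordinary EM for a one-dimensional Gaussian mixture. This is not the paper's LME algorithm; the deterministic quantile initialization and the variance floor are illustrative choices, not taken from the source.

```python
import math


def em_gmm_1d(data, k=2, iters=50):
    """Standard maximum-likelihood EM for a 1-D Gaussian mixture
    (the baseline the abstract compares LME against)."""
    n = len(data)
    xs = sorted(data)
    # Deterministic init: component means at evenly spaced quantiles,
    # shared overall variance, uniform mixing weights (illustrative choices).
    means = [xs[(2 * j + 1) * n // (2 * k)] for j in range(k)]
    mean_all = sum(data) / n
    var_all = sum((x - mean_all) ** 2 for x in data) / n
    vars_ = [var_all] * k
    weights = [1.0 / k] * k

    for _ in range(iters):
        # E-step: responsibilities r[i][j] = P(component j | x_i).
        resp = []
        for x in data:
            ps = [
                w * math.exp(-((x - m) ** 2) / (2 * v)) / math.sqrt(2 * math.pi * v)
                for w, m, v in zip(weights, means, vars_)
            ]
            s = sum(ps)
            resp.append([p / s for p in ps])
        # M-step: re-estimate weights, means, variances from responsibilities.
        for j in range(k):
            nj = sum(r[j] for r in resp)
            weights[j] = nj / n
            means[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            vars_[j] = sum(r[j] * (x - means[j]) ** 2 for r, x in zip(resp, data)) / nj
            vars_[j] = max(vars_[j], 1e-6)  # guard against variance collapse
    return weights, means, vars_
```

On well-separated data the procedure recovers the component means; the paper's point is that on small samples this maximum-likelihood estimate degrades, which is where the LME-based variants are reported to help.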