Title: Bayesian speech and language processing
Document type: printed text
Authors: Shinji Watanabe, Author; Jen-Tzung Chien, Author
Publisher: Cambridge University Press
Publication year: cop. 2015
Extent: xxi, 424 pages
Format: 26 cm
ISBN: 978-1-107-05557-5
General note: Bibliography p. 405-421. Index
Languages: English (eng)
Categories: [Thesaurus]Sciences et Techniques:Sciences:Informatique:Application de l'informatique:Intelligence artificielle
Dewey decimal: 006.35 Natural language processing in expert systems
Abstract: "With this comprehensive guide you will learn how to apply Bayesian machine learning techniques systematically to solve various problems in speech and language processing. A range of statistical models is detailed, from hidden Markov models to Gaussian mixture models, n-gram models and latent topic models, along with applications including automatic speech recognition, speaker verification, and information retrieval. Approximate Bayesian inferences based on MAP, Evidence, Asymptotic, VB, and MCMC approximations are provided as well as full derivations of calculations, useful notations, formulas, and rules. The authors address the difficulties of straightforward applications and provide detailed examples and case studies to demonstrate how you can successfully use practical Bayesian inference methods to improve the performance of information systems. This is an invaluable resource for students, researchers, and industry practitioners working in machine learning, signal processing, and speech and language processing" ; "In general, speech and language processing involves extensive knowledge of statistical models. The acoustic model using hidden Markov models and language model using n-grams are mainly introduced. Both acoustic and language models are important parts of modern speech recognition systems where the learned models from real-world data are full of complexity, ambiguity and uncertainty. The uncertainty modeling is crucial to tackle the lack of robustness for speech and language processing"
Contents: Part I. General Discussion: 1. Introduction; 2. Bayesian approach; 3. Statistical models in speech and language processing; Part II. Approximate Inference: 4. Maximum a posteriori approximation; 5. Evidence approximation; 6. Asymptotic approximation; 7. Variational Bayes; 8. Markov chain Monte Carlo.
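The abstract lists maximum a posteriori (MAP) estimation among the approximate inference methods the book covers (Part II, chapter 4). As a minimal illustrative sketch of the idea, not an example taken from the book: for a Bernoulli parameter under a conjugate Beta(a, b) prior, the MAP estimate is the mode of the posterior Beta(a + heads, b + tails), which has a closed form.

```python
# Illustrative MAP sketch (assumed example, not from the book):
# MAP estimate of a Bernoulli parameter theta with a Beta(a, b) prior.
# The posterior is Beta(a + heads, b + tails); its mode is the MAP estimate.

def map_bernoulli(heads: int, tails: int, a: float = 2.0, b: float = 2.0) -> float:
    """Posterior mode: (a + heads - 1) / (a + b + heads + tails - 2)."""
    return (a + heads - 1) / (a + b + heads + tails - 2)

# With a Beta(2, 2) prior, 7 heads out of 10 flips gives (2+7-1)/(2+2+10-2) = 8/12,
# pulled toward 0.5 relative to the maximum-likelihood estimate 7/10.
print(map_bernoulli(7, 3))
```

The prior here acts as pseudo-counts, which is the regularizing effect of Bayesian point estimates that the book develops for far richer models (HMMs, GMMs, n-grams).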