Log-Linear Models, Extensions, and Applications

Paperback | $95.00 US | $129.99 CAN
On sale Dec 03, 2024 | 214 pages | ISBN 9780262553469
Advances in training models with log-linear structures, with topics including variable selection, the geometry of neural nets, and applications.

Log-linear models play a key role in modern big data and machine learning applications. From simple binary classifiers through partition functions, conditional random fields, and neural nets, log-linear structure influences both model performance in certain applications and the fitting techniques used to train models. This volume presents recent advances in training models with log-linear structures, addressing the underlying geometry, optimization techniques, and multiple applications. The first chapter shows readers the inner workings of machine learning, providing insights into the geometry of log-linear and neural net models. The remaining chapters range from introductory material through optimization techniques to involved use cases. The book, which grew out of a NIPS workshop, is suitable for graduate students doing research in machine learning, in particular deep learning, variable selection, and applications to speech recognition. The contributors come from both academia and industry, allowing readers to view the field from both perspectives.
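
As a quick orientation for readers new to the topic (an illustrative sketch, not material from the book): a log-linear model assigns p(y | x) proportional to exp(theta · f(x, y)), where the normalizer Z(x) = sum over y of exp(theta · f(x, y)) is the partition function mentioned above. Below is a minimal Python sketch, assuming the candidate labels for a fixed input x are encoded as rows of a feature matrix; all names here are hypothetical, not drawn from the book.

    import numpy as np

    def log_linear_probs(theta, features):
        # features[y] holds the feature vector f(x, y) for candidate label y.
        scores = features @ theta        # unnormalized log-probabilities theta . f(x, y)
        scores = scores - scores.max()   # shift for numerical stability before exponentiating
        expd = np.exp(scores)
        return expd / expd.sum()         # divide by the partition function Z(x)

    # Binary classification: two candidate labels, three features each.
    theta = np.array([1.0, -2.0, 0.5])
    features = np.array([[1.0, 0.0, 1.0],
                         [0.0, 1.0, 1.0]])
    print(log_linear_probs(theta, features))   # a probability distribution over the two labels

With two candidate labels this reduces to the familiar logistic (sigmoid) form; with large structured label sets, computing Z(x) efficiently is exactly where models such as conditional random fields become involved.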

Contributors
Aleksandr Aravkin, Avishy Carmi, Guillermo A. Cecchi, Anna Choromanska, Li Deng, Xinwei Deng, Jean Honorio, Tony Jebara, Huijing Jiang, Dimitri Kanevsky, Brian Kingsbury, Fabrice Lambert, Aurélie C. Lozano, Daniel Moskovich, Yuriy S. Polyakov, Bhuvana Ramabhadran, Irina Rish, Dimitris Samaras, Tara N. Sainath, Hagen Soltau, Serge F. Timashev, Ewout van den Berg
