
$104.92


Description

This book studies mathematical theories of machine learning. The first part explores the optimality and adaptivity of step-size choices for gradient descent to escape strict saddle points in non-convex optimization problems. In the second part, the authors propose algorithms to find local minima in non-convex optimization and, to some degree, to obtain global minima, drawing on Newton's second law without friction. In the third part, the authors study subspace clustering with noisy and missing data, a problem well motivated by practical applications in which data are subject to stochastic Gaussian noise and/or have uniformly missing entries. In the last part, the authors introduce a novel VAR model with Elastic-Net regularization and its equivalent Bayesian model, allowing for both stable sparsity and group selection.
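To illustrate the kind of phenomenon the first part addresses, here is a minimal sketch (not the authors' algorithm) of plain gradient descent with a fixed step size on f(x, y) = x² − y², whose origin is a strict saddle point; a small random perturbation lets the iterate leave the saddle along the negative-curvature direction:

```python
import numpy as np

def grad(p):
    # Gradient of f(x, y) = x^2 - y^2
    x, y = p
    return np.array([2 * x, -2 * y])

rng = np.random.default_rng(0)
p = np.array([1.0, 0.0])                 # starts on the saddle's stable manifold
p = p + rng.normal(scale=1e-3, size=2)   # tiny perturbation off that manifold

step = 0.1                               # fixed step size; its choice governs escape speed
for _ in range(200):
    p = p - step * grad(p)

print(p)  # the x-coordinate contracts toward 0 while |y| grows: the saddle is escaped
```

Without the perturbation, the iterate would converge straight to the saddle point; the step size trades off contraction along stable directions against the rate of escape along unstable ones, which is the trade-off the book analyzes.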


Product Details

  • Pub Date: Jun 26, 2019
  • ISBN-13: 9783030170752
  • ISBN-10: 3030170756
  • Hardcover: 154 pages
  • Language: English