Detailed Bibliographic Record

Probabilistic machine learning : advanced topics [electronic resource]

  • Author: Murphy, Kevin P., 1970-
  • Other title:
    • Adaptive Computation and Machine Learning Series.
  • Publication: Cambridge, Massachusetts : The MIT Press
  • Series: Adaptive Computation and Machine Learning Series
  • Subjects: Machine learning ; Probabilities ; Apprentissage automatique ; Probabilités
  • ISBN: 9780262376006 (electronic bk.), 0262376008, 9780262375993, 0262375990
  • FIND@SFXID: CGU
  • Material type: Electronic book
  • Contents note: Includes bibliographical references and index. Intro -- Copyright -- Preface -- 1 Introduction -- I Fundamentals -- 2 Probability -- 2.1 Introduction -- 2.1.1 Probability space -- 2.1.2 Discrete random variables -- 2.1.3 Continuous random variables -- 2.1.4 Probability axioms -- 2.1.5 Conditional probability -- 2.1.6 Bayes' rule -- 2.2 Some common probability distributions -- 2.2.1 Discrete distributions -- 2.2.2 Continuous distributions on ℝ -- 2.2.3 Continuous distributions on ℝ+ -- 2.2.4 Continuous distributions on [0, 1] -- 2.2.5 Multivariate continuous distributions -- 2.3 Gaussian joint distributions -- 2.3.1 The multivariate normal -- 2.3.2 Linear Gaussian systems -- 2.3.3 A general calculus for linear Gaussian systems -- 2.4 The exponential family -- 2.4.1 Definition -- 2.4.2 Examples -- 2.4.3 Log partition function is cumulant generating function -- 2.4.4 Canonical (natural) vs mean (moment) parameters -- 2.4.5 MLE for the exponential family -- 2.4.6 Exponential dispersion family -- 2.4.7 Maximum entropy derivation of the exponential family -- 2.5 Transformations of random variables -- 2.5.1 Invertible transformations (bijections) -- 2.5.2 Monte Carlo approximation -- 2.5.3 Probability integral transform -- 2.6 Markov chains -- 2.6.1 Parameterization -- 2.6.2 Application: language modeling -- 2.6.3 Parameter estimation -- 2.6.4 Stationary distribution of a Markov chain -- 2.7 Divergence measures between probability distributions -- 2.7.1 f-divergence -- 2.7.2 Integral probability metrics -- 2.7.3 Maximum mean discrepancy (MMD) -- 2.7.4 Total variation distance -- 2.7.5 Density ratio estimation using binary classifiers -- 3 Statistics -- 3.1 Introduction -- 3.2 Bayesian statistics -- 3.2.1 Tossing coins -- 3.2.2 Modeling more complex data -- 3.2.3 Selecting the prior -- 3.2.4 Computational issues -- 3.2.5 Exchangeability and de Finetti's theorem -- 3.3 Frequentist statistics -- 3.3.1 Sampling distributions -- 3.3.2 Bootstrap approximation of the sampling distribution -- 3.3.3 Asymptotic normality of the sampling distribution of the MLE -- 3.3.4 Fisher information matrix -- 3.3.5 Counterintuitive properties of frequentist statistics -- 3.3.6 Why isn't everyone a Bayesian? -- 3.4 Conjugate priors -- 3.4.1 The binomial model -- 3.4.2 The multinomial model -- 3.4.3 The univariate Gaussian model -- 3.4.4 The multivariate Gaussian model -- 3.4.5 The exponential family model -- 3.4.6 Beyond conjugate priors -- 3.5 Noninformative priors -- 3.5.1 Maximum entropy priors -- 3.5.2 Jeffreys priors -- 3.5.3 Invariant priors -- 3.5.4 Reference priors -- 3.6 Hierarchical priors -- 3.6.1 A hierarchical binomial model -- 3.6.2 A hierarchical Gaussian model -- 3.6.3 Hierarchical conditional models -- 3.7 Empirical Bayes -- 3.7.1 EB for the hierarchical binomial model -- 3.7.2 EB for the hierarchical Gaussian model -- 3.7.3 EB for Markov models (n-gram smoothing) -- 3.7.4 EB for non-conjugate models -- 3.8 Model selection -- 3.8.1 Bayesian model selection
  • Summary note: "An advanced book for researchers and graduate students working in machine learning and statistics that reflects the influence of deep learning"--
  • System number: 005527710 | MARC format
  • Holdings information
