Detailed Bibliographic Record

Source: Google Books

Simulation-based optimization [electronic resource] : parametric optimization techniques and reinforcement learning

  • Author: Gosavi, Abhijit.
  • Other titles:
    • Operations research/computer science interfaces series
  • Published: Boston, MA : Springer US : Imprint: Springer
  • Edition: 2nd ed.
  • Series: Operations research/computer science interfaces series ; v. 55
  • Subjects: Probabilities ; Mathematical optimization ; Economics/Management Science ; Operation Research/Decision Theory ; Operations Research, Management Science ; Simulation and Modeling
  • ISBN: 9781489974914 (electronic bk.) ; 9781489974907 (paper)
  • FIND@SFXID: CGU
  • Material type: e-book
  • Contents: Background -- Simulation basics -- Simulation optimization: an overview -- Response surfaces and neural nets -- Parametric optimization -- Dynamic programming -- Reinforcement learning -- Stochastic search for controls -- Convergence: background material -- Convergence: parametric optimization -- Convergence: control optimization -- Case studies.
  • Summary: Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of static and dynamic simulation-based optimization. Covered in detail are model-free optimization techniques, especially designed for those discrete-event, stochastic systems which can be simulated but whose analytical models are difficult to find in closed mathematical forms. Key features of this revised and improved Second Edition include: · Extensive coverage, via step-by-step recipes, of powerful new algorithms for static simulation optimization, including simultaneous perturbation, backtracking adaptive search, and nested partitions, in addition to traditional methods, such as response surfaces, Nelder-Mead search, and meta-heuristics (simulated annealing, tabu search, and genetic algorithms) · Detailed coverage of the Bellman equation framework for Markov Decision Processes (MDPs), along with dynamic programming (value and policy iteration) for discounted, average, and total reward performance metrics · An in-depth consideration of dynamic simulation optimization via temporal differences and Reinforcement Learning: Q-Learning, SARSA, and R-SMART algorithms, and policy search, via API, Q-P-Learning, actor-critics, and learning automata · A special examination of neural-network-based function approximation for Reinforcement Learning, semi-Markov decision processes (SMDPs), finite-horizon problems, two time scales, case studies for industrial tasks, computer codes (placed online), and convergence proofs, via Banach fixed point theory and Ordinary Differential Equations. Themed around three areas in separate sets of chapters (Static Simulation Optimization, Reinforcement Learning, and Convergence Analysis), this book is written for researchers and students in the fields of engineering (industrial, systems, electrical, and computer), operations research, computer science, and applied mathematics.
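    The summary above names tabular Q-Learning as one of the book's core reinforcement-learning algorithms. As a rough illustration of the idea (not the book's own code, which is placed online by the author), the following is a minimal sketch of discounted-reward Q-Learning with epsilon-greedy exploration on a hypothetical two-state MDP; the function names, the toy MDP, and all parameter values are assumptions made for this example.

    ```python
    import random

    def q_learning(transition, n_states, n_actions, episodes=5000,
                   alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
        """Tabular Q-Learning on a small MDP.

        transition(s, a, rng) must return (next_state, reward); the MDP is
        accessed only through samples, i.e. in a model-free, simulation-based way.
        """
        rng = random.Random(seed)
        Q = [[0.0] * n_actions for _ in range(n_states)]
        s = 0
        for _ in range(episodes):
            # epsilon-greedy: explore with probability epsilon, else act greedily
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda act: Q[s][act])
            s2, r = transition(s, a, rng)
            # Q-Learning update for the discounted-reward criterion
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
        return Q

    def toy_mdp(s, a, rng):
        # Hypothetical two-state chain: in state 0, action 1 earns +1 and
        # moves to state 1; every other (state, action) pair earns 0 and
        # returns the system to state 0.
        if s == 0 and a == 1:
            return 1, 1.0
        return 0, 0.0

    Q = q_learning(toy_mdp, n_states=2, n_actions=2)
    ```

    After training, the greedy policy derived from `Q` picks action 1 in state 0, which is the optimal action in this toy chain.
    
    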
  • System no.: 005128341 | MARC record
  • Holdings information

