OPTIMIZING EXPECTATIONS: FROM DEEP REINFORCEMENT LEARNING TO STOCHASTIC COMPUTATION GRAPHS


Bibliography

[Lev+16] S. Levine, C. Finn, T. Darrell, and P. Abbeel. "End-to-end training of deep visuomotor policies." In: Journal of Machine Learning Research 17.39 (2016), pp. 1–40 (cit. on pp. 3, 86).

[Lil+15] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. "Continuous control with deep reinforcement learning." In: arXiv preprint arXiv:1509.02971 (2015) (cit. on pp. 3, 5, 61, 62).

[Lin93] L.-J. Lin. Reinforcement learning for robots using neural networks. Tech. rep. DTIC Document, 1993 (cit. on p. 2).

[MT03] P. Marbach and J. N. Tsitsiklis. "Approximate gradient methods in policy-space optimization of Markov reward processes." In: Discrete Event Dynamic Systems 13.1-2 (2003), pp. 111–148 (cit. on p. 47).

[MS12] J. Martens and I. Sutskever. "Training deep and recurrent networks with Hessian-free optimization." In: Neural Networks: Tricks of the Trade. Springer, 2012, pp. 479–535 (cit. on p. 40).

[Mar10] J. Martens. "Deep learning via Hessian-free optimization." In: Proceedings of the 27th International Conference on Machine Learning (ICML-10). 2010, pp. 735–742 (cit. on pp. 65, 73).

[MG14] A. Mnih and K. Gregor. "Neural variational inference and learning in belief networks." In: arXiv preprint arXiv:1402.0030 (2014) (cit. on pp. 64, 76, 80, 81).

[Mni+13] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. "Playing Atari with Deep Reinforcement Learning." In: arXiv preprint arXiv:1312.5602 (2013) (cit. on pp. 3, 5, 32, 33).

[Mni+14] V. Mnih, N. Heess, A. Graves, and K. Kavukcuoglu. "Recurrent models of visual attention." In: Advances in Neural Information Processing Systems. 2014, pp. 2204–2212 (cit. on pp. 64, 83).

[Mni+15] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, et al. "Human-level control through deep reinforcement learning." In: Nature 518.7540 (2015), pp. 529–533 (cit. on p. 4).

[Mni+16] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. "Asynchronous methods for deep reinforcement learning." In: arXiv preprint arXiv:1602.01783 (2016) (cit. on pp. 3, 17).

[Mol+15] T. M. Moldovan, S. Levine, M. I. Jordan, and P. Abbeel. "Optimism-driven exploration for nonlinear systems." In: 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE. 2015, pp. 3239–3246 (cit. on p. 86).
