Frontiers in Brain-Inspired AI (Spring 2018, Bis800)
How do machines and brains solve optimal control problems? This course explores frontiers in modern brain-inspired artificial intelligence. The first part of the course discusses how to solve the inverse problem; topics include neural networks, deep learning, and cortical information processing. The second part discusses how to solve the optimal control problem; students are expected to learn the basic theory, algorithms, and neuroscience of reinforcement learning (RL). We will also have in-depth discussions of recent advances in the neuroscience of RL and in deep RL.
Classroom: #215 (E16 ChungMoonSul Bldg.)
Lecture hours: Monday and Wednesday, 10:30–12:00
Instructor: Sang Wan Lee (firstname.lastname@example.org, Rm. #516, E16-1)
Office hours: Tuesday and Thursday, 11:00–11:55
Credits: 3 units (3:0:0)
Prerequisites: Linear algebra and probability (or equivalent)
Grading: Attendance (30%), mid-term exam (20%), presentation (30%), final term project (20%)
Textbooks: Lecture materials (70%) and a few chapters of the following (30%):
- S. Haykin, Neural Networks and Learning Machines, Prentice Hall, 2009.
- R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, MIT Press, 1998.
- D. P. Bertsekas, Dynamic Programming and Optimal Control, Vol. II: Approximate Dynamic Programming, Athena Scientific, 2012.