Supplementary Online Textbook
Reinforcement Learning: An Introduction - Draft
Richard Sutton and Andrew G. Barto
Second Edition, in progress
Note on Textbook: The textbook is freely available online and can also be purchased in print. The course lecture slides from the instructor are relatively self-contained, but the supplementary textbook offers many valuable perspectives and examples. Note that the mathematical notation in the book and the course slides will not always be consistent.
Instructor Office Hours: Wednesday nights from 9pm-10pm via WebEx at: https://oregonstate.webex.com/meet/afernoregonstate.edu
TA Office Hours: (Vahid Ghadakchi) Monday and Wednesday 2-3 in Kelley Atrium
In this course we will study models and algorithms for automated planning and decision making. The course will be divided into three main sections.
1) We will study planning in the context of Markov decision processes (MDPs) where the environment is allowed to be stochastic. We will cover the basic theory and algorithms for explicit state-space MDPs for exactly solving small to moderately sized problems.
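To give a flavor of this first section, below is a small illustrative sketch of value iteration, a standard exact algorithm for explicit state-space MDPs. The two-state MDP here is a hypothetical example invented for illustration, not taken from the course materials.

```python
# Value iteration on a tiny hypothetical 2-state MDP.
# P[s][a] is a list of (probability, next_state, reward) transitions.
import numpy as np

P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 1.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 2.0)]},
}
gamma = 0.9          # discount factor
V = np.zeros(2)      # value estimates, initialized to zero

# Repeatedly apply the Bellman optimality backup until the
# values stop changing (value iteration is a contraction, so
# this converges for gamma < 1).
for _ in range(1000):
    V_new = np.array([
        max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
            for a in P[s])
        for s in P
    ])
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

print(V)  # optimal state values of the toy MDP
```

In this toy problem, state 1 can earn reward 2 forever under action 1, so its optimal value is 2 / (1 - 0.9) = 20, and state 0's value follows from its Bellman equation.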
2) We will study the basic theory and algorithms for reinforcement learning (RL), where the agent is not given a model of the environment, but instead must learn to act by directly interacting with the environment. We will learn about model-based approaches and the two primary model-free RL paradigms: temporal-difference learning and policy gradient methods. The course will study how these paradigms can be applied to learn both linear and non-linear agent architectures (including what is now known as Deep RL).
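As a preview of the temporal-difference paradigm, here is a hedged sketch of tabular Q-learning on a hypothetical 5-state chain environment (the environment and all constants are invented for illustration; course assignments may differ).

```python
# Tabular Q-learning on a hypothetical 1-D chain: states 0..4,
# start at 0, reaching state 4 yields reward 1 and ends the episode.
import random

N = 5

def step(s, a):
    """a=0 moves left, a=1 moves right; episode ends at the right end."""
    s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
    done = (s2 == N - 1)
    return s2, (1.0 if done else 0.0), done

Q = [[0.0, 0.0] for _ in range(N)]
alpha, gamma, eps = 0.5, 0.95, 0.1
random.seed(0)

for episode in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection with random tie-breaking.
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: (Q[s][x], random.random()))
        s2, r, done = step(s, a)
        # TD update: move Q(s,a) toward the bootstrapped target.
        target = r + (0.0 if done else gamma * max(Q[s2]))
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

print([max(q) for q in Q])  # learned state values along the chain
```

Note that the agent never sees the transition model inside step(); it learns purely from sampled transitions, which is the defining feature of model-free RL.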
3) We will study the area of Monte-Carlo planning, which is a middle ground between reinforcement learning and MDP planning, where a simulator of the system to be controlled is available and can be used to make intelligent action choices.
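The simplest instance of Monte-Carlo planning is rollout-based action selection: estimate each action's value by averaging returns of simulated trajectories, then act greedily. The sketch below uses a made-up simulator purely for illustration; it is not from the course.

```python
# Monte-Carlo rollout planning with a hypothetical simulator.
# The planner only needs to *sample* transitions, not enumerate them.
import random

def simulate(s, a, rng):
    """Hypothetical black-box simulator: returns (next_state, reward)."""
    if a == 1:
        return s + 1, 1.0 if rng.random() < 0.7 else 0.0
    return s, 0.3

def rollout_value(s, a, rng, depth=10, gamma=0.95):
    """Discounted return of one trajectory: take action a first,
    then follow a uniform-random rollout policy for `depth` steps."""
    total, discount = 0.0, 1.0
    for _ in range(depth):
        s, r = simulate(s, a, rng)
        total += discount * r
        discount *= gamma
        a = rng.randrange(2)  # random default policy thereafter
    return total

def choose_action(s, n_rollouts=2000, seed=0):
    """Average many rollouts per action and pick the best."""
    rng = random.Random(seed)
    values = {a: sum(rollout_value(s, a, rng) for _ in range(n_rollouts))
                 / n_rollouts
              for a in (0, 1)}
    return max(values, key=values.get)
```

More sophisticated Monte-Carlo planners (e.g. tree search methods) refine this idea, but the core loop of simulate, average, and select is the same.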
There will be a number of assignments, which will involve some amount of implementation of algorithms and experimentation.
This year we are experimenting with the use of Intel's DevCloud for assignments, which will let us consider distributed (multi-core) implementations.
No prior distributed programming experience is needed; Python will be the required language for this course.
There will be several sets of written questions posted for students to work through, with solutions made available a week after posting. The written questions will not be graded, but understanding the concepts they raise will be important for doing well on the quizzes.
There will be three in-class quizzes, announced at least a week ahead of time. The quizzes will cover the conceptual and theoretical material taught in class.
The final grade will be calculated as follows: Implementation Assignments 70%, Quizzes 30%
Pairs of students may work together on the implementation assignments, but students may work individually if they prefer. The instructor and TAs will actively check for copying of code and solutions; the work you (or your team) turn in must be your own. Any violation of these rules will result in failing the course.