Model-Based Reinforcement Learning: Policy Iteration, Value Iteration, and Dynamic Programming

Published 2022-01-07
Here we introduce dynamic programming, which is a cornerstone of model-based reinforcement learning. We demonstrate dynamic programming for policy iteration and value iteration, leading to the quality function and Q-learning.
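
For readers who want to experiment, here is a minimal value-iteration sketch on a toy two-state MDP; the transition probabilities and rewards below are invented for illustration and are not taken from the lecture or the book:

    import numpy as np

    # Hypothetical toy MDP with 2 states and 2 actions (values invented).
    # P[a, s, s2] = probability of landing in state s2 after action a in state s.
    P = np.array([[[0.9, 0.1],
                   [0.2, 0.8]],
                  [[0.5, 0.5],
                   [0.0, 1.0]]])
    # R[a, s, s2] = immediate reward for the transition (s, a, s2).
    R = np.array([[[1.0, 0.0],
                   [0.0, 2.0]],
                  [[0.0, 0.0],
                   [0.0, 5.0]]])
    gamma = 0.9  # discount factor

    # Value iteration: V(s) <- max_a sum_{s2} P(s2|s,a) (R(s2,s,a) + gamma V(s2))
    V = np.zeros(2)
    for _ in range(1000):
        Q = (P * (R + gamma * V)).sum(axis=2)  # Q[a, s], Bellman backup
        V_new = Q.max(axis=0)                  # greedy max over actions
        if np.abs(V_new - V).max() < 1e-10:
            break
        V = V_new

    pi = Q.argmax(axis=0)  # extract the greedy (deterministic) policy
    print("V* =", V, " pi* =", pi)

Policy iteration alternates the analogous policy-evaluation and greedy-improvement steps instead of folding the max into every sweep.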

Citable link for this video: doi.org/10.52843/cassyni.6fs4s9

This is a lecture in a series on reinforcement learning, following the new Chapter 11 from the 2nd edition of our book "Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control" by Brunton and Kutz

Book Website: databookuw.com/
Book PDF: databookuw.com/databook.pdf

Amazon: www.amazon.com/Data-Driven-Science-Engineering-Lea…

Brunton Website: eigensteve.com

This video was produced at the University of Washington

All Comments (21)
  • @kevinchahine7553
    Thank you for making these videos. I'm learning so much! You are such a great explainer.
  • Thank you for simplifying a lot of things. I had read the corresponding chapters of the Sutton and Barto book, but I got more clarity on the practical aspects from this video.
  • I love the way you explain it through the formulas. Most experts tell you the formula and then go to an actual case, which leaves the learner disconnected from the math. Thanks!
  • @ghazal246486
    I've watched other lectures on RL before, but I can understand the formulas much better now. The way you explain formulas is brilliant; you're a wonderful math lecturer.
  • @august4633
    Thank you so much. I've watched a lot of videos and didn't fully get these concepts for some reason. Now I think I finally get it. You're a great teacher.
  • @Moonz97
    Love this series! I hoped the video would go on and on, but it ended too quickly. Can't wait for the next part! Keep up the great work :)
  • @AliRashidi97
    Thanks a lot, Professor Brunton! You're creating great materials!
  • @paaabl0.
    Great and clear explanation, Steve! Thank you.
  • @samueldelsol8101
    Your videos are incredibly well thought out and very educational; I should have known about them sooner. Greetings from Munich, Germany!
  • @asier6734
    Very well structured and laid out, clearly explained, thank you
  • @yiyangshao2003
    This is just awesome, especially for an undergraduate without much prior knowledge of machine learning. Many thanks from a Chinese freshman.
  • @mariogalindoq
    Beautiful. Please continue. Will you explain algorithms like PPO, TD3, DDPG, etc.? If so, I would appreciate each one. It would also be very interesting if you could give your opinion on some RL libraries like ray/RLlib, baselines3, etc. I know this may be much more than what you are thinking of including in this course, but I lose nothing by suggesting these topics to you :) Thank you.
  • @suri6294
    SUPERBBBBBB! Now I understand every inch of the research paper I was reading. Thanks!!!!
  • @RasitEvduzen
    Optimal control, control theory, reinforcement learning, machine learning, system theory, and system identification are an intellectual banquet.
  • Thank you, Prof! This video is really helpful for classifying RL methods. I really appreciate your diagram and your explanation.
  • @micknamens8659
    16:55 The value-iteration update (VI) differs slightly from the Bellman equation (BE) as written: VI takes the max over a single action a, whereas BE maximizes over all policies pi. Because pi is a probabilistic function, yielding a specific action a with a certain probability, the BE form needs another level of summation over a, multiplying the terms by pi(s, a). 20:05 Here we construct pi(s, a) as the argmax of the VI value: we set pi(s, argmax_a) = 1 and pi(s, a') = 0 for every other action a' ≠ argmax_a. This makes pi(s, a) deterministic rather than probabilistic. (The two forms are written out side by side after the comments.)
  • At 3:57, I think the R(s', s, a) function you are referring to is the "reward function", which returns the immediate reward r received when you are in state s and take action a, leading to state s'. That would make more sense than returning a PROBABILITY of a reward r given (s, a, s'). I saw this in your book as well, but I cannot find this kind of function anywhere else. In all other resources I have found, R denotes the immediate reward of taking action a in state s and arriving in state s', not the probability of the reward. Later in the video, when you use it in the value function, you also treat it as the value of the reward, not the probability of the reward, so I think this might really be a mistake. If I'm getting something wrong, please help me clear up my thinking; I'm just being curious. Love your great work. (See the note on reward conventions after the comments.)
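
A note on the comment at 16:55, since the distinction is easy to state precisely. A sketch in LaTeX, writing P(s' | s, a) for the transition probability, gamma for the discount factor, and pi(a | s) for the probability that policy pi selects action a in state s (notation assumed here; the chapter's symbols may differ slightly):

    % Bellman expectation equation for a (possibly stochastic) policy pi:
    V^{\pi}(s) = \sum_{a} \pi(a \mid s) \sum_{s'} P(s' \mid s, a)
                 \left[ R(s', s, a) + \gamma V^{\pi}(s') \right]

    % Value-iteration update, which maximizes over a single action a:
    V(s) \leftarrow \max_{a} \sum_{s'} P(s' \mid s, a)
                    \left[ R(s', s, a) + \gamma V(s') \right]

    % The policy extracted at 20:05 is the greedy argmax, hence deterministic:
    \pi(a \mid s) =
    \begin{cases}
      1 & a = \arg\max_{a'} \sum_{s'} P(s' \mid s, a')
              \left[ R(s', s, a') + \gamma V(s') \right] \\
      0 & \text{otherwise}
    \end{cases}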
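
And on the reward-function question in the last comment: both conventions appear in the literature, and they agree in expectation. A sketch, assuming R(r | s', s, a) denotes a probability distribution over rewards (an assumption about the book's intent, not a quote from it):

    % If the reward r is random with distribution R(r | s', s, a), the
    % expected immediate reward for the transition (s, a, s') is
    r(s', s, a) = \mathbb{E}\,[\, r \mid s', s, a \,]
                = \sum_{r} r \, R(r \mid s', s, a),

    % and it is this expectation that enters the value function, so using
    % R(s', s, a) directly as the (expected) immediate reward gives the same V.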