Proceedings Vol. 10 (2004)
ENGINEERING MECHANICS 2004
May 10 – 13, 2004, Svratka, Czech Republic
Copyright © 2004 Institute of Thermomechanics, Academy of Sciences of the Czech Republic, Prague
ISSN 1805-8248 (printed)
ISSN 1805-8256 (electronic)
pages 307 (+6 p.)
The standard Q-learning algorithm is limited to discrete states and actions, and the Q-function is usually represented as a discrete table. To overcome this limitation and extend Q-learning to continuous states and actions, the algorithm must be modified; such a modification is presented in this paper. A straightforward approach is to replace the discrete table with a suitable approximator. A number of approximators can be used; with respect to memory and computational requirements, a local approximator is particularly favorable. We have used the Locally Weighted Regression (LWR) algorithm. The paper discusses the advantages and disadvantages of the modified algorithm, demonstrated on a simple control task.
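The abstract's idea of replacing the discrete Q-table with a local approximator can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name `LWRQApproximator`, the Gaussian kernel, and the bandwidth value are all assumptions made for the example.

```python
import numpy as np

class LWRQApproximator:
    """Q-value store queried by Locally Weighted Regression (LWR).

    Hypothetical sketch: each observed (state, action) pair and its Q-value
    target are kept as a sample; a query fits a weighted linear model
    around the query point instead of looking up a discrete table cell.
    """

    def __init__(self, bandwidth=0.5):
        self.bandwidth = bandwidth  # Gaussian kernel width (assumed value)
        self.X = []                 # stored (state, action) vectors
        self.y = []                 # stored Q-value targets

    def add_sample(self, state_action, q_value):
        self.X.append(np.asarray(state_action, dtype=float))
        self.y.append(float(q_value))

    def predict(self, state_action):
        if not self.X:
            return 0.0  # neutral default before any samples are stored
        X = np.array(self.X)
        y = np.array(self.y)
        q = np.asarray(state_action, dtype=float)
        # Gaussian kernel: samples near the query dominate the local fit
        d2 = np.sum((X - q) ** 2, axis=1)
        sw = np.sqrt(np.exp(-d2 / (2.0 * self.bandwidth ** 2)))
        # Weighted least squares with a bias column (local linear model)
        A = np.hstack([X, np.ones((len(X), 1))])
        beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
        return float(np.append(q, 1.0) @ beta)
```

In a continuous-state Q-learning loop, `predict` would replace the table lookup when forming the temporal-difference target, and `add_sample` would replace the table write, with the local fit providing generalization between visited state-action points.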
Text and facts may be copied and used freely, but credit should be given to these Proceedings.
All papers were reviewed by members of the scientific committee.