Incremental-learning for robot control

I-Jen Chiang, Jane Yung jen Hsu

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


A robot can learn to act by trial and error in the world. It continuously obtains information about the environment from its sensors and chooses a suitable action to take. Having executed an action, the robot receives a reinforcement signal from the world indicating how well the action performed in that situation. This evaluation is used to adjust the robot's action selection policy for the given state. The process of learning the state-action function has been addressed by Watkins' Q-learning, Sutton's temporal-difference method, and Kaelbling's interval estimation method. One common problem with these reinforcement learning methods is that convergence can be very slow due to the large state space. State clustering by least-square-error or Hamming distance, hierarchical learning architectures, and prioritized sweeping can reduce the number of states, but a large portion of the space still has to be considered. This paper presents a new solution to this problem. A state is taken to be a combination of the robot's sensor statuses. Each sensor is viewed as an independent component. The importance of each sensor status relative to each action is computed based on the frequency of its occurrences. Not all sensors are needed for every action. For example, the forward sensors play the most important roles when the robot is moving forward.
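The abstract's idea can be sketched in code: a tabular Q-learning loop augmented with per-sensor occurrence counts, so that each sensor's importance for an action is its relative frequency. This is a minimal illustrative sketch, not the authors' implementation; all names, parameters, and the epsilon-greedy policy are assumptions.

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1       # assumed learning parameters
ACTIONS = ["forward", "left", "right"]       # assumed action set

q = defaultdict(float)        # Q[(state, action)] -> estimated value
count = defaultdict(int)      # (sensor_index, status, action) -> occurrences
action_total = defaultdict(int)  # action -> total updates seen

def importance(sensor_idx, status, action):
    """Relative frequency of a sensor status given an action."""
    if action_total[action] == 0:
        return 0.0
    return count[(sensor_idx, status, action)] / action_total[action]

def choose_action(state):
    """Epsilon-greedy selection over the tabular Q-values."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[(state, a)])

def update(state, action, reward, next_state):
    """One tabular Q-learning step plus sensor-frequency bookkeeping."""
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                   - q[(state, action)])
    action_total[action] += 1
    for i, status in enumerate(state):   # state = tuple of sensor readings
        count[(i, status, action)] += 1
```

A state here is a tuple of discretized sensor readings; sensors whose `importance` stays low for a given action could be dropped from that action's state representation, shrinking the effective state space in the way the abstract describes.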
Host publication title: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics
Editors: Anon
Publication status: Published - 1995
Event: Proceedings of the 1995 IEEE International Conference on Systems, Man and Cybernetics. Part 2 (of 5) - Vancouver, BC, Can
Duration: Oct 22, 1995 - Oct 25, 1995



ASJC Scopus subject areas

  • Hardware and Architecture
  • Control and Systems Engineering

