MARC View
LDR00000nam u2200205 4500
001000000434238
00520200226143033
008200131s2019 ||||||||||||||||| ||eng d
020 ▼a 9781088366899
035 ▼a (MiAaPQ)AAI22587147
040 ▼a MiAaPQ ▼c MiAaPQ ▼d 247004
0820 ▼a 004
1001 ▼a Wen, Min.
24510 ▼a Reinforcement Learning with High-Level Task Specifications.
260 ▼a [S.l.]: ▼b University of Pennsylvania, ▼c 2019.
260 1 ▼a Ann Arbor: ▼b ProQuest Dissertations & Theses, ▼c 2019.
300 ▼a 172 p.
500 ▼a Source: Dissertations Abstracts International, Volume: 81-03, Section: B.
500 ▼a Advisor: Topcu, Ufuk
5021 ▼a Thesis (Ph.D.)--University of Pennsylvania, 2019.
506 ▼a This item must not be sold to any third party vendors.
520 ▼a Reinforcement learning (RL) has been widely used, for example, in robotics, recommendation systems, and financial services. Existing RL algorithms typically optimize reward-based surrogates rather than the task performance itself. Therefore, they suffer from several shortcomings in providing guarantees for the task performance of the learned policies: An optimal policy for a surrogate objective may not have optimal task performance. A reward function that helps achieve satisfactory task performance in one environment may not transfer well to another environment. RL algorithms tackle nonlinear and nonconvex optimization problems and may, in general, not be able to find globally optimal policies. The goal of this dissertation is to develop RL algorithms that explicitly account for formal high-level task specifications and equip the learned policies with provable guarantees for the satisfaction of these specifications. The resulting RL and inverse RL algorithms utilize multiple representations of task specifications, including conventional reward functions, expert demonstrations, temporal logic formulas, and trajectory-based constraint functions, as well as their combinations. These algorithms offer several promising capabilities. First, they automatically generate a memory transition system, which is critical for tasks that cannot be implemented by memoryless policies. Second, the formal specifications can act as reliable performance criteria for the learned policies regardless of the quality of the designed reward functions and of variations in the underlying environments. Third, the algorithms enable online RL that never violates critical task and safety requirements, even during exploration.
590 ▼a School code: 0175.
650 4 ▼a Artificial intelligence.
650 4 ▼a Computer science.
690 ▼a 0800
690 ▼a 0984
71020 ▼a University of Pennsylvania. ▼b Electrical and Systems Engineering.
7730 ▼t Dissertations Abstracts International ▼g 81-03B.
773 ▼t Dissertations Abstracts International
790 ▼a 0175
791 ▼a Ph.D.
792 ▼a 2019
793 ▼a English
85640 ▼u http://www.riss.kr/pdu/ddodLink.do?id=T15492969 ▼n KERIS ▼z The full text of this material is provided by the Korea Education and Research Information Service (KERIS).
980 ▼a 202002 ▼f 2020
990 ▼a ***1008102
991 ▼a E-BOOK