MARC View
LDR 03179nam u200373 4500
001 000000420279
005 20190215164342
008 190108s2018 |||||||||||||||||c||eng d
020 ▼a 9780438375369
035 ▼a (MiAaPQ)AAI10840966
035 ▼a (MiAaPQ)purdue:23014
040 ▼a MiAaPQ ▼c MiAaPQ ▼d 247004
0820 ▼a 001.5
1001 ▼a Nguyen, Thanh Minh.
24510 ▼a Selectively Decentralized Reinforcement Learning.
260 ▼a [S.l.]: ▼b Purdue University, ▼c 2018.
260 1 ▼a Ann Arbor: ▼b ProQuest Dissertations & Theses, ▼c 2018.
300 ▼a 157 p.
500 ▼a Source: Dissertation Abstracts International, Volume: 80-02(E), Section: B.
500 ▼a Adviser: Snehasis Mukhopadhyay.
5021 ▼a Thesis (Ph.D.)--Purdue University, 2018.
520 ▼a The main contributions of this thesis are the selectively decentralized method for solving multi-agent reinforcement learning problems and the discretized Markov-decision-process (MDP) algorithm for computing a sub-optimal learning policy in completely unknown learning and control problems. These contributions address several challenges in multi-agent reinforcement learning: the unknown and dynamic nature of the learning environment, the difficulty of computing a closed-form solution to the learning problem, slow learning performance in large-scale systems, and the questions of how, when, and to whom the learning agents should communicate. The selectively decentralized method, which evaluates all possible communication strategies, not only increases learning speed and achieves better learning goals but also learns a communication policy for each learning agent. Compared with other state-of-the-art approaches, the contributions of this thesis offer two advantages. First, the selectively decentralized method can incorporate a wide range of well-known single-agent reinforcement learning algorithms, including the discretized MDP, whereas state-of-the-art approaches are usually applicable to only one class of algorithms. Second, the discretized MDP algorithm can compute a sub-optimal learning policy when the environment is described in a general nonlinear form, whereas other state-of-the-art approaches often assume that the environment has a restricted form, particularly a feedback-linearizable form. This thesis also discusses several alternative approaches to multi-agent learning, including Multidisciplinary Optimization. In addition, it shows how the selectively decentralized method can successfully solve several real-world problems, particularly in mechanical and biological systems.
590 ▼a School code: 0183.
650 4 ▼a Artificial intelligence.
690 ▼a 0800
71020 ▼a Purdue University. ▼b Computer Sciences.
7730 ▼t Dissertation Abstracts International ▼g 80-02B(E).
773 ▼t Dissertation Abstracts International
790 ▼a 0183
791 ▼a Ph.D.
792 ▼a 2018
793 ▼a English
85640 ▼u http://www.riss.kr/pdu/ddodLink.do?id=T15013680 ▼n KERIS ▼z The full text of this material is provided by the Korea Education and Research Information Service (KERIS).
980 ▼a 201812 ▼f 2019
990 ▼a ***1012033