MARC View
LDR00000nam u2200205 4500
001000000434075
00520200226140435
008200131s2019 ||||||||||||||||| ||eng d
020 ▼a 9781085627337
035 ▼a (MiAaPQ)AAI13862820
040 ▼a MiAaPQ ▼c MiAaPQ ▼d 247004
0820 ▼a 621
1001 ▼a Masadeh, Ala'eddin.
24510 ▼a Enhancing the Performance of Energy Harvesting Wireless Communications Using Optimization and Machine Learning.
260 ▼a [S.l.]: ▼b Iowa State University, ▼c 2019.
260 1 ▼a Ann Arbor: ▼b ProQuest Dissertations & Theses, ▼c 2019.
300 ▼a 115 p.
500 ▼a Source: Dissertations Abstracts International, Volume: 81-03, Section: B.
500 ▼a Advisor: Kamal, Ahmed E.
5021 ▼a Thesis (Ph.D.)--Iowa State University, 2019.
506 ▼a This item must not be sold to any third party vendors.
506 ▼a This item is not available from ProQuest Dissertations & Theses.
520 ▼a This thesis develops efficient solutions for energy harvesting communications. First, an underlay cognitive radio relaying network whose secondary network harvests energy is investigated, and closed-form expressions are derived for the transmission powers of the secondary source and relay that maximize the secondary network throughput. Second, a communications system with a source capable of harvesting solar energy is considered under two cases, depending on whether knowledge of the underlying environmental processes is available. When it is, an algorithm exploiting this knowledge is designed to maximize the expected throughput while reducing the complexity of traditional methods; when it is not, reinforcement learning is used. Third, three reinforcement learning architectures are introduced: selector-actor-critic, tuner-actor-critic, and estimator-selector-actor-critic. The selector-actor-critic architecture increases the speed and efficiency of learning an optimal policy by approximating the most promising action at the current state. The tuner-actor-critic improves the learning process by providing the actor with a more accurate estimate of the value function. The estimator-selector-actor-critic supports intelligent agents by mimicking the way rational humans analyze available information and make decisions. An energy harvesting communications system operating in an unknown environment is then evaluated under each of the proposed architectures. Fourth, a realistic energy harvesting communications system whose underlying Markov decision process has continuous state and action spaces is investigated. An actor-critic method optimizes system performance: the critic uses a neural network to approximate the action-value function, and the actor uses policy gradient to optimize the policy parameters for maximum throughput.
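520 ▼a [Editor's illustration] The fourth contribution above (a neural-network critic approximating the action-value function, with a policy-gradient actor choosing a continuous transmit power) can be pictured with a minimal sketch. This is not the author's implementation: the toy battery and channel dynamics, the log-throughput reward, the network sizes, the battery cap, and all hyperparameters are illustrative assumptions.

# Minimal actor-critic sketch for an energy harvesting transmitter.
# All environment details and hyperparameters below are assumptions,
# not taken from the thesis.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

class Critic(nn.Module):
    # Q(s, a): neural network approximating the action-value function.
    def __init__(self, state_dim=2, action_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

class Actor(nn.Module):
    # Gaussian policy over transmit power; state = (battery level, channel gain).
    def __init__(self, state_dim=2):
        super().__init__()
        self.mean = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, 1))
        self.log_std = nn.Parameter(torch.zeros(1))
    def dist(self, s):
        return torch.distributions.Normal(self.mean(s), self.log_std.exp())

def env_step(state, power):
    # Hypothetical dynamics: spend clamped power, earn log(1 + power * gain),
    # harvest a random amount of energy, draw a fresh channel gain.
    battery, gain = state
    power = min(max(power, 0.0), battery)            # cannot spend more than stored
    reward = math.log1p(power * gain)                # Shannon-style throughput proxy
    harvest = torch.rand(1).item()
    next_state = (min(battery - power + harvest, 5.0), 2.0 * torch.rand(1).item())
    return next_state, reward

actor, critic = Actor(), Critic()
a_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
c_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
gamma, state = 0.95, (2.5, 1.0)

for t in range(2000):
    s = torch.tensor(state)
    d = actor.dist(s)
    a = d.sample()
    next_state, r = env_step(state, a.item())
    s2 = torch.tensor(next_state)
    with torch.no_grad():                            # one-step TD target
        td_target = r + gamma * critic(s2, actor.dist(s2).sample())
    # Critic: regress Q(s, a) toward the TD target.
    c_loss = (critic(s, a) - td_target).pow(2).mean()
    c_opt.zero_grad(); c_loss.backward(); c_opt.step()
    # Actor: policy gradient, with the TD error as the advantage signal.
    with torch.no_grad():
        adv = td_target - critic(s, a)
    a_loss = -(d.log_prob(a) * adv).mean()
    a_opt.zero_grad(); a_loss.backward(); a_opt.step()
    state = next_state

Using the TD error as the advantage signal keeps the actor update simple without a separate baseline network; this is one common design choice, not necessarily the one used in the thesis.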
590 ▼a School code: 0097.
650 4 ▼a Electrical engineering.
650 4 ▼a Computer engineering.
690 ▼a 0544
690 ▼a 0464
71020 ▼a Iowa State University. ▼b Electrical and Computer Engineering.
7730 ▼t Dissertations Abstracts International ▼g 81-03B.
790 ▼a 0097
791 ▼a Ph.D.
792 ▼a 2019
793 ▼a English
85640 ▼u http://www.riss.kr/pdu/ddodLink.do?id=T15490978 ▼n KERIS ▼z The full text of this material is provided by the Korea Education and Research Information Service (KERIS).
980 ▼a 202002 ▼f 2020
990 ▼a ***1816162
991 ▼a E-BOOK