Hyangsan Library, Daegu Haany University


Enhancing the Performance of Energy Harvesting Wireless Communications Using Optimization and Machine Learning


Detailed Information
Material Type: Dissertation
Title/Author: Enhancing the Performance of Energy Harvesting Wireless Communications Using Optimization and Machine Learning.
Personal Author: Masadeh, Ala'eddin.
Corporate Author: Iowa State University. Electrical and Computer Engineering.
Publication: [S.l.]: Iowa State University, 2019.
Publication: Ann Arbor: ProQuest Dissertations & Theses, 2019.
Physical Description: 115 p.
Related Title: Dissertations Abstracts International 81-03B.
ISBN: 9781085627337
Dissertation Note: Thesis (Ph.D.)--Iowa State University, 2019.
General Note: Source: Dissertations Abstracts International, Volume: 81-03, Section: B.
Advisor: Kamal, Ahmed E.
Use Restrictions: This item must not be sold to any third party vendors. This item is not available from ProQuest Dissertations & Theses.
Abstract: The motivation behind this thesis is to provide efficient solutions for energy harvesting communications.

First, an energy harvesting underlay cognitive radio relaying network is investigated, in which the secondary network harvests energy. Closed-form expressions are derived for the transmission powers of the secondary source and relay that maximize the secondary network throughput.

Second, a practical scenario is investigated in terms of the information available about the environment. We consider a communications system whose source can harvest solar energy. Two cases are considered, based on whether knowledge of the underlying processes is available. When this knowledge is available, an algorithm that exploits it is designed to maximize the expected throughput while reducing the complexity of traditional methods. When it is unavailable, reinforcement learning is used.

Third, several learning architectures for reinforcement learning are introduced: selector-actor-critic, tuner-actor-critic, and estimator-selector-actor-critic. The selector-actor-critic architecture aims to increase the speed and efficiency of learning an optimal policy by approximating the most promising action at the current state. The tuner-actor-critic improves the learning process by providing the actor with a more accurate estimate of the value function. The estimator-selector-actor-critic is introduced to support intelligent agents; it mimics the way rational humans analyze available information and make decisions. An energy harvesting communications system operating in an unknown environment is then evaluated when supported by the proposed architectures.

Fourth, a realistic energy harvesting communications system is investigated, in which the state and action spaces of the underlying Markov decision process are continuous.
Actor-critic is used to optimize the system performance: the critic uses a neural network to approximate the action-value function, and the actor uses policy gradient to optimize the policy parameters so as to maximize the throughput.
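To illustrate the actor-critic idea described above, the following is a minimal sketch of one-step actor-critic on a toy energy-harvesting link. The battery dynamics, harvesting process, reward, and the linear critic are illustrative assumptions for this sketch only; the thesis itself uses a neural-network critic and a more realistic system model.

```python
import numpy as np

# One-step actor-critic on a toy energy-harvesting link (illustrative only).
# State: battery level b in [0, 1]. Action: fraction of stored energy spent
# on transmission. Reward: log(1 + spent energy), a stand-in for throughput.

rng = np.random.default_rng(0)

def features(b):
    # Simple polynomial features for the linear critic (assumed basis).
    return np.array([1.0, b, b * b])

def step(b, a):
    spent = a * b                    # energy used for transmission
    harvest = rng.uniform(0.0, 0.2)  # assumed i.i.d. harvesting process
    b_next = min(1.0, b - spent + harvest)
    reward = np.log1p(spent)         # throughput-like reward
    return b_next, reward

def policy(w, b):
    # Gaussian policy over the transmit fraction, mean squashed into (0, 1).
    mean = 1.0 / (1.0 + np.exp(-w @ features(b)))
    a = np.clip(rng.normal(mean, 0.1), 0.0, 1.0)
    return a, mean

def train(episodes=200, steps=50, gamma=0.95, a_lr=0.01, c_lr=0.05):
    w = np.zeros(3)      # actor (policy) parameters
    theta = np.zeros(3)  # critic parameters (linear value function)
    returns = []
    for _ in range(episodes):
        b, total = 0.5, 0.0
        for _ in range(steps):
            a, mean = policy(w, b)
            b_next, r = step(b, a)
            # TD error drives both the critic and the actor updates.
            delta = r + gamma * (theta @ features(b_next)) - theta @ features(b)
            theta += c_lr * delta * features(b)
            # Policy gradient: grad of log N(a; mean, sigma=0.1) w.r.t. w.
            grad_logpi = (a - mean) / 0.01 * mean * (1 - mean) * features(b)
            w += a_lr * delta * grad_logpi
            b, total = b_next, total + r
        returns.append(total)
    return w, theta, returns
```

The linear critic here stands in for the neural-network action-value approximator of the thesis; the TD error plays the same role either way, scoring each sampled action so the policy-gradient update pushes the actor toward higher throughput.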
Subjects: Electrical engineering.
Computer engineering.
Language: English
Link: URL: The full text of this item is provided by the Korea Education and Research Information Service.
