Material type | Dissertation |
---|---|
Title/Author | Integrating Analogical Similarity and Error-driven Learning for Relational Concept Acquisition. |
Personal author | Foster, James Michael. |
Corporate author | University of Colorado at Boulder. Psychology. |
Publication | [S.l.]: University of Colorado at Boulder, 2019. |
Publication | Ann Arbor: ProQuest Dissertations & Theses, 2019. |
Physical description | 140 p. |
Source record | Dissertations Abstracts International 81-04A. |
ISBN | 9781088301753 |
Dissertation note | Thesis (Ph.D.)--University of Colorado at Boulder, 2019. |
General note | Source: Dissertations Abstracts International, Volume: 81-04, Section: A. Advisor: Jones, Matt. |
Use restrictions | This item must not be sold to any third-party vendors. |
Abstract | How can people and machines learn new relational concepts, such as the concept of a 'fork' in chess or the grammar of sentences? This work develops a theoretical and computational framework for how such concepts are learned and applied. The framework integrates established principles of cognition (analogy and error-driven learning) and explores their computational power and empirical validity. The first chapter presents computational models and simulation results in the domain of two-player adversarial games. These models demonstrate how a synthesis of analogy and reinforcement learning (RL) provides a framework for constructing abstract relational concepts and evaluating their usefulness. The second chapter describes experiments with human participants that qualitatively test the model's predictions, demonstrating how reward feedback and frequency affect the reinforcement of relational concepts. The third chapter extends the framework to model grammar learning in language processing, demonstrating the viability of tensor-based representations (holographic reduced representations, HRRs) as the interface between an analogical meaning system and a recurrent neural network (RNN) sequencing system in the domain of sentence production. Together, these models offer solutions to the three-decade-old challenge of unifying the flexibility and fluidity of deep learning with the expressive power of compositional representations. |
General subject | Artificial intelligence. Cognitive psychology. Language. |
Language | English |
Link | The full text of this item is provided by the Korea Education and Research Information Service (KERIS). |
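The abstract's HRR-based interface relies on binding role and filler vectors with circular convolution. A minimal illustrative sketch of that operation in NumPy follows; the vector names and dimensionality are assumptions for this demo, not details taken from the dissertation itself.

```python
import numpy as np

# Holographic Reduced Representations (HRRs): bind a role and a filler
# into one fixed-width vector by circular convolution, then recover the
# filler by correlating the trace with the role.
rng = np.random.default_rng(0)
n = 1024  # assumed dimensionality for the demo

def normalize(v):
    return v / np.linalg.norm(v)

role = normalize(rng.standard_normal(n))    # e.g. a grammatical role
filler = normalize(rng.standard_normal(n))  # e.g. a word meaning

def bind(a, b):
    # Circular convolution, computed via FFT in O(n log n)
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

def unbind(trace, cue):
    # The involution of the cue turns convolution into correlation,
    # giving an approximate inverse for retrieval
    cue_inv = np.concatenate(([cue[0]], cue[:0:-1]))
    return bind(trace, cue_inv)

trace = bind(role, filler)                  # compressed role-filler pair
recovered = normalize(unbind(trace, role))  # noisy copy of the filler

# Retrieval is approximate: similarity is well above chance but below 1,
# which is why HRR systems pair unbinding with a "cleanup" memory of
# known vectors.
similarity = float(np.dot(recovered, filler))
```

The key property shown here is that the bound trace has the same dimensionality as its parts, so role-filler structures can be fed to a fixed-width network such as an RNN while remaining decodable.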