LDR | | 00000nam u2200205 4500 |
001 | | 000000432460 |
005 | | 20200224130944 |
008 | | 200131s2019 ||||||||||||||||| ||eng d |
020 | |
▼a 9781088301753 |
035 | |
▼a (MiAaPQ)AAI13895340 |
040 | |
▼a MiAaPQ
▼c MiAaPQ
▼d 247004 |
082 | 0 |
▼a 400 |
100 | 1 |
▼a Foster, James Michael. |
245 | 10 |
▼a Integrating Analogical Similarity and Error-driven Learning for Relational Concept Acquisition. |
260 | |
▼a [S.l.]:
▼b University of Colorado at Boulder,
▼c 2019. |
260 | 1 |
▼a Ann Arbor:
▼b ProQuest Dissertations & Theses,
▼c 2019. |
300 | |
▼a 140 p. |
500 | |
▼a Source: Dissertations Abstracts International, Volume: 81-04, Section: A. |
500 | |
▼a Advisor: Jones, Matt. |
502 | 1 |
▼a Thesis (Ph.D.)--University of Colorado at Boulder, 2019. |
506 | |
▼a This item must not be sold to any third party vendors. |
520 | |
▼a How can people and machines learn new relational concepts, such as the concept of 'fork' in chess, or the grammar of sentences? The approach taken in this work is to develop a theoretical and computational framework for how such concepts are learned and applied. The framework integrates established principles of cognition (analogy and error-driven learning) and explores their computational power and empirical validity. The first chapter of this thesis presents computational models and simulation results in the domain of two-player adversarial games. These models demonstrate how a synthesis of analogy and reinforcement learning (RL) provides a framework for constructing abstract relational concepts and evaluating their usefulness. The second chapter describes experiments with human participants that qualitatively test the model predictions. These experiments demonstrate how reward feedback and frequency affect the reinforcement of relational concepts. The third chapter explores how the framework can be extended to model grammar learning in language processing. This model extension demonstrates the viability of tensor-based representations (holographic reduced representations, HRRs) as the interface between an analogical meaning system and a recurrent neural network (RNN) sequencing system in the domain of sentence production. Together, these models offer solutions to the three-decade-old challenge of unifying the flexibility and fluidity of deep learning with the expressive power of compositional representations. |
590 | |
▼a School code: 0051. |
650 | 4 |
▼a Artificial intelligence. |
650 | 4 |
▼a Cognitive psychology. |
650 | 4 |
▼a Language. |
690 | |
▼a 0800 |
690 | |
▼a 0633 |
690 | |
▼a 0679 |
710 | 20 |
▼a University of Colorado at Boulder.
▼b Psychology. |
773 | 0 |
▼t Dissertations Abstracts International
▼g 81-04A. |
773 | |
▼t Dissertations Abstracts International |
790 | |
▼a 0051 |
791 | |
▼a Ph.D. |
792 | |
▼a 2019 |
793 | |
▼a English |
856 | 40 |
▼u http://www.riss.kr/pdu/ddodLink.do?id=T15491594
▼n KERIS
▼z The full text of this item is provided by KERIS (Korea Education and Research Information Service). |
980 | |
▼a 202002
▼f 2020 |
990 | |
▼a ***1008102 |
991 | |
▼a E-BOOK |