In D. Reitter & F. E. Ritter (Eds.), Proceedings of the 14th International Conference on Cognitive Modeling (ICCM 2016). University Park, PA: Penn State.
Toward Integrating Cognitive Linguistics and Cognitive Language Processing

Peter Lindes ([email protected])
University of Michigan, 2260 Hayward Street, Ann Arbor, MI 48109-2121 USA

John E. Laird ([email protected])
University of Michigan, 2260 Hayward Street, Ann Arbor, MI 48109-2121 USA
Abstract

We present a system that comprehends natural language by combining cognitive linguistics with known properties of human language processing. It is built on Embodied Construction Grammar (ECG) and the Soar cognitive architecture. Its core is a novel grounded semantic parser. Experiments show the system produces actionable meanings and fulfills ten cognitive criteria we set out.

Keywords: language comprehension; construction grammar; Soar; grounded semantics; language in robots; cognitive linguistics; cognitive architecture.

Introduction

This work attempts to combine two separate threads of research. One is cognitive linguistics, where formalisms have been developed for syntactic and semantic knowledge, such as Embodied Construction Grammar (ECG; Bergen & Chang, 2013). The second is research on the cognitive modeling of language processing, where the emphasis is on modeling how humans process language, independent of specific linguistic formalisms for representing syntactic and semantic knowledge.

In this paper we develop LUCIA, a system that ties these two threads together: a novel comprehension system whose knowledge of language is specified in the ECG formalism (Bryant, 2008) and then translated into production rules. Those rules are used in a language comprehension process designed to fit many of the characteristics of human language processing.

Cognitive Linguistics

Cognitive linguistics is based on the idea that language is an integral part of cognition. Language is closely related to perception (Miller & Johnson-Laird, 1976) and action (Coello & Bartolo, 2013). To explain language we must study categories (Lakoff, 1987), image schemas (Johnson, 1987; Mandler & Pagán Cánovas, 2014), and metaphor (Lakoff & Johnson, 1980). Meaning is seen as being represented by frames (Fillmore, 1976, 2013; Fillmore & Baker, 2009) or scripts (Schank, 1972). Psychological theories attempt to explain comprehension at the discourse (Kintsch, 1998) and sentence (Ferstl, 1994) levels. Looking at language usage leads to theories of construction grammar (Goldberg, 1995, 2006; Hoffmann & Trousdale, 2013) that integrate semantics and syntax.
Construction grammars provide a theory for representing syntax and semantics (Goldberg, 2013). ECG (Dodge, 2010; Feldman, 2006) is a specific formalism in this field based on much of the cognitive linguistic research mentioned above. Such a representation is necessary for language understanding, independent of how the processing is done, in order to ensure that the language understanding system is capable of addressing the scope of human language. Parsers have been built for ECG (Bryant, 2008), as well as for a related formalism called Fluid Construction Grammar (FCG), which has been used for communication with robots (Steels & Hild, 2012; Steels, 2013). Lindes (2014) used ideas from ECG for information extraction. However, none of these approaches attempts to model the characteristics of human sentence processing.

Consider the ECG example in Figure 1. On the left we see a syntactic construction for a TransitiveCommand, and on the right we see a meaning schema called ActOnIt, along with its generalization Action.

[Figure 1: ECG example]

This example shows several characteristics of ECG. A composite construction lists its constituents, in this case named verb and object. Each constituent slot is labeled with the type of construction that can fill that slot. A construction can specify the name of a meaning schema to be evoked when it is instantiated, in this case ActOnIt. Schemas have roles to be filled. Both constructions and schemas can be generalized through the subcase of clause, and schemas can inherit roles from their parents. A construction can specify constraints that supply values to these roles through unification. In the example, the constraints unify the meanings of the constituents with the roles in this construction's meaning schema.

This formalism is an abstraction that can describe many linguistic structures; however, one unanswered question is: is this type of representation sufficient for representing the knowledge needed for modeling human sentence processing?
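Since Figure 1 itself does not survive this rendering, the following sketch restates its content programmatically. It is a minimal Python model, not ECG's actual declarative notation; the role names action and object, the constituent types ActionVerb and RefExpr, and the unification details are illustrative assumptions rather than the grammar's published definitions.

```python
# A minimal Python rendering of the ECG example in Figure 1.
from dataclasses import dataclass, field

@dataclass
class Schema:
    name: str
    parent: str | None = None          # "subcase of" generalization
    roles: dict = field(default_factory=dict)

@dataclass
class Construction:
    name: str
    constituents: dict                  # slot name -> construction type
    evokes: str                         # meaning schema evoked on instantiation
    constraints: list                   # (constituent, role) unification pairs

action = Schema("Action", roles={"action": None})
act_on_it = Schema("ActOnIt", parent="Action",
                   # the "action" role is inherited from Action (flattened here)
                   roles={"action": None, "object": None})

transitive_command = Construction(
    name="TransitiveCommand",
    constituents={"verb": "ActionVerb", "object": "RefExpr"},  # slot types assumed
    evokes="ActOnIt",
    constraints=[("verb", "action"), ("object", "object")],
)

def instantiate(cxn, fillers, schemas):
    """Evoke the meaning schema and unify constituent meanings into its roles."""
    meaning = dict(schemas[cxn.evokes].roles)
    for constituent, role in cxn.constraints:
        meaning[role] = fillers[constituent]    # unification by direct filling
    return meaning

# instantiate(transitive_command,
#             {"verb": "pick-up1", "object": "green-sphere1"},
#             {"ActOnIt": act_on_it})
# -> {"action": "pick-up1", "object": "green-sphere1"}
```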
Cognitive Language Processing

Cognitive language processing research (Newell, 1990; Lewis, 1993; Lewis & Vasishth, 2005) looks at building computer models that comprehend language using methods that approximate properties of human language processing. We have chosen to focus on the following characteristics of human-like processing:

1. Incremental – Processing extracts as much syntactic and semantic information as it can from each word, one at a time (Lewis, 1993).
2. Integrated – Syntactic and semantic information are extracted jointly during comprehension (Lewis, 1993).
3. Eclectic – Semantic, pragmatic, and world knowledge are used to resolve ambiguities.
4. Real time – Comprehension proceeds in real time (Lewis, 1993).
5. Useful – The meanings extracted are "actionable intelligence" that the agent can use for its purposes.
6. Repair-based processing – The system greedily builds structures that may need to be repaired as more information becomes available (Lewis, 1993).
7. Context-dependent meaning – Words can have multiple meanings; the meaning in a particular sentence is selected according to the context.
8. Compositional – Elements with known meanings are combined to comprehend novel sentences.
9. Hierarchical – Both lexical items and higher-level constructions contribute elements of meaning (Goldberg, 1995, 2006).
10. Grounded – The meanings derived from a sentence are grounded in the agent's perception, action capabilities, and world knowledge.

Lewis (1993) describes a parser that is incremental (Item 1), does local repairs (Item 6), and shows correspondence to human processing in terms of its real-time performance (Item 4) and the kinds of structures that it has difficulty processing. Lewis and Vasishth (2005) extend this work to explore more detailed mechanisms of memory retrieval. That work, however, does not build full, grounded semantic structures that would be useful to an embodied agent.

Ball et al. (2010), as part of the Synthetic Teammate Project, have a model of human language processing implemented in ACT-R that attempts "adherence to well-established cognitive constraints." This model takes advantage of ACT-R's subsymbolic capabilities to resolve some kinds of ambiguities, and it does incremental, integrated, and grounded sentence understanding (Items 1, 2, and 10). However, the "Double R" theory of grammar it uses does not have the same capabilities as ECG (Feldman et al., 2009) to recognize many alternative expressions and to represent complex semantic structure.

Cantrell et al. (2010) have a system for natural language understanding for robots that is designed to build semantics
in an incremental and integrated way (Items 1 and 2), and to ground the language in the robot's perception (Item 10). This system, however, does not take advantage of cognitive linguistics or prior work on cognitive language processing.

Bringing these two research threads together has some advantages. Cognitive linguistic theory, and ECG in particular, provides a formal way of describing meaning representations that is grounded in research on human knowledge representation. The formalism also describes syntax and the relationships by which form evokes meaning. Cognitive language processing attempts to ground this theory in actual processing that reflects known characteristics of human processing, thus making a theory that can be tested in the real world. This brings us to our main research question: is it possible to implement a comprehension system that uses the ECG formalism, that is consistent with human language processing, and that produces results that are useful to an embodied autonomous agent? Here we take some initial steps to answer this question by developing a system based on ECG that has many of the characteristics of human sentence processing.

An Integrated Solution

In this paper we describe LUCIA, which works as part of an embodied Soar agent called Rosie (Mohan et al., 2013). We show that it produces useful results for directing and instructing this robot, and that the method meets the above cognitive characteristics. It does not address the immense scope of natural language, discourse-level understanding, the ability to learn new lexical, syntactic, and semantic structure, or how the brain implements comprehension. Nor have we explored the limits of understandable syntactic structures that Lewis (1993) emphasizes.

We have developed a translator that converts ECG into Soar production rules, and we have written by hand a collection of rules that provide the infrastructure for language comprehension. The ECG grammar for our experiments is adequate to comprehend a set of sentences that provide directions to a robot, and the results are evaluated against a gold standard of meaning structures known to be useful to the robot. The outputs from LUCIA produce the correct actions with the Rosie simulator.

In the rest of this paper we explain how LUCIA works, show experimental results of its performance, and discuss how it satisfies the ten properties of human language processing. Then we draw conclusions and propose future work.

Language Processing in LUCIA

Here we describe the basic principles that LUCIA is built on, show some examples, and relate these to our ten items.

Basic Operation

The LUCIA comprehension subsystem replaces the language comprehension part of Rosie and sends messages to the task performance subsystem, which acts on them, as shown in Figure 2.
The comprehension subsystem consists of rules in Soar's procedural memory, some generated from a grammar and some hand-coded. The hand-coded rules encode functionality that is independent of specific language structures.

[Figure 2: LUCIA in context]

Words of a sentence come into the comprehender, which processes them one at a time to create a semantic interpretation of the complete sentence. In doing this, LUCIA draws on a world model that is assembled from the agent's visual perception and an ontology that defines objects, properties, actions, etc. In Soar, the rules are held in production memory, the world model in working memory, and the ontology in semantic memory.

When a complete interpretation of a sentence has been built, a message is passed to the task performance subsystem, labeled "Rosie Operations" in Figure 2, which performs the indicated action. This may involve moving the robot, manipulating physical objects, or providing natural language responses to the human user. As the robot acts, it updates its world model, which is always available to the language comprehender.

Linguistic Knowledge

As shown in Figure 1, an ECG grammar consists of "schemas" defining semantic structures and "constructions" that relate an input form to a meaning expressed in those schemas. Our translator is built on Bryant's (2008) formal definition of the ECG language. Each construction or schema produces one or more Soar rules. In order to have a system that could later be extended to learn more grammar incrementally, each construction or schema is translated independently, without using global knowledge of the grammar or interaction with other items.

The linguistic knowledge that the comprehender depends on is represented in Soar production rules: those generated by the ECG translator, as well as a smaller set of hand-coded rules that provide functions that are common over the whole grammar. These functions include retrieving properties or actions from semantic memory and resolving referential expressions to references to particular objects in the model of the perceived world in working memory. Still others handle bookkeeping tasks.
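The generated rules themselves are not shown in the paper. As a rough illustration of the translation step, the sketch below (reusing the toy Construction class above) emits one hypothetical match-construction production per composite construction. The rule shape, rule name, and attribute names are assumptions about plausible Soar code, not LUCIA's actual output.

```python
# Illustrative sketch: translating one ECG construction into a Soar production.
def construction_to_soar(cxn):
    """Emit a (hypothetical) rule that composes constituents into a composite."""
    conditions = "\n   ".join(
        f"(<s> ^construction <{slot}>)\n   "
        f"(<{slot}> ^type {ctype} ^meaning <{slot}-m>)"
        for slot, ctype in cxn.constituents.items()
    )
    # fill the evoked schema's roles from the constituent meanings
    role_fills = " ".join(f"^{role} <{slot}-m>" for slot, role in cxn.constraints)
    return (
        f"sp {{match-construction*{cxn.name}\n"
        f"   (state <s> ^operator.name match-construction)\n"
        f"   {conditions}\n"
        f"-->\n"
        f"   (<s> ^construction <c>)\n"
        f"   (<c> ^type {cxn.name} ^meaning <m>)\n"
        f"   (<m> ^schema {cxn.evokes} {role_fills})}}"
    )

# print(construction_to_soar(transitive_command))
```

The point of translating each item independently, as the text notes, is that a learning system could later add one construction at a time without re-deriving the rest of the rule base.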
Dynamic Processing

The core of the system is the comprehend-word operator, which is applied once for each input word to implement incremental processing (Item 1). As part of comprehend-word, a lexical-access operator is selected for each word, and rules generated from ECG apply to create a lexical construction along with any evoked semantic structures. A match-construction operator is selected each time one or more constituents can be composed into a larger construction. These operators are applied by other ECG-generated rules, which fire, sometimes several in parallel, to evoke, build, and populate semantic schemas. Together, all these rules implement integrated syntactic and semantic comprehension (Item 2). Both lexical and composite constructions contribute meaning (Item 9). At appropriate points, various hand-coded operators are selected to ground referring expressions in the current perceived world model and the ontology in semantic memory (Item 10). Finally, results for this word are returned to the higher-level state. (A schematic sketch of this per-word cycle follows Example 1 below.)

Once a complete sentence has been comprehended, infrastructure rules interpret it to form a message for the task performance subsystem. These results are compared to the gold standard developed for the robot, so we can verify that they are correct and useful (Item 5).

These operators and rules do not fire in a fixed sequence, but in a dynamic order determined by the word being comprehended, the syntactic and semantic context, and the knowledge contained in the world model and ontology. These dynamics arise from the principle of doing as much analysis as possible while processing each word in order, without any look-ahead to future words (Item 1). This approach can produce good performance, but it often makes mistakes. These are corrected by a local repair mechanism (Item 6) modeled after the one Lewis (1993) used to simulate human sentence processing with Soar. We call the complete process Informed Dynamic Analysis (IDA), since the syntactic and semantic analyses evolve dynamically together by applying whatever linguistic and world knowledge is relevant at each moment (Item 3).

Examples

Below are examples that illustrate this dynamic process.

Example 1: A Simple Sentence

A simple example is Pick up the green sphere. Figure 3 shows the results of the analysis, part of which constitutes an instantiation of the ECG items in Figure 1. The figure summarizes the operation of the many operators needed to comprehend this sentence. Numbers indicate when structures were built by the corresponding application of comprehend-word. Constructions are shown as blue rectangles, their meaning schemas as green ovals, the identifiers of structures in semantic memory in red, and structures in the world model in orange. The identifiers in green and orange are used to make associations between the comprehension process and items in the shared memories.
[Figure 3: Comprehension of a simple sentence. The diagram shows the lexical constructions PICK, UP, THE, GREEN, and SPHERE composing into PickUp, RefExpr, and TransitiveCommand across stages 1-5, with meaning schemas (Action Descriptor, ActOnIt, Reference Descriptor, Property Descriptor, Entity), semantic-memory identifiers @A1001 and @P1004, and the world-model object large-green-sphere1.]
The semantic parse shown here is built up incrementally as each word is processed in stages 1 to 5 (Items 1 and 2). Each word leads to the retrieval of a lexical construction, those with names in capitals. Larger constructions are composed whenever possible (Items 8 and 9). As soon as the verb is identified in stage 1, its grounded meaning with id @A1001 is retrieved from semantic memory (Item 10). The PickUp construction in stage 2 attaches to the meaning already built for its constituent PICK. (The green arrow from PICK to its meaning has been omitted to avoid clutter.) In stage 4, a lookup to semantic memory (Item 10) finds the id @P1004 to ground the property green. When the referential expression is complete in stage 5, it is resolved to an object in the world model (Item 10). In stage 5, the complete TransitiveCommand construction, a composite of the structures for Pick up and the green sphere, is also built as soon as its constituents are present. Note that several levels of processing are done for one word (Item 1). No repairs are needed in this example.
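To make the control structure of this walkthrough concrete, here is a runnable toy of the comprehend-word cycle traced in Figure 3: lexical access followed by greedy composition after each word, with no look-ahead (Items 1, 8, and 9). The table-driven grammar and the omission of meaning, grounding, and repair are drastic simplifications of what the Soar rules actually do; only the incremental control flow is the point.

```python
# A runnable toy of the per-word comprehension cycle (control flow only).
LEXICON = {"pick": "PICK", "up": "UP", "the": "THE",
           "green": "GREEN", "sphere": "SPHERE"}

# composite constructions: sequence of constituent types -> composite type
GRAMMAR = {("PICK", "UP"): "PickUp",
           ("THE", "GREEN", "SPHERE"): "RefExpr",
           ("PickUp", "RefExpr"): "TransitiveCommand"}

def comprehend_word(word, stack):
    """One comprehend-word application: lexical access, then greedy composition."""
    stack.append(LEXICON[word])                 # lexical-access
    composed = True
    while composed:                             # match-construction, repeatedly
        composed = False
        for pattern, composite in GRAMMAR.items():
            n = len(pattern)
            if tuple(stack[-n:]) == pattern:
                del stack[-n:]                  # constituents are absorbed
                stack.append(composite)         # into the larger construction
                composed = True

stack = []
for w in "pick up the green sphere".split():
    comprehend_word(w, stack)
    print(w, "->", stack)
# final stack: ['TransitiveCommand'], mirroring stage 5 of Figure 3
```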
Example 2: Phrase Attachment and Repair

Figure 4 shows the abbreviated results for Pick up the green sphere on the stove. This example illustrates the integration of Lewis's repair mechanisms with the semantics available from ECG.
[Figure 4: Phrase attachment and repair. The diagram shows the stage-5 TransitiveCommand for Pick up the green sphere being snipped and rebuilt at stage 8, where on the stove forms a PrepPhrase that attaches to a RefExprPrepPhrase modifying the green sphere, whose Reference Descriptor is then re-resolved to large-green-sphere1.]
In this case, the words up through sphere form a valid sentence, so the first 5 stages run exactly as before. But the end has not yet been reached, as on the stove remains to be processed. Stages 6 and 7 are very simple, but a lot happens at stage 8. First the process recognizes and resolves the stove. Next, on is added to form a prepositional phrase. Now there is the classic problem of prepositional phrase attachment: should the phrase be attached to the command that is the current upper-most construction, or should it modify the green sphere? The simplest way to attach this phrase would be as a target location for the command, and that is what would happen if the sentence were Put the green sphere on the stove. But the system can use semantic knowledge to know that put needs a target location and pick up does not (Item 3). A "repair" is done by "snipping" (Lewis, 1993) the items shown with dotted lines and attaching on the stove to the green sphere (Item 6). Now the reference for the green sphere must be resolved again with the new information, but in this case the same answer results, because in the current perceptual model this sphere is in fact on the stove. Finally, the semantic structure for the command is rebuilt with the revised referential expression.

Attaching a relative clause, as in Pick up the green block that is on the stove, works in a very similar way, except that the word that is lexically ambiguous. In this sentence, it is a relative pronoun introducing the relative clause. In Put that in the pantry it is a deictic pronoun referring to something salient in the context. The grammar has both meanings, and both are created during lexical-access. Later, infrastructure rules select which one to use, and the other is discarded. This illustrates Item 7.
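A schematic of this snip-and-reattach repair is sketched below. The attribute names and the needs_target_location feature are assumptions standing in for the verb semantics retrieved from semantic memory, not LUCIA's actual working-memory layout.

```python
# Schematic of the "snip" repair (after Lewis, 1993); structure is illustrative.
def attach_prep_phrase(command, pp, semantic_memory):
    """Attach a trailing PP, repairing the greedy parse if semantics demand it."""
    verb = semantic_memory[command["action"]]
    if verb.get("needs_target_location"):       # e.g. "put": simple attachment
        command["target"] = pp
    else:                                       # e.g. "pick up": snip and re-attach
        obj = command["object"]
        command.pop("target", None)             # snip the bad attachment, if any
        obj["modifier"] = pp                    # the PP now modifies the object NP
        obj["referent"] = None                  # force re-resolution with new info
    return command

cmd = {"action": "pick-up1",
       "object": {"head": "sphere", "color": "green", "referent": "sphere1"}}
sem = {"pick-up1": {"needs_target_location": False}}
attach_prep_phrase(cmd, {"relation": "on", "ground": "stove"}, sem)
# -> on the stove modifies the green sphere, whose referent is re-resolved;
#    here the same object results, since the sphere is in fact on the stove
```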
Informed Dynamic Analysis

The whole process just described is similar to the analysis in any semantic parsing system in that it takes a sentence of text and produces a semantic representation. However, it uses a dynamic process in which semantic and world knowledge can be applied at every step. Thus, instead of generating many parses and ranking their likelihood, it uses non-syntactic knowledge to resolve ambiguities and repair mistakes dynamically as the analysis proceeds. This approach implements Items 1, 2, 3, and 10.

Experiments

The Rosie team has built up a corpus of several hundred sentences used to instruct the Rosie agent in various tasks. A custom-built parser allows the agent to understand this corpus. The LUCIA system attempts to duplicate the processing of that parser while being more general and scalable to a wider variety of linguistic forms and problem domains. To evaluate the capability, generality, and scalability of LUCIA, we have devised the following experiments.
Experiment 1

First, we took the entire Rosie sentence corpus and reduced it by removing sentences for its game-playing domain, which is beyond the scope of this project, and by eliminating duplicate sentences. Then we selected 50 of the remaining 209 sentences. Each of the 50 shows a slightly different linguistic pattern, and they collectively cover much of the linguistic space of all 209 sentences.
These 50 sentences fall into several categories, listed below with some of the language forms covered and an example sentence or two for each category:

Declarative statements (8): noun phrases, adjectives, properties, states, prepositional phrases. The red triangle is on the stove.

Manipulation commands (19): manipulation verbs, transitive commands, commands with a location target, prepositional phrase attachment issues, multi-word prepositions. Put the green sphere in front of the pantry. Store the large green sphere on the red triangle.

Relative clauses, etc. (5): relative clauses with properties, relative clauses with prepositional phrases, multiple prepositional phrases. Pick [up] a green block that is larger than the green box. Move the green rectangle to the left of the large green rectangle to the pantry. These two examples show a relative clause, a larger than relation that is computed during resolution, a to the left of relation that is found stored in the world model and picks out the correct green rectangle, and the proper attachment of the two prepositional phrases with to.

Navigation commands (10): navigation verbs, spatial references, absolute and relative directions, abbreviated commands, goal phrases. Follow the right wall. Go until there is a doorway.

Yes/no answers (1): Yes.

Definitions of words (2): Octagon is a shape.

Conditional commands (1): If the green box is large then go forward.

Questions (4): What is inside the pantry? Is the small orange triangle behind the green sphere?

Together, this set of 50 sentences partially addresses the ten distinguishing properties of human sentence processing listed earlier. To cover this set, it was necessary to build the ECG constructions and schemas they use, both for the lexical items and the composite constructions. Then these sentences served as the test suite to fully develop the infrastructure rules that complete the LUCIA comprehender.

We also built an evaluator that takes the output of LUCIA for each sentence and compares it with the gold-standard semantics provided by the Rosie team. When differences were found, the grammar and hand-coded rules were corrected as needed to get the desired result. Finally, all 50 sentences were comprehended correctly.
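The evaluator's comparison might look like the following minimal sketch; the message format and the matching criterion are assumptions, since the paper does not specify them.

```python
# A minimal sketch of gold-standard evaluation: recursive structural match of
# LUCIA's output message against the expected semantics (format assumed).
def matches_gold(output, gold):
    """True if every role in the gold structure is matched in the output."""
    if isinstance(gold, dict):
        return (isinstance(output, dict) and
                all(k in output and matches_gold(output[k], v)
                    for k, v in gold.items()))
    return output == gold

# e.g. all(matches_gold(lucia_message(s), gold_standard[s]) for s in sentences)
```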
Table 1 shows the number of Soar rules that were generated automatically and by hand. The ECG row counts constructions and schemas, and the Rules row counts Soar production rules. Over 60% of the code was generated automatically from the grammar, showing that the ECG representation is capable of representing the majority of the knowledge that is needed.
Table 1: Experiment 1 statistics

Category     Grammar   Hand-coded   Total
ECG          226       0            226
Rules        487       292          779
Proportion   62.5%     37.5%
Another key measure of performance relates to real time, our Item 4. The Soar theory (Newell, 1990) maps execution time to real time by assuming each decision cycle takes 50 msec. Lewis (1993, p. 13) points out that humans comprehend speech "as quickly as we hear it" and read even faster, at "~240 words per minute." Thus an incremental comprehender has about 4 to 5 decision cycles, on average, to comprehend each word. Our run of all 50 sentences processed 284 words in 2,582 decision cycles, or 9.09 cycles/word and 132 words/minute. This is too slow by about a factor of two. However, an analysis shows that within a sentence there are 4 decision cycles of overhead within each comprehend-word cycle, and this overhead could be reduced considerably.

As we developed the system to comprehend more and more of the 50 sentences, new declarative knowledge in the form of ECG items and new procedural knowledge in the form of hand-coded rules were added to the system in many small steps. Although LUCIA has no built-in learning mechanism, this increase of knowledge can be thought of as a model of what a true learning system would have to learn. Figure 5 shows how this knowledge grows with the number of sentences comprehended.
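The timing arithmetic above can be checked directly from the reported numbers:

```python
# Checking the real-time arithmetic under Soar's 50 msec/decision-cycle assumption.
cycles, words = 2582, 284
ms_per_cycle = 50
cycles_per_word = cycles / words                              # ~9.09
words_per_minute = 60_000 / (cycles_per_word * ms_per_cycle)  # ~132
budget = 60_000 / (240 * ms_per_cycle)   # = 5 cycles/word at reading speed
print(round(cycles_per_word, 2), round(words_per_minute), budget)
# -> 9.09 132 5.0, i.e. roughly a factor of two slower than the 4-5 cycle budget
```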
[Figure 5: Code growth with knowledge]

The number of rules generated from the grammar is much larger than the number of hand-coded ones, and this proportion grows as the grammar grows. However, on the last step, where four questions were added, only 13 ECG items and 29 rules were added to the grammar, while 58 new hand-coded rules were needed. The grammar changes were simple additions, but new ways of attachment, grounding, and formatting were also required. An important issue is whether the number of hand-coded rules plateaus as we extend LUCIA to new constructions.
Experiment 2

To test the generality of the system, we applied LUCIA to a Spanish translation of the sentences used for Experiment 1, comparing the results to the same gold-standard semantic structures used for Experiment 1. The translation was done by the first author, a fluent Spanish speaker, in consultation with a native Spanish speaker. Both have extensive English-Spanish translation experience.

Several linguistic differences needed to be dealt with, in addition to the obvious difference in vocabulary: adjectives can come either before or after a noun, as in la esfera verde (the green sphere); pronouns attach morphologically to the end of verbs, as in Levántalo (Pick it up) and Oriéntate (Orient [yourself]); there is no equivalent of then in If ... then ..., although entonces could be used with some loss of fluency; word order may differ, as in all the example questions; and the meanings of many words, especially prepositions, do not correspond across languages. For example, on may be translated as either en or sobre, and to be can correspond to either ser or estar. Spanish also has morphological variation in verb conjugations that English does not have, but that does not affect this corpus, since everything is in the present tense, all command verbs are in the second person familiar imperative form, and all to be verbs are in the third person.

Some new constructions had to be added to handle some of the differences from English. Following these extensions, all 50 sentences were processed correctly. Table 2 shows the relevant code statistics.
Table 2: Experiment 2 statistics

Category     Common   Spanish-specific   Hand-coded   Total
ECG          140      114                0            254
Rules        319      263                296          878
Proportion   36.3%    30.0%              33.7%
Experiment 3

To evaluate the scalability of the system, we took the exact code used for Experiment 1 and ran it on the full original list of 209 sentences. With no additional vocabulary, 110 sentences could not be understood due to 88 unknown words. Of the remaining 99 sentences, the system understood 82. This shows that the system can often process novel sentences that use known words (Item 8).
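Restating those counts (the percentage is ours, computed from the reported numbers):

```python
# Experiment 3 coverage, restated from the figures above.
total = 209
blocked_by_vocab = 110                  # failed due to 88 unknown words
in_vocab = total - blocked_by_vocab     # 99 sentences within vocabulary
understood = 82
print(f"{understood}/{in_vocab} = {understood/in_vocab:.0%} of in-vocabulary sentences")
# -> 82/99 = 83% of in-vocabulary sentences
```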
We then added lexical items for those 88 words, which required adding 113 ECG items that generate 178 Soar rules. With these additions, 92 sentences were understood. This shows that the system can process even more sentences, but also that new constructions must be added to understand many new sentences. It does not understand more of them because the original 209 sentences were chosen to demonstrate a variety of syntactic constructions, which require additional grammatical and semantic knowledge.

Conclusions and Future Work

We set out to evaluate whether LUCIA could provide language comprehension to Rosie in a way that is both useful and cognitively plausible. The above experiments show that it is useful, and that it satisfies, at least partially, the ten cognitive criteria. It does incremental processing that integrates syntax, semantics, and grounding in the perceived world. Its grammar is both hierarchical and compositional. It can eclectically apply all available knowledge at any stage of processing. It has a working repair mechanism and a method for handling lexical ambiguity, although so far these cover only a limited number of cases. Based on Soar assumptions, it comes within a factor of two of real-time processing, and it seems clear how to improve that.

Future work could begin with improving the real-time course of comprehension, adding more robust mechanisms for repair and handling lexical ambiguity, and exploring the correspondence to the human limitations that Lewis's (1993) system demonstrates. We can then continue on to the much larger challenges of learning grammar and concepts, and using that learning to expand the scope of understandable domains.

Acknowledgments

The work described here was supported by the National Science Foundation under Grant Number 1419590. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the NSF or the U.S. Government.

References

Ball, Jerry, Mary Freiman, Stuart Rodgers, and Christopher Myers (2010). Toward a Functional Model of Human Language Processing. Poster presented at the 32nd Annual Conference of the Cognitive Science Society. Portland, OR.

Bergen, Benjamin and Nancy Chang (2013). Embodied Construction Grammar. In Thomas Hoffmann and Graeme Trousdale, eds., The Oxford Handbook of Construction Grammar. Oxford University Press, New York, pp. 168-190.

Bryant, John Edward (2008). Best-Fit Constructional Analysis. PhD dissertation in Computer Science, University of California at Berkeley.
Cantrell, Rehj, Matthias Scheutz, Paul Schermerhorn, and Xuan Wu (2010). Robust spoken instruction understanding for HRI. In 2010 5th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 275-282. IEEE.

Coello, Yann and Angela Bartolo, eds. (2013). Language and Action in Cognitive Neuroscience. Psychology Press, New York.

Dodge, Ellen Kirsten (2010). Constructional and Conceptual Composition. PhD dissertation in Linguistics, University of California at Berkeley.

Feldman, Jerome A. (2006). From Molecule to Metaphor: A Neural Theory of Language. MIT Press, Cambridge, MA.

Feldman, Jerome, Ellen Dodge, and John Bryant (2009). Embodied Construction Grammar. In Bernd Heine and Heiko Narrog, eds., The Oxford Handbook of Linguistic Analysis. Oxford University Press, New York.

Ferstl, Evelyn C. (1994). The Construction-Integration Model: A Framework for Studying Context Effects in Sentence Processing. In Proceedings of the Sixteenth Annual Conference of the Cognitive Science Society, pp. 289-293.

Fillmore, Charles J. (1976). Frame Semantics and the Nature of Language. In Annals of the New York Academy of Sciences, Vol. 280, Origins and Evolution of Language and Speech, pp. 20-32.

Fillmore, Charles J. (2013). Berkeley Construction Grammar. In Thomas Hoffmann and Graeme Trousdale, eds., The Oxford Handbook of Construction Grammar. Oxford University Press, New York, pp. 112-132.

Fillmore, Charles J. and Collin Baker (2009). A Frames Approach to Semantic Analysis. In Bernd Heine and Heiko Narrog, eds., The Oxford Handbook of Linguistic Analysis, pp. 313-340.

Goldberg, Adele E. (1995). Constructions: A Construction Grammar Approach to Argument Structure. The University of Chicago Press.

Goldberg, Adele E. (2006). Constructions at Work: The Nature of Generalization in Language. Oxford University Press.

Goldberg, Adele E. (2013). Constructionist Approaches. In Thomas Hoffmann and Graeme Trousdale, eds., The Oxford Handbook of Construction Grammar. Oxford University Press, New York, pp. 15-31.

Hoffmann, Thomas and Graeme Trousdale, eds. (2013). The Oxford Handbook of Construction Grammar. Oxford University Press, New York.

Johnson, Mark (1987). The Body in the Mind: The Bodily Basis of Meaning, Imagination, and Reason. The University of Chicago Press, Chicago.

Kintsch, Walter (1998). Comprehension: A Paradigm for Cognition. Cambridge University Press.

Lakoff, George (1987). Women, Fire, and Dangerous Things: What Categories Reveal About the Mind. University of Chicago Press.
Lakoff, George and Mark Johnson (1980). Metaphors We Live By. University of Chicago Press.

Lewis, Richard Lawrence (1993). An Architecturally-based Theory of Human Sentence Comprehension. PhD dissertation in Computer Science, Carnegie Mellon University.

Lewis, Richard L. and Shravan Vasishth (2005). An Activation-Based Model of Sentence Processing as Skilled Memory Retrieval. Cognitive Science 29, 375-419.

Lindes, Peter (2014). OntoSoar: Using Language to Find Genealogy Facts. Linguistics master's thesis, Brigham Young University.

Mandler, Jean M. and Cristóbal Pagán Cánovas (2014). On Defining Image Schemas. Language and Cognition 0, 1-23.

Miller, George A. and Philip N. Johnson-Laird (1976). Language and Perception. Belknap Press.

Mohan, Shiwali, Aaron H. Mininger, and John E. Laird (2013). Towards an Indexical Model of Situated Language Comprehension for Real-World Cognitive Agents. Advances in Cognitive Systems 3, 163-182.

Newell, Allen (1990). Unified Theories of Cognition. Cambridge, MA: Harvard University Press.

Schank, Roger C. (1972). Conceptual Dependency: A Theory of Natural Language Understanding. Cognitive Psychology 3, 552-631.

Steels, Luc (2013). Fluid Construction Grammar. In Thomas Hoffmann and Graeme Trousdale, eds., The Oxford Handbook of Construction Grammar. Oxford University Press, New York, pp. 153-167.

Steels, Luc and Manfred Hild, eds. (2012). Language Grounding in Robots. Springer.
