Conceptual dependency theory

Conceptual dependency theory is a model of natural language understanding used in artificial intelligence systems. Roger Schank at Stanford University introduced the model in 1969, in the early days of artificial intelligence.[1] The model was used extensively by Schank's students at Yale University, such as Robert Wilensky, Wendy Lehnert, and Janet Kolodner. Schank developed the model to represent knowledge for natural language input into computers. The model uses the following basic representational tokens:[3]

- real-world objects, each with some attributes
- real-world actions, each with attributes
- times
- locations

A set of conceptual transitions then acts on this representation; e.g., an ATRANS is used to represent a transfer such as "give" or "take", while a PTRANS is used to act on locations, such as "move" or "go". A sentence such as "John gave a book to Mary" is then represented as the action of an ATRANS on two real-world objects, John and Mary.
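To make the representation concrete, here is a minimal sketch in Python of how "John gave a book to Mary" might be encoded as an ATRANS. The class and slot names are illustrative choices for this sketch, not Schank's own notation.

```python
from dataclasses import dataclass

@dataclass
class PhysicalObject:
    """A real-world object with some attributes."""
    name: str
    attributes: dict

@dataclass
class ATrans:
    """Transfer of possession (ATRANS), e.g. "give" or "take"."""
    actor: PhysicalObject
    obj: PhysicalObject
    source: PhysicalObject
    recipient: PhysicalObject

# "John gave a book to Mary": an ATRANS moving possession of
# the book from John (actor and source) to Mary (recipient).
john = PhysicalObject("John", {"type": "human"})
mary = PhysicalObject("Mary", {"type": "human"})
book = PhysicalObject("book", {"type": "physical-object"})

event = ATrans(actor=john, obj=book, source=john, recipient=mary)
print(event)
```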
Java Applets for Neural Network and Artificial Life

Artificial Neural Networks Lab

Contents:
- Competitive Learning: Vector Quantizer (VQ), related to hard competitive learning
- Backpropagation Learning
- Neural Nets for Constraint Satisfaction and Optimization
- Other Neural Networks

Artificial Life:
- Genetic Algorithm
- Biomorph & L-system
- Life Game
- Boids: Boids1, a simulation of the flocking of animals; Boids2 (dead?)
- Other AL: AL collection

Other Related Applets: Mathtools.net. See also my link collection.

2004.7.15: Update. Akio Utsugi (home page)
Case-Based Reasoning

Case-based reasoning is one of the fastest-growing areas in the field of knowledge-based systems, and this book, authored by a leader in the field, is the first comprehensive text on the subject. Case-based reasoning systems are systems that store information about situations in their memory. As new problems arise, similar situations are searched out to help solve these problems. Problems are understood and inferences are made by finding the closest cases in memory, comparing and contrasting the problem with those cases, making inferences based on those comparisons, and asking questions when inferences cannot be made. This book presents the state of the art in case-based reasoning, and it is an excellent text for courses and tutorials on the subject.

Neural Network Demo

I first learned about neural networks sometime around 1991. Ever since, I have been intrigued and confused by them. After peripherally reading and occasionally talking with colleagues about the subject for years, I found myself no closer to understanding them than when I first learned of them. Starting earlier this week (around 10/24/2004), I finally got around to creating one for the first time. There has been a great deal of hype about the subject of neural nets (NNs). I am obviously no expert in NNs, but I was surprised to see how much I was able to do with just a little code. Let me be quick to disclaim that all of what I say here is surely subject to scrutiny.

Here is where a lot of introductions stop short. The goal of a neuron of this sort is to fire when it recognizes a known pattern of inputs; "firing" occurs when the output is above some threshold.

[Figure 3: 10 x 10 image of the letter "A" and the corresponding input values for the dendrites of our neuron.]
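A minimal sketch of such a threshold neuron in Python, assuming a 10 x 10 binary image flattened into 100 input values as in Figure 3; the weights and threshold here are illustrative, not taken from the original demo.

```python
import random

def neuron_fires(inputs, weights, threshold=0.5):
    """Compute the weighted sum of the inputs; the neuron "fires"
    when the activation exceeds the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return activation > threshold

# A 10 x 10 binary image flattened into 100 inputs, one per "dendrite".
image = [random.choice([0, 1]) for _ in range(100)]

# Illustrative weights: positive where the stored pattern has ink,
# zero contribution elsewhere, so the stored pattern itself scores high.
weights = [0.02 if pixel else -0.02 for pixel in image]

print(neuron_fires(image, weights))  # fires, since the input matches the stored pattern
```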
Direct Memory Access Parsing (DMAP)

A Direct Memory Access Parser reads text and identifies the concepts in memory that the text refers to. It does this by matching phrasal patterns attached to those concepts (MOPs).

Attaching Phrases to Concepts

For example, suppose we wanted to read texts about economic arguments, as given by people such as Milton Friedman and Lester Thurow. The first thing we have to do is define concepts for those arguments, those economists, and for the event of economists presenting arguments. Next, we have to attach to these concepts the phrases that are used to refer to them. More complex concepts, such as a change in an economic variable or a communication about an event, require phrasal patterns. For example, the concept m-change-event has the role :variable, which can be filled by any m-variable, such as m-interest-rates.

The Concept Recognition Algorithm

From the Friedman example, we can see the kinds of recognition events we want to occur as the parser reads text; a simplified sketch of such a recognizer appears after this section.

Getting Output from DMAP

with-monitors is a macro that attaches monitor functions to concepts, so that code can be run whenever one of those concepts is recognized.
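As a rough illustration of the recognition step described above, here is a minimal sketch in Python. It matches phrasal patterns against a token stream by plain subsequence search; the real DMAP indexes patterns in memory and advances predictions token by token, handles role fillers such as :variable, and the concept names below (m-friedman, m-interest-rates, m-increase) are illustrative.

```python
# Phrasal patterns attached to concepts, as in the text; the
# matching strategy below is a simplification of DMAP's
# prediction-based recognizer.
PHRASES = {
    "m-friedman": ["milton", "friedman"],
    "m-interest-rates": ["interest", "rates"],
    "m-increase": ["rise"],
}

def recognize(tokens):
    """Return every concept whose phrasal pattern occurs as a
    contiguous subsequence of the token stream."""
    tokens = [t.lower() for t in tokens]
    found = []
    for concept, pattern in PHRASES.items():
        n = len(pattern)
        for i in range(len(tokens) - n + 1):
            if tokens[i:i + n] == pattern:
                found.append(concept)
                break
    return found

print(recognize("Milton Friedman says interest rates will rise".split()))
# ['m-friedman', 'm-interest-rates', 'm-increase']
```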
Meet NELL. See NELL Run, Teach NELL How To Run (Demo, TCTV)

A cluster of computers on Carnegie Mellon's campus named NELL, formally known as the Never-Ending Language Learning system, has attracted significant attention this week thanks to a NY Times article, "Aiming To Learn As We Do, A Machine Teaches Itself." Indeed, the eight-month-old computer system attempts to "teach" itself by perpetually scanning slices of the web, looking at thousands of sites simultaneously to find facts that fit into semantic buckets (like athletes, academic fields, emotions, companies) and finding details related to these nouns. The project, supported by federal grants, a $1 million check from Google, and an M45 supercomputer cluster donated by Yahoo, is trying to break down the longstanding barrier between computers and semantics. And yet, despite all of NELL's initiative and innovation, she needs help. She is accurate 80-90% of the time, according to Professor Tom Mitchell, the head of the research team (see our demo with Mitchell above).
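NELL's actual architecture couples many learners and constraint types; the following toy sketch in Python, with made-up seed facts and sentences, only illustrates the general idea of pattern-based bootstrapping behind such systems: known members of a semantic bucket suggest textual patterns, and those patterns suggest new candidate members.

```python
# Made-up seed instances for one semantic bucket and a tiny "corpus".
seeds = {"athlete": {"Serena Williams", "Roger Federer"}}
corpus = [
    "Serena Williams plays for the national team",
    "Roger Federer plays for the national team",
    "Lionel Messi plays for the national team",
]

def bootstrap(instances, sentences):
    """One iteration: learn suffix patterns from known instances,
    then harvest new instances that share those patterns."""
    patterns = set()
    for s in sentences:
        for inst in instances:
            if s.startswith(inst):
                patterns.add(s[len(inst):])
    new = set()
    for s in sentences:
        for p in patterns:
            if s.endswith(p):
                candidate = s[: len(s) - len(p)].strip()
                if candidate and candidate not in instances:
                    new.add(candidate)
    return new

print(bootstrap(seeds["athlete"], corpus))  # {'Lionel Messi'}
```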
Universal Networking Language

Universal Networking Language (UNL) is a declarative formal language specifically designed to represent semantic data extracted from natural language texts. It can be used as a pivot language in interlingual machine translation systems or as a knowledge-representation language in information retrieval applications. In UNL, the information conveyed by natural language is represented sentence by sentence as a hypergraph composed of a set of directed binary labeled links between nodes or hypernodes. As an example, the English sentence "The sky was blue?!" can be represented in UNL as follows:

aoj(blue(icl>color).@entry.@past.@exclamation.@interrogative, sky(icl>natural world).@def)

In the example above, sky(icl>natural world) and blue(icl>color), which represent individual concepts, are UWs ("Universal Words"); aoj (= "attribute of an object") is a directed binary semantic relation linking the two UWs; and "@def", "@interrogative", "@past", "@exclamation" and "@entry" are attributes modifying UWs. UWs are expressed in natural language so as to be humanly readable.
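A minimal sketch of how this one-relation graph might be held as a data structure in Python; the dictionary layout is an illustrative simplification of UNL's hypergraph (a single labeled edge, no hypernodes).

```python
# The sentence's single aoj relation, encoded as a labeled edge
# between attribute-bearing nodes.
graph = {
    "nodes": {
        "sky(icl>natural world)": {"attributes": ["@def"]},
        "blue(icl>color)": {"attributes": ["@entry", "@past",
                                           "@exclamation", "@interrogative"]},
    },
    "edges": [
        ("aoj", "blue(icl>color)", "sky(icl>natural world)"),
    ],
}

for relation, source, target in graph["edges"]:
    print(f"{relation}({source}, {target})")
```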
The Process of Question Answering
Universal Networking Language (UNL)

Universal Networking Language (UNL) is an interlingua developed by the UNDL Foundation. UNL takes the form of a semantic network used to represent and exchange information; concepts and relations encapsulate the meaning of sentences. UNL consists of Universal Words (UWs), relations, attributes, and a knowledge base.

Universal Words (UWs)

Universal Words are UNL words that carry knowledge or concepts. Examples: bucket(icl>container), water(icl>liquid).

Relations

Relations are labelled arcs that connect nodes (UWs) in the UNL graph. Example: agt(break(agt>thing,obj>thing), John(iof>person)).

Attributes

Attributes are annotations used to represent grammatical categories, mood, aspect, etc. Example: work(agt>human).

Knowledge Base

The UNL Knowledge Base contains entries that define the possible binary relations between UWs.
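As an illustration of the notation, here is a small Python sketch that splits a binary relation expression, such as the agt example above, into its relation label and two UW arguments. It assumes the simple rel(uw1, uw2) form and is not a full UNL parser.

```python
import re

def parse_relation(text):
    """Parse rel(uw1, uw2), splitting the arguments at the top-level
    comma; commas inside restrictions such as (agt>thing,obj>thing)
    must not split the expression."""
    m = re.match(r"\s*(\w+)\s*\((.*)\)\s*$", text)
    relation, body = m.group(1), m.group(2)
    depth, split_at = 0, None
    for i, ch in enumerate(body):
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        elif ch == "," and depth == 0:
            split_at = i
            break
    return relation, body[:split_at].strip(), body[split_at + 1:].strip()

print(parse_relation("agt(break(agt>thing,obj>thing), John(iof>person))"))
# ('agt', 'break(agt>thing,obj>thing)', 'John(iof>person)')
```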
Great Books of the Western World

Great Books of the Western World is a series of books originally published in the United States in 1952, by Encyclopædia Britannica, Inc., to present the Great Books in a 54-volume set. The original editors had three criteria for including a book in the series: the book must be relevant to contemporary matters, and not only important in its historical context; it must be rewarding to re-read; and it must be a part of "the great conversation about the great ideas", relevant to at least 25 of the 102 great ideas identified by the editors. The books were not chosen on the basis of ethnic and cultural inclusiveness, historical influence, or the editors' agreement with the views expressed by the authors.[1] A second edition was published in 1990 in 60 volumes.

History

After deciding what subjects and authors to include, and how to present the materials, the project was begun with a budget of $2,000,000.

Volumes

Volume 1: The Great Conversation
In-Depth Understanding

This book describes a theory of memory representation, organization, and processing for understanding complex narrative texts. The theory is implemented as a computer program called BORIS, which reads and answers questions about divorce, legal disputes, personal favors, and the like. The system is unique in attempting to understand stories involving emotions and in being able to deduce adages and morals, in addition to answering fact- and event-based questions about the narratives it has read. BORIS also manages the interaction of many different knowledge sources, such as goals, plans, scripts, physical objects, settings, interpersonal relationships, social roles, emotional reactions, and empathetic responses. The book makes several original technical contributions as well. In-Depth Understanding is included in The MIT Press Artificial Intelligence Series.
A Syntopicon: An Index to The Great Ideas

A Syntopicon: An Index to The Great Ideas (1952) is a two-volume index, published as volumes 2 and 3 of Encyclopædia Britannica's collection Great Books of the Western World. Compiled by Mortimer Adler, an American philosopher, under the guidance of Robert Hutchins, president of the University of Chicago, the volumes were billed as a collection of the 102 great ideas of the Western canon. The term "syntopicon" was coined specifically for this undertaking, meaning "a collection of topics."[1] The volumes catalogued what Adler and his team deemed to be the fundamental ideas contained in the works of the Great Books of the Western World, which stretched chronologically from Homer to Freud. The Syntopicon lists, under each idea, where every occurrence of the concept can be located in the collection's famous works.

History

The Syntopicon was created to set the Great Books collection apart from previously published sets (such as the Harvard Classics).
Great Conversation

The Great Conversation is the ongoing process of writers and thinkers referencing, building on, and refining the work of their predecessors. This process is characterized by writers in the Western canon making comparisons and allusions to the works of earlier writers and thinkers. As such, it is a name used in the promotion of the Great Books of the Western World, published by Encyclopædia Britannica Inc. in 1952. According to Hutchins, "The tradition of the West is embodied in the Great Conversation that began in the dawn of history and that continues to the present day".[3] Adler said, "What binds the authors together in an intellectual community is the great conversation in which they are engaged."
The Great Conversation by Robert Hutchins