Cell assemblies
Figure 1: The activity in a cell assembly according to Hebb. Figure and legend are copied from (Hebb 1949). It is not clear from Hebb's writing whether each node is a single neuron, a group of neurons, or a small network of neurons.

The concept of the cell assembly was coined by the Canadian neuropsychologist Donald O. Hebb (Hebb 1949).

Present-day evolution of the concept

Nowadays the concept of cell assembly is used loosely to describe a group of neurons that performs a given action or represents a given percept or concept in the brain. Not every functionally defined group qualifies, however: a motoneuron pool (i.e., all neurons whose axons connect to the same muscle) shares a clear common action, yet one would hesitate to call such a pool a cell assembly. The examples above also treat excitatory and inhibitory interactions differently: one expects to see strong mutual excitatory connections among the members of a cell assembly.

References

Hebb, D.O. (1949). The Organization of Behavior. Wiley: New York.

See also

Neuron, Synfire chain
Stroop effect
Effect of psychological interference on reaction time.

In psychology, the Stroop effect is the delay in reaction time between congruent and incongruent stimuli. A basic task that demonstrates this effect occurs when there is a mismatch between the name of a color (e.g., "blue", "green", or "red") and the color it is printed in (e.g., the word "red" printed in blue ink instead of red ink). Naming the font color of a printed word is an easier and quicker task if word meaning and font color are congruent: if both words are printed in red, the average time to say "red" in response to the written word "green" is greater than the time to say "red" in response to the written word "mouse". The effect has been used to create a psychological test (the Stroop test) that is widely used in clinical practice and investigation.

Original experiment

Figure: example stimuli from the original experiment, color words such as Purple, Brown, Red, Blue, and Green printed in congruent and incongruent ink colors.

Neuroanatomy
Probability matching
Probability matching is a suboptimal decision strategy in which predictions of class membership are proportional to the class base rates. Thus, if in the training set positive examples are observed 60% of the time and negative examples 40% of the time, then an observer using a probability-matching strategy will predict (for unlabeled examples) the class label "positive" on 60% of instances and the class label "negative" on 40% of instances. The optimal Bayesian decision strategy (to maximize the number of correct predictions; see Duda, Hart & Stork 2001) in such a case is to always predict "positive" (i.e., predict the majority category in the absence of other information), which has a 60% chance of being correct, rather than matching, which has only a 52% chance: where p is the probability of a positive realization, the expected accuracy of matching is p² + (1 − p)², here 0.6² + 0.4² = 0.52.
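The gap between the two strategies can be checked with a quick Monte Carlo simulation. The following is a minimal Python sketch (the function name, trial count, and seed are illustrative, not from any source):

```python
import random

random.seed(0)  # fixed seed so the estimates are reproducible

def compare_strategies(p_positive=0.6, n_trials=100_000):
    """Estimate the accuracy of probability matching vs. always
    predicting the majority class, given base rate p_positive."""
    match_correct = 0
    maximize_correct = 0
    for _ in range(n_trials):
        label_positive = random.random() < p_positive
        # Matching: guess "positive" with probability p_positive,
        # independently of the true label.
        guess_positive = random.random() < p_positive
        match_correct += guess_positive == label_positive
        # Maximizing: always guess the majority class ("positive").
        maximize_correct += label_positive
    return match_correct / n_trials, maximize_correct / n_trials

match_acc, maximize_acc = compare_strategies()
# match_acc should be near p² + (1 − p)² = 0.52,
# maximize_acc near p = 0.60.
```

With 100,000 trials the sampling error is small enough that the simulated accuracies land within about half a percentage point of the analytic values.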
Stochastic process
Stock market fluctuations have been modeled by stochastic processes. In probability theory, a stochastic process (/stoʊˈkæstɪk/), or often random process, is a collection of random variables; this is often used to represent the evolution of some random value, or system, over time. It is the probabilistic counterpart to a deterministic process (or deterministic system). Instead of describing a process which can only evolve in one way (as in the case, for example, of solutions of an ordinary differential equation), in a stochastic or random process there is some indeterminacy: even if the initial condition (or starting point) is known, there are several (often infinitely many) directions in which the process may evolve.

Formal definition and basic properties

Definition

Given a probability space (Ω, F, P) and a measurable space (S, Σ), an S-valued stochastic process is a collection of S-valued random variables on Ω, indexed by a totally ordered set T ("time"). That is, a stochastic process X is a collection {X_t : t ∈ T} where each X_t is an S-valued random variable on Ω.
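The indeterminacy described above can be illustrated with the simplest discrete-time stochastic process, a symmetric random walk on the integers. The sketch below is plain Python with invented names: from the same initial condition X_0 = 0, different runs produce different trajectories.

```python
import random

def random_walk(n_steps, seed=None):
    """Symmetric random walk: X_0 = 0 and each step adds an
    independent ±1, so X_t is a random variable for every t."""
    rng = random.Random(seed)
    path = [0]
    for _ in range(n_steps):
        path.append(path[-1] + rng.choice((-1, 1)))
    return path

# Same starting point, different evolutions:
walk_a = random_walk(100, seed=1)
walk_b = random_walk(100, seed=2)
```

Both walks satisfy the same definition (a collection of random variables indexed by time), yet the realized sample paths differ.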
Natural language processing
Natural language processing (NLP) is a field of computer science, artificial intelligence, and linguistics concerned with the interactions between computers and human (natural) languages. As such, NLP is related to the area of human–computer interaction. Many challenges in NLP involve natural language understanding, that is, enabling computers to derive meaning from human or natural language input; others involve natural language generation.

History

The history of NLP generally starts in the 1950s, although work can be found from earlier periods. In 1950, Alan Turing published an article titled "Computing Machinery and Intelligence" which proposed what is now called the Turing test as a criterion of intelligence. The Georgetown experiment in 1954 involved fully automatic translation of more than sixty Russian sentences into English. Up to the 1980s, most NLP systems were based on complex sets of hand-written rules.

NLP using machine learning

Major tasks in NLP

Parsing
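To give a flavor of what parsing involves, here is a minimal sketch of a recursive-descent parser for a tiny toy grammar (S → NP VP, NP → Det N, VP → V NP). The grammar, lexicon, and function names are invented for illustration and bear no relation to any production NLP system:

```python
# Toy lexicon mapping words to part-of-speech tags.
LEXICON = {"the": "Det", "a": "Det", "dog": "N", "cat": "N",
           "sees": "V", "chases": "V"}

def parse(words):
    """Parse S -> NP VP; returns a nested tuple tree, or None if
    the input is not covered by the toy grammar."""
    tree, rest = parse_s(list(words))
    return tree if tree is not None and not rest else None

def parse_s(words):
    np, rest = parse_np(words)
    if np is None:
        return None, words
    vp, rest = parse_vp(rest)
    if vp is None:
        return None, words
    return ("S", np, vp), rest

def parse_np(words):
    # NP -> Det N
    if len(words) >= 2 and LEXICON.get(words[0]) == "Det" \
            and LEXICON.get(words[1]) == "N":
        return ("NP", ("Det", words[0]), ("N", words[1])), words[2:]
    return None, words

def parse_vp(words):
    # VP -> V NP
    if words and LEXICON.get(words[0]) == "V":
        np, rest = parse_np(words[1:])
        if np is not None:
            return ("VP", ("V", words[0]), np), rest
    return None, words

tree = parse("the dog chases a cat".split())
```

Hand-written rule systems of the kind dominant before the 1980s were, in essence, far larger versions of this idea; statistical methods later replaced such fixed rules with probabilities learned from corpora.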
Cognitive model
A cognitive model is an approximation of animal cognitive processes (predominantly human) for the purposes of comprehension and prediction. Cognitive models can be developed with or without a cognitive architecture, though the two are not always easily distinguishable.

History

Cognitive modeling historically developed within cognitive psychology and cognitive science (including human factors), and has received contributions from the fields of machine learning and artificial intelligence, to name a few.

Box-and-arrow models

A number of key terms are used to describe the processes involved in the perception, storage, and production of speech.

Computational models

A computational model is a mathematical model in computational science that requires extensive computational resources to study the behavior of a complex system by computer simulation.

Symbolic

Symbolic models are expressed in characters, usually nonnumeric, that require translation before they can be used.

Subsymbolic
ACT-R
Most of ACT-R's basic assumptions are also inspired by the progress of cognitive neuroscience, and ACT-R can be seen and described as a way of specifying how the brain itself is organized in a way that enables individual processing modules to produce cognition.

Inspiration

What ACT-R looks like

The ACT-R theory is implemented as open code in the form of an interpreter written in Lisp. This means that any researcher may download the ACT-R code from the ACT-R website, load it into a Lisp distribution, and gain full access to the theory in the form of the ACT-R interpreter. It also enables researchers to specify models of human cognition in the form of a script in the ACT-R language. The language primitives and data types are designed to reflect the theoretical assumptions about human cognition. Like a programming language, ACT-R is a framework: for different tasks (e.g., Tower of Hanoi, memory for text or for lists of words, language comprehension, communication, aircraft control), researchers create "models" (i.e., programs) in ACT-R.

Brief outline
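ACT-R belongs to the family of production systems, whose core is a match-select-fire cycle over if-then rules. The following is a toy sketch of that cycle in plain Python, not the ACT-R language; the rule format, buffer contents, and counting task are invented purely for illustration:

```python
def run_production_system(buffer, rules, max_cycles=10):
    """Toy production-system cycle: on each cycle, fire the first
    rule whose condition matches the current buffer contents."""
    for _ in range(max_cycles):
        for condition, action in rules:
            if condition(buffer):
                action(buffer)
                break
        else:
            break  # no rule matched: the model halts
    return buffer

# Invented rules for a trivial task: count from "start" up to 3.
rules = [
    (lambda b: b.get("state") == "start",
     lambda b: b.update(state="counting", count=0)),
    (lambda b: b.get("state") == "counting" and b["count"] < 3,
     lambda b: b.update(count=b["count"] + 1)),
    (lambda b: b.get("state") == "counting" and b["count"] == 3,
     lambda b: b.update(state="done")),
]

result = run_production_system({"state": "start"}, rules)
```

A real ACT-R model differs in many ways (declarative memory retrievals, subsymbolic activations, timing), but its models are likewise collections of condition-action rules operating over buffer contents.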
Cognitive architecture
Distinctions

Some well-known cognitive architectures

See also
Cognitive therapy
Cognitive therapy (CT) is a type of psychotherapy developed by American psychiatrist Aaron T. Beck. CT is one of the therapeutic approaches within the larger group of cognitive behavioral therapies (CBT) and was first expounded by Beck in the 1960s. Cognitive therapy is based on the cognitive model, which states that thoughts, feelings, and behavior are all connected, and that individuals can move toward overcoming difficulties and meeting their goals by identifying and changing unhelpful or inaccurate thinking, problematic behavior, and distressing emotional responses. As an example of how CT might work: having made a mistake at work, a man may believe, "I'm useless and can't do anything right at work." People who are working with a cognitive therapist often practice the use of more flexible ways to think and respond, learning to ask themselves whether their thoughts are completely true, and whether those thoughts are helping them to meet their goals.

History

Types