Self-organizing map
A self-organizing map (SOM) or self-organizing feature map (SOFM) is a type of artificial neural network (ANN) that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional), discretized representation of the input space of the training samples, called a map. Self-organizing maps differ from other artificial neural networks in that they use a neighborhood function to preserve the topological properties of the input space. This makes SOMs useful for visualizing low-dimensional views of high-dimensional data, akin to multidimensional scaling. The model was first described as an artificial neural network by the Finnish professor Teuvo Kohonen, and is sometimes called a Kohonen map or network.[1][2] Like most artificial neural networks, SOMs operate in two modes: training and mapping. A self-organizing map consists of components called nodes or neurons. Large SOMs display emergent properties.
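A minimal sketch of how such a map is typically trained (draw a sample, find the best-matching unit, pull it and its grid neighbors toward the sample), assuming a two-dimensional grid of nodes, a Gaussian neighborhood function and exponentially decaying learning rate and radius; these choices are illustrative, not Kohonen's exact published formulation:

```python
# Illustrative SOM training sketch (assumptions: 2-D grid, Gaussian neighborhood,
# exponentially decaying learning rate and radius).
import numpy as np

def train_som(data, grid_h=10, grid_w=10, n_iters=1000, lr0=0.5, sigma0=3.0):
    n_features = data.shape[1]
    # One weight vector per node on the grid.
    weights = np.random.rand(grid_h, grid_w, n_features)
    # Grid coordinates of every node, used by the neighborhood function.
    coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)
    for t in range(n_iters):
        lr = lr0 * np.exp(-t / n_iters)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / n_iters)    # decaying neighborhood radius
        x = data[np.random.randint(len(data))]   # pick a random training sample
        # Best-matching unit: node whose weight vector is closest to x.
        dists = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)
        # Gaussian neighborhood centred on the BMU, shrinking over time.
        grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
        h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
        # Move every node's weights toward x, scaled by the neighborhood.
        weights += lr * h[..., None] * (x - weights)
    return weights
```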

Multilayer perceptron A multilayer perceptron (MLP) is a feedforward artificial neural network model that maps sets of input data onto a set of appropriate outputs. An MLP consists of multiple layers of nodes in a directed graph, with each layer fully connected to the next one. Except for the input nodes, each node is a neuron (or processing element) with a nonlinear activation function. If a multilayer perceptron has a linear activation function in all neurons, that is, a linear function that maps the weighted inputs to the output of each neuron, then it is easily proved with linear algebra that any number of layers can be reduced to the standard two-layer input-output model (see perceptron). The two main activation functions used in current applications are both sigmoids, y(v_i) = tanh(v_i) and y(v_i) = 1 / (1 + e^(-v_i)), in which the former function is a hyperbolic tangent which ranges from -1 to 1, and the latter, the logistic function, is similar in shape but ranges from 0 to 1.
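A short sketch of these two activations and a single-hidden-layer forward pass; the array shapes and variable names are illustrative assumptions, not taken from the article:

```python
# Sketch of the two sigmoid activations and an MLP forward pass.
import numpy as np

def tanh(v):
    return np.tanh(v)                    # ranges from -1 to 1

def logistic(v):
    return 1.0 / (1.0 + np.exp(-v))      # ranges from 0 to 1

def mlp_forward(x, W1, b1, W2, b2):
    hidden = tanh(W1 @ x + b1)           # nonlinear hidden layer
    output = logistic(W2 @ hidden + b2)  # nonlinear output layer
    return output

# With linear activations instead, W2 @ (W1 @ x + b1) + b2 collapses to a single
# affine map, which is why the layers can be reduced to a two-layer model.
```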

Protein Secondary Structure Prediction with Neural Nets: Feed-Forward Networks Introduction to feed-forward nets Feed-forward nets are the best-known and most widely used class of neural network. The popularity of feed-forward networks derives from the fact that they have been applied successfully to a wide range of information processing tasks in such diverse fields as speech recognition, financial prediction, image compression, medical diagnosis and protein structure prediction; new applications are being discovered all the time. In common with all neural networks, feed-forward networks are trained, rather than programmed, to carry out the chosen information processing tasks. The feed-forward architecture Feed-forward networks have a characteristic layered architecture, with each layer comprising one or more simple processing units called artificial neurons or nodes. [Diagram: 2-layer perceptron] Feed-forward nets are generally implemented with an additional node, called the bias unit, in all layers except the output layer. Training a feed-forward net proceeds example by example, as sketched below.
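In the usual formulation this reduces to a loop of forward pass, error measurement and weight update. A minimal sketch with one hidden layer, a bias unit in each non-output layer, squared error and plain gradient descent; all sizes, names and hyperparameters are illustrative assumptions, not taken from this article:

```python
# Sketch of training a feed-forward net with one hidden layer and bias units.
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def train(X, Y, n_hidden=8, lr=0.1, epochs=1000, seed=0):
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], Y.shape[1]
    W1 = rng.normal(0, 0.1, (n_in + 1, n_hidden))   # +1 row for the bias unit
    W2 = rng.normal(0, 0.1, (n_hidden + 1, n_out))
    for _ in range(epochs):
        for x, y in zip(X, Y):
            xb = np.append(x, 1.0)                  # bias unit on the input layer
            h = sigmoid(xb @ W1)
            hb = np.append(h, 1.0)                  # bias unit on the hidden layer
            out = sigmoid(hb @ W2)
            # Backpropagate the squared-error gradient.
            d_out = (out - y) * out * (1 - out)
            d_h = (W2[:-1] @ d_out) * h * (1 - h)
            W2 -= lr * np.outer(hb, d_out)
            W1 -= lr * np.outer(xb, d_h)
    return W1, W2
```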

Feature learning Feature learning or representation learning[1] is a set of techniques in machine learning that learn a transformation of "raw" inputs to a representation that can be effectively exploited in a supervised learning task such as classification. Feature learning algorithms themselves may be either unsupervised or supervised, and include autoencoders,[2] dictionary learning, matrix factorization,[3] restricted Boltzmann machines[2] and various forms of clustering.[2][4][5] When the feature learning can be performed in an unsupervised way, it enables a form of semi-supervised learning in which features are first learned from an unlabeled dataset and then employed to improve performance in a supervised setting with labeled data.[6][7] Clustering as feature learning: k-means clustering can be used for feature learning by clustering an unlabeled set to produce k centroids, then using these centroids to produce k additional features for a subsequent supervised learning task.
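A sketch of that k-means recipe, assuming scikit-learn is available; the variable names, cluster count and the choice of logistic regression as the downstream classifier are illustrative assumptions:

```python
# Sketch: k centroids learned from unlabeled data become k extra features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_unlabeled = rng.normal(size=(500, 10))                    # unlabeled data for clustering
X_train, y_train = rng.normal(size=(100, 10)), rng.integers(0, 2, 100)

# 1. Learn k centroids from the unlabeled set.
km = KMeans(n_clusters=8, n_init=10).fit(X_unlabeled)

# 2. Use distances to the k centroids as k additional features.
X_train_aug = np.hstack([X_train, km.transform(X_train)])

# 3. Train the supervised model on the augmented representation.
clf = LogisticRegression(max_iter=1000).fit(X_train_aug, y_train)
```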

Feedforward neural network In a feedforward network, information always moves in one direction; it never goes backwards. A feedforward neural network is an artificial neural network in which connections between the units do not form a directed cycle. This is different from recurrent neural networks. The feedforward neural network was the first and simplest type of artificial neural network devised. Single-layer perceptron: The simplest kind of neural network is a single-layer perceptron network, which consists of a single layer of output nodes; the inputs are fed directly to the outputs via a series of weights. A perceptron can be created using any values for the activated and deactivated states as long as the threshold value lies between the two. Perceptrons can be trained by a simple learning algorithm that is usually called the delta rule. A multi-layer neural network can compute a continuous output instead of a step function.
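A minimal sketch of a single-layer perceptron trained with the delta rule, assuming a binary step activation with 0/1 states; the learning rate and the AND example are illustrative:

```python
# Delta-rule training of a single-layer perceptron (illustrative sketch).
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=50):
    w = np.zeros(X.shape[1])    # one weight per input
    b = 0.0                     # bias weight
    for _ in range(epochs):
        for xi, target in zip(X, y):
            output = 1.0 if xi @ w + b > 0 else 0.0   # step activation
            error = target - output
            w += lr * error * xi                      # delta rule update
            b += lr * error
    return w, b

# Example: learn the logical AND function (linearly separable).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)
w, b = train_perceptron(X, y)
```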

Autoencoder An autoencoder, autoassociator or Diabolo network[1]:19 is an artificial neural network used for learning efficient codings.[2] The aim of an autoencoder is to learn a compressed, distributed representation (encoding) for a set of data, typically for the purpose of dimensionality reduction. Architecturally, the simplest form of the autoencoder is a feedforward, non-recurrent neural net that is very similar to the multilayer perceptron (MLP), with an input layer, an output layer and one or more hidden layers connecting them. For each input x: do a feed-forward pass to compute activations at all hidden layers, then at the output layer to obtain an output x̂; measure the deviation of x̂ from the input x (typically using the squared error); backpropagate the error through the net and perform weight updates. (This algorithm trains one sample at a time, but batch learning is also possible.) Autoencoders can also be used to learn overcomplete feature representations of data.
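A minimal sketch of that per-sample procedure for a one-hidden-layer autoencoder with squared error; the layer size, learning rate and sigmoid activations are illustrative assumptions:

```python
# One-hidden-layer autoencoder trained one sample at a time with squared error.
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def train_autoencoder(X, n_hidden=4, lr=0.1, epochs=500, seed=0):
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W_enc = rng.normal(0, 0.1, (n_in, n_hidden))   # encoder weights
    W_dec = rng.normal(0, 0.1, (n_hidden, n_in))   # decoder weights
    for _ in range(epochs):
        for x in X:
            h = sigmoid(x @ W_enc)          # compressed code
            x_hat = sigmoid(h @ W_dec)      # reconstruction of x
            # Squared-error gradient, backpropagated through both layers.
            d_out = (x_hat - x) * x_hat * (1 - x_hat)
            d_h = (W_dec @ d_out) * h * (1 - h)
            W_dec -= lr * np.outer(h, d_out)
            W_enc -= lr * np.outer(x, d_h)
    return W_enc, W_dec
```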

Restricted Boltzmann machine Diagram of a restricted Boltzmann machine with three visible units and four hidden units (no bias units). A restricted Boltzmann machine (RBM) is a generative stochastic neural network that can learn a probability distribution over its set of inputs. RBMs were initially invented under the name Harmonium by Paul Smolensky in 1986,[1] but only rose to prominence after Geoffrey Hinton and collaborators invented fast learning algorithms for them in the mid-2000s. RBMs have found applications in dimensionality reduction,[2] classification,[3] collaborative filtering, feature learning[4] and topic modelling.[5] They can be trained in either supervised or unsupervised ways, depending on the task. Restricted Boltzmann machines can also be used in deep learning networks. Structure: an RBM consists of a matrix of weights W = (w_ij), with a weight w_ij associated with the connection between hidden unit h_j and visible unit v_i, as well as bias weights (offsets) a_i for the visible units and b_j for the hidden units. The energy of a configuration (v, h) is E(v, h) = -Σ_i a_i v_i - Σ_j b_j h_j - Σ_i Σ_j v_i w_ij h_j or, in vector form, E(v, h) = -aᵀv - bᵀh - vᵀWh, where v and h are the vectors of visible and hidden units and a and b are their bias vectors.
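A sketch of this energy function and one step of contrastive-divergence (CD-1) training for binary units; the learning rate, shapes and sampling details are illustrative assumptions, not Hinton's exact published recipe:

```python
# RBM energy and a single CD-1 update for binary units (illustrative sketch).
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def energy(v, h, W, a, b):
    # E(v, h) = -a^T v - b^T h - v^T W h
    return -(a @ v) - (b @ h) - (v @ W @ h)

def cd1_update(v0, W, a, b, lr=0.05, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    # Positive phase: sample hidden units given the data vector v0.
    p_h0 = sigmoid(b + v0 @ W)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase: reconstruct the visible units, then recompute hidden probabilities.
    p_v1 = sigmoid(a + W @ h0)
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(b + v1 @ W)
    # Approximate gradient of the log-likelihood.
    W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
    a += lr * (v0 - v1)
    b += lr * (p_h0 - p_h1)
    return W, a, b
```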

Perceptron A perceptron (or multilayer perceptron) is a neural network in which the neurons are connected to one another in successive layers. The first layer consists of input neurons, where the input signals are applied. It is followed by one or more 'hidden' layers, which provide more 'intelligence', and finally there is the output layer, which gives the result of the perceptron. All neurons of a given layer are connected to all neurons of the next layer, so that the input signal propagates through the successive layers. Single-layer perceptron: The single-layer perceptron is the simplest form of a neural network, designed in 1958 by Rosenblatt (also called Rosenblatt's perceptron). [Diagram: Rosenblatt's perceptron] It is possible to extend the number of classes to more than two by extending the output layer with multiple output neurons. Training algorithm, notation: x = input vector, w = weight vector, b = bias.
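A sketch of the multi-class extension just mentioned, with one output neuron per class and the standard perceptron update; the learning rate and names are illustrative assumptions:

```python
# Multi-class perceptron: one output neuron (weight column) per class.
import numpy as np

def train_multiclass_perceptron(X, y, n_classes, lr=0.1, epochs=50):
    W = np.zeros((X.shape[1], n_classes))   # weight vector per output neuron
    b = np.zeros(n_classes)                 # bias per output neuron
    for _ in range(epochs):
        for xi, target_class in zip(X, y):
            scores = xi @ W + b
            predicted = np.argmax(scores)   # winning output neuron
            if predicted != target_class:
                # Reinforce the correct class, weaken the wrongly predicted one.
                W[:, target_class] += lr * xi
                b[target_class] += lr
                W[:, predicted] -= lr * xi
                b[predicted] -= lr
    return W, b
```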

History of the Perceptron The evolution of the artificial neuron has progressed through several stages. Its roots are firmly grounded in the neurological work done primarily by Santiago Ramon y Cajal and Sir Charles Scott Sherrington. Ramon y Cajal was a prominent figure in the exploration of the structure of nervous tissue and showed that, despite their ability to communicate with each other, neurons were physically separated from other neurons. With a greater understanding of the basic elements of the brain, efforts were made to describe how these basic neurons could result in overt behaviors, to which William James was a prominent theoretical contributor. Working from the beginnings of neuroscience, Warren McCulloch and Walter Pitts in their 1943 paper, "A Logical Calculus of Ideas Immanent in Nervous Activity," contended that neurons with a binary threshold activation function were analogous to first order logic sentences.

Online machine learning Online machine learning is used when the data becomes available in a sequential fashion, in order to determine a mapping from the dataset to the corresponding labels. The key difference between online learning and batch (or "offline") learning techniques is that in online learning the mapping is updated after the arrival of every new datapoint in a scalable fashion, whereas batch techniques are used when one has access to the entire training dataset at once. Online learning can be used for a process occurring in time, for example the value of a stock given its history and other external factors, in which case the mapping updates as time goes on and more and more samples arrive. Ideally in online learning, the memory needed to store the function remains constant even with added datapoints, since the solution computed at one step is updated when a new datapoint becomes available, after which that datapoint can then be discarded.
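A minimal sketch of such a learner: a linear model updated with one stochastic-gradient step per incoming (x, y) pair, after which the pair can be discarded. The squared-error loss and learning rate are illustrative assumptions:

```python
# Online linear regression: constant memory, one update per incoming datapoint.
import numpy as np

class OnlineLinearRegressor:
    def __init__(self, n_features, lr=0.01):
        self.w = np.zeros(n_features)   # model size stays fixed as data arrives
        self.b = 0.0
        self.lr = lr

    def update(self, x, y):
        # One gradient step on the squared error for this single datapoint.
        error = (self.w @ x + self.b) - y
        self.w -= self.lr * error * x
        self.b -= self.lr * error

    def predict(self, x):
        return self.w @ x + self.b

# Usage: call model.update(x_t, y_t) each time a new datapoint arrives,
# then discard (x_t, y_t).
```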
