Composing Music With Recurrent Neural Networks | hexahedria (Update: A paper based on this work has been accepted at EvoMusArt 2017! See here for more details.) It's hard not to be blown away by the surprising power of neural networks these days. With enough training, so-called "deep neural networks", with many nodes and hidden layers, can do impressively well on modeling and predicting all kinds of data. (If you don't know what I'm talking about, I recommend reading about recurrent character-level language models, Google Deep Dream, and neural Turing machines.) For a while now, I've been floating around vague ideas about writing a program to compose music. Here's a taste of things to come. But first, some background about neural networks, and RNNs in particular. Feedforward Neural Networks: A single node in a simple neural network takes some number of inputs and performs a weighted sum of them, multiplying each input by a weight before adding them all together, then passes the result through an activation function such as a sigmoid. Then we can connect multiple layers together, feeding the outputs of one layer in as the inputs of the next.
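To make the node description concrete, here is a minimal NumPy sketch of a sigmoid node and of two layers connected together; the function names and example numbers are illustrative, not taken from the post:

```python
import numpy as np

def sigmoid(x):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def node_output(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias, passed through the sigmoid.
    return sigmoid(np.dot(weights, inputs) + bias)

def layer_output(inputs, weight_matrix, biases):
    # A layer is just many nodes sharing the same inputs.
    return sigmoid(weight_matrix @ inputs + biases)

x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.4, -0.2])
print(node_output(x, w, bias=0.1))  # a single scalar in (0, 1)

# Two layers connected: outputs of layer 1 become the inputs of layer 2.
W1, b1 = np.array([[0.1, 0.4, -0.2], [0.3, -0.1, 0.2]]), np.zeros(2)
W2, b2 = np.array([[0.5, -0.5]]), np.zeros(1)
print(layer_output(layer_output(x, W1, b1), W2, b2))
```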
Review: Amazon Echo is finally available to all Amazon Echo, the company's desktop, voice-activated personal assistant, became available to the general public for the first time on Tuesday. Gizmag has been spending the past month with Echo and we've found it to be an exciting new product with plenty of potential and no real peers at the moment, but is it really ready for prime time? The hardest part of reviewing the Amazon Echo is first trying to figure out what it is: a desktop personal assistant? A voice-activated streaming audio system? Here's the basic concept: take an attractive, cylindrical Bluetooth speaker, add a handful of top-notch microphones and a Siri-like voice interface capable of performing an ever-expanding menu of tasks, from playing music to ordering products, looking up facts, news, weather and sports, managing your calendar, reading audiobooks and controlling certain smart home appliances. An interesting added feature for the hack-minded is the ability to use Echo in conjunction with IFTTT.
Understanding Convolutional Neural Networks for NLP | WildML When we hear about Convolutional Neural Networks (CNNs), we typically think of Computer Vision. CNNs were responsible for major breakthroughs in Image Classification and are the core of most Computer Vision systems today, from Facebook's automated photo tagging to self-driving cars. More recently we've also started to apply CNNs to problems in Natural Language Processing and gotten some interesting results. In this post I'll try to summarize what CNNs are, and how they're used in NLP. The intuitions behind CNNs are somewhat easier to understand for the Computer Vision use case, so I'll start there, and then slowly move towards NLP. What is Convolution? The easiest way for me to understand a convolution is to think of it as a sliding window function applied to a matrix. Convolution with 3×3 Filter. Imagine that the matrix on the left represents a black-and-white image. You may be wondering what you can actually do with this. The GIMP manual has a few other examples.
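To make the sliding-window idea concrete, here is a minimal NumPy sketch of a 3×3 filter sliding over a small black-and-white image; the example image and filter are illustrative, not from the post (strictly speaking this computes cross-correlation, with no kernel flip, which is also what deep learning libraries compute):

```python
import numpy as np

def convolve2d(image, kernel):
    # Slide the kernel over every valid position of the image and take
    # the element-wise product-and-sum at each stop ("narrow" mode).
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(window * kernel)
    return out

# Example: a vertical-edge-detecting 3x3 filter on a tiny image.
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_filter = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]], dtype=float)
print(convolve2d(image, edge_filter))  # strong responses at the vertical edge
```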
Induction puzzles Induction puzzles are logic puzzles, which are examples of multi-agent reasoning, where the solution is reached via the principle of induction.[1][2] A puzzle's scenario always involves multiple players with the same reasoning capability, who go through the same reasoning steps. According to the principle of induction, a solution to the simplest case makes the solution of the next more complicated case obvious. The muddy children puzzle is the most frequently appearing induction puzzle in the scientific literature on epistemic logic.[4][5][6] The muddy children puzzle is a variant of the well-known wise men or cheating wives/husbands puzzles.[7] Hat puzzles are induction puzzle variations that date back to as early as 1961.[8] In many variations, hat puzzles are described in the context of prisoners.[9][10] In other cases, hat puzzles are described in the context of wise men.[11][12] Muddy children puzzle, logical solution: if there are k muddy children, all of them will step forward together on turn k.
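A small Python simulation of the inductive argument behind the muddy children puzzle may help; this is an illustrative sketch, not from the article. Each muddy child sees the other muddy foreheads, and a child who sees m muddy children concludes on round m + 1 that they must be muddy too, since otherwise the others would all have stepped forward on round m:

```python
def muddy_children(muddy_flags):
    # muddy_flags[i] is True if child i is muddy.
    n_rounds = 0
    stepped = set()
    while not stepped:
        n_rounds += 1
        for child, is_muddy in enumerate(muddy_flags):
            others_seen = sum(muddy_flags) - (1 if is_muddy else 0)
            # Inductive rule: step forward once the rounds elapsed exceed
            # the number of muddy children you can see.
            if is_muddy and n_rounds > others_seen:
                stepped.add(child)
    return n_rounds, sorted(stepped)

# Three muddy children out of five: all three step forward on round 3.
print(muddy_children([True, False, True, True, False]))  # (3, [0, 2, 3])
```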
What a Deep Neural Network thinks about your #selfie Convolutional Neural Networks are great: they recognize things, places and people in your personal photos, signs, people and lights in self-driving cars, crops, forests and traffic in aerial imagery, various anomalies in medical images and all kinds of other useful things. But once in a while these powerful visual recognition models can also be warped for distraction, fun and amusement. In this fun experiment we're going to do just that: We'll take a powerful, 140-million-parameter state-of-the-art Convolutional Neural Network, feed it 2 million selfies from the internet, and train it to classify good selfies from bad ones. Just because it's easy and because we can. And in the process we might learn how to take better selfies :) Convolutional Neural Networks Before we dive in, I thought I should briefly describe what Convolutional Neural Networks (or ConvNets for short) are, in case a reader from a slightly more general audience stumbles by.
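For a sense of what "train it to classify good selfies from bad ones" might look like in code, here is a rough PyTorch sketch of fine-tuning a pretrained ConvNet for a two-class problem. This assumes PyTorch and torchvision; the post's actual network and training pipeline differed, and the one-batch "loader" below is a stand-in for a real selfie dataset:

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a large pretrained ConvNet and swap in a 2-way head:
# class 0 = bad selfie, class 1 = good selfie.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# Stand-in data: one batch of 8 random 224x224 images with random labels.
selfie_loader = [(torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,)))]

for images, labels in selfie_loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```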
Patents for technology to read people's minds hugely increasing - News - Gadgets and Tech - The Independent Fewer than 400 neurotechnology-related patents were filed between 2000 and 2009. But in 2010 alone that number reached 800, and last year 1,600 were filed, according to research company SharpBrains. The patents are for a range of uses, not just for the healthcare technology that might be expected. The company with the most patents is market research firm Nielsen, which has 100. Other uses of the technology that have been patented include devices that can change the thoughts or feelings of those they are used on. But there are still medical uses: some of the patents awarded include technology to measure brain lesions and improve vision. The volume and diversity of the patents show that we are at the beginning of "the pervasive neurotechnology age", the company's CEO Alvaro Fernandez said.
Exercising Sparse Autoencoder | Vanessa's Imiloa Deep learning has recently become a hot topic in both academia and industry. I guess the best way to learn this stuff is to implement it. So I checked the recent tutorial posted at ACL 2012 + NAACL 2013 Tutorial: Deep Learning for NLP (without Magic), and it has a nice 'assignment' for whoever wants to learn about sparse autoencoders. There are two main parts to an autoencoder: feedforward and backpropagation. You can think of an autoencoder as an unsupervised learning algorithm that sets the target values to be equal to the inputs. Thus, the network is forced to learn a compressed representation of the input. Once learning is complete, the weights represent the signals (think of certain abstractions or atoms) learned from the data without supervision.
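Here is a minimal NumPy sketch of those two parts: a feedforward pass with the target set equal to the input, and one backpropagation step on the squared reconstruction error. This is illustrative code, not the assignment's; the sparsity penalty from the exercise is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_visible, n_hidden = 64, 25          # e.g. 8x8 patches -> 25 hidden units
W1 = rng.normal(0, 0.1, (n_hidden, n_visible))
W2 = rng.normal(0, 0.1, (n_visible, n_hidden))
b1, b2 = np.zeros(n_hidden), np.zeros(n_visible)
lr = 0.1

x = rng.random(n_visible)             # one training example

# Feedforward: encode to a compressed hidden code, then reconstruct.
h = sigmoid(W1 @ x + b1)
x_hat = sigmoid(W2 @ h + b2)

# Backpropagation: the "label" is the input itself.
delta2 = (x_hat - x) * x_hat * (1 - x_hat)
delta1 = (W2.T @ delta2) * h * (1 - h)
W2 -= lr * np.outer(delta2, h)
b2 -= lr * delta2
W1 -= lr * np.outer(delta1, x)
b1 -= lr * delta1

print(0.5 * np.sum((x_hat - x) ** 2))  # squared reconstruction error
```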
If you've been thinking about applying for any of the US government's expedited screening programs for frequent fliers—Global Entry, TSA PreCheck, and the like—don't put it off any longer. The process is easier than you might imagine, and the benefits are as good as people say. We'll take you through all the information you need. Which program is right for me? Global Entry: It's the most expensive program, at $100 for five years, but comes with the best benefits: You can skip the lines at passport control and customs when entering the United States and also enjoy TSA PreCheck, Nexus, and Sentri (explained below). Who's eligible: US citizens and permanent residents, and citizens of Germany, the Netherlands, Panama, South Korea, and Mexico. TSA PreCheck: TSA stands for the Transportation Security Administration, the people who screen you and your carry-on baggage. Who's eligible: US citizens and permanent residents. Nexus: Choose this option if you want to save money and aren't in a rush.
This App Wants To Change Email Forever -- By Getting Inside Your Head A new app called Crystal calls itself "the biggest improvement to email since spell-check." Its goal is to help you write emails with empathy. How? Crystal, which launched on Wednesday, exists in the form of a website and a Chrome extension, which integrates the service with your Gmail. Crystal analyzes a person's public online presence to build a personality profile of them. With the personality profile, you'll see advice on how to speak to the person, email them, work with them and sell to them. Here's my profile: I would say that this is a pretty accurate representation of me, though I found that many of my coworkers' profiles were fairly similar to mine. "There are 64 different personality categories someone can be assigned from Crystal, and some are closer to each other on the spectrum than others," Crystal founder Drew D'Agostino told The Huffington Post in an email on Tuesday. As fun as it is to look up all of your coworkers and friends on Crystal, the really special part is how it helps you write emails.
Fast Forward Labs: How do neural networks learn? Neural networks are generating a lot of excitement, as they are quickly proving to be a promising and practical form of machine intelligence. At Fast Forward Labs, we just finished a project researching and building systems that use neural networks for image analysis, as shown in our toy application Pictograph. Our companion deep learning report explains this technology in depth and explores applications and opportunities across industries. As we built Pictograph, we came to appreciate just how challenging it is to understand how neural networks work. Even research teams at large companies like Google and Facebook are struggling to understand how neural network layers interact and how the algorithms “learn,” or improve their performance on a task over time. You can learn more about this on their research blog and explanatory videos. To help understand how neural networks learn, I built a visualization of a network at the neuron level, including animations that show how it learns.
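As a toy illustration of what "learn" means here (this is not Fast Forward Labs' code), a single sigmoid neuron can repeatedly adjust its weights by gradient descent, and its error on the task shrinks over time:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # a simple target rule

w, b, lr = np.zeros(2), 0.0, 0.5
for step in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid neuron output
    grad = p - y                             # gradient of cross-entropy loss
    w -= lr * (X.T @ grad) / len(y)          # nudge weights downhill
    b -= lr * grad.mean()
    if step % 50 == 0:
        loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
        print(step, round(loss, 3))          # the loss falls as it learns
```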
AI machine achieves IQ test score of young child Some people might find it reason to worry; others, reason to be upbeat about what we can achieve in computer science; all await the next chapters in artificial intelligence to see how much more a machine can do to mimic human intelligence. We have already seen what machines can do in arithmetic, chess and pattern recognition. MIT Technology Review poses the bigger question: to what extent do these capabilities add up to the equivalent of human intelligence? To shed some light on this, a team subjected an AI system to a standard IQ test designed for humans; a paper describing their findings has been posted on arXiv. The team is from the University of Illinois at Chicago and an AI research group in Hungary. Results: the system scored a WPPSI-III VIQ that is average for a four-year-old child, but below average for five-to-seven-year-olds. "We found that the WPPSI-III VIQ psychometric test gives a WPPSI-III VIQ to ConceptNet 4 that is equivalent to that of an average four-year-old."