Mind control: Correcting robot mistakes using EEG brain signals. For robots to do what we want, they need to understand us.
Too often, this means having to meet them halfway: teaching them the intricacies of human language, for example, or giving them explicit commands for very specific tasks. But what if we could develop robots that were a more natural extension of us and that could actually do whatever we are thinking? A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Boston University is working on this problem, creating a feedback system that lets people correct robot mistakes instantly with nothing more than their brains. Using data from an electroencephalography (EEG) monitor that records brain activity, the system can detect if a person notices an error as a robot performs an object-sorting task. The team’s novel machine-learning algorithms enable the system to classify brain waves in the space of 10 to 30 milliseconds.
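As a rough, hypothetical illustration of the classification step a system like this performs, the sketch below labels short EEG windows as "error observed" or "no error" with a linear classifier. Everything in it is an assumption made for illustration: the data is synthetic, and the channel count, window length, and choice of linear discriminant analysis are not details taken from the CSAIL/BU system.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(seed=0)
n_trials, n_channels, n_samples = 200, 48, 32   # illustrative sizes, not the real montage

# Synthetic trials: label 1 = "person noticed an error", label 0 = "no error".
# Error trials get a small extra deflection to mimic an error-related potential.
X = rng.normal(size=(n_trials, n_channels, n_samples))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1, :, 10:20] += 0.5

# Flatten each trial into a feature vector; train on even trials, test on odd.
X_flat = X.reshape(n_trials, -1)
clf = LinearDiscriminantAnalysis()
clf.fit(X_flat[::2], y[::2])

print("predicted label for one held-out trial:", clf.predict(X_flat[1:2])[0])
print("held-out accuracy:", clf.score(X_flat[1::2], y[1::2]))
```

In a live setup, each such decision would have to be produced within tens of milliseconds of the observation so the robot can still correct its motion in time.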
The paper presenting the work was written by BU PhD candidate Andres F.

What if AI could advance the science surrounding dementia? [Science and Technology podcast] Written by Lieve Van Woensel with Sara Suna Lipp. Dementia is a growing public health concern, with no reliable prognosis or effective treatment methods.
Why your brain is not a computer. We are living through one of the greatest of scientific endeavours – the attempt to understand the most complex object in the universe, the brain.
Scientists are accumulating vast amounts of data about structure and function in a huge array of brains, from the tiniest to our own. Tens of thousands of researchers are devoting massive amounts of time and energy to thinking about what brains do, and astonishing new technology is enabling us to both describe and manipulate that activity. We can now make a mouse remember something about a smell it has never encountered, turn a bad mouse memory into a good one, and even use a surge of electricity to change how people perceive faces. We are drawing up increasingly detailed and complex functional maps of the brain, human and otherwise. In some species, we can change the brain’s very structure at will, altering the animal’s behaviour as a result.
And yet there is a growing conviction among some neuroscientists that our future path is not clear.

Loebner Prize. The Loebner Prize is an annual competition in artificial intelligence that awards prizes to the computer programs considered by the judges to be the most human-like.
The format of the competition is that of a standard Turing test. In each round, a human judge simultaneously holds textual conversations with a computer program and a human being via computer. Based upon the responses, the judge must decide which is which. The contest was launched in 1990 by Hugh Loebner in conjunction with the Cambridge Center for Behavioral Studies, Massachusetts, United States.
Since 2014 it has been organised by the AISB at Bletchley Park. It has also been associated with Flinders University, Dartmouth College, the Science Museum in London, the University of Reading, and Ulster University's Magee Campus in Derry, UK City of Culture. Originally, $2,000 was awarded for the most human-seeming program in the competition.
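As a toy illustration of the round format described above (a judge converses with a hidden human and a hidden program and must decide which is which), here is a minimal sketch. The responder functions are placeholders invented for this example, not a real chatbot or anything from the Loebner Prize harness.

```python
import random

def human_responder(prompt: str) -> str:
    # A real person types the reply at this terminal.
    return input(f"(you are the human) {prompt}\n> ")

def program_responder(prompt: str) -> str:
    # Placeholder for a chat program; any canned text will do for the sketch.
    return "That's an interesting question."

def run_round(judge_questions):
    # The judge cannot see who is behind terminal A or B, so shuffle them.
    responders = [("human", human_responder), ("machine", program_responder)]
    random.shuffle(responders)
    for question in judge_questions:
        print(f"Judge: {question}")
        for label, (_, respond) in zip("AB", responders):
            print(f"Terminal {label}: {respond(question)}")
    guess = input("Judge, which terminal is the machine (A/B)? ").strip().upper()
    truth = "A" if responders[0][0] == "machine" else "B"
    return guess == truth  # True means the program failed to pass as human

# Example (interactive):
# run_round(["What did you have for breakfast this morning?"])
```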
Turing’s Rules for the Imitation Game.

Measuring the artificial intelligence quotient. If we’re going to do the Turing Test right, we might as well use the same intelligence tests we apply to humans.
Only by doing so will we ever have a sound basis for claiming that machines are more (or less) intelligent than people. What intelligence do we measure? Classic intelligence tests measure something called “intelligence quotient,” a term that, as noted in this article, was coined a century ago by the German psychologist William Stern. As the article states, the foundation of IQ testing is to gauge human pattern-matching aptitudes in the following three areas:

- Logical: identifying patterns in sequences of concepts
- Mathematical: identifying patterns in sequences of numbers
- Linguistic: identifying patterns in words, primarily focused on semantic patterns such as analogies, classifications, synonyms and antonyms

These cognitive tasks may not be the only types of intelligence worth considering. Clearly, big data is fundamental to this promise.

How smart are our machines becoming?

Testing if a computer has human-level intelligence: Alternative to 'Turing test' proposed.
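As a toy, hypothetical illustration of the "Mathematical" item listed above (identifying patterns in sequences of numbers), the sketch below guesses the next term of a sequence by checking for a constant difference or a constant ratio. It is a simple heuristic written for this example, not part of any real IQ test or of the measurement approach these articles propose.

```python
def next_term(seq):
    # Arithmetic progression: constant difference between consecutive terms.
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    if len(set(diffs)) == 1:
        return seq[-1] + diffs[0]
    # Geometric progression: constant ratio between consecutive terms.
    ratios = [b / a for a, b in zip(seq, seq[1:]) if a != 0]
    if len(ratios) == len(seq) - 1 and len(set(ratios)) == 1:
        return seq[-1] * ratios[0]
    return None  # pattern not recognized by this toy heuristic

print(next_term([2, 4, 6, 8]))   # -> 10
print(next_term([3, 9, 27]))     # -> 81.0
print(next_term([1, 1, 2, 3]))   # -> None (Fibonacci-style, not handled)
```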
A Georgia Tech professor recently offered an alternative to the celebrated "Turing Test" to determine whether a machine or computer program exhibits human-level intelligence.
The Turing Test -- originally called the Imitation Game -- was proposed by computing pioneer Alan Turing in 1950. In practice, some applications of the test require a machine to engage in dialogue and convince a human judge that it is an actual person. Creating certain types of art also requires intelligence, observed Mark Riedl, an associate professor in the School of Interactive Computing at Georgia Tech, prompting him to consider whether that might lead to a better gauge of whether a machine can replicate human thought. "It's important to note that Turing never meant for his test to be the official benchmark as to whether a machine or computer program can actually think like a human," Riedl said. "And yet it has, and it has proven to be a weak measure because it relies on deception."
If you could measure machine intelligence like a human's IQ, what would you measure and how?

How do you measure artificial intelligence?

Biography, Facts, & Education.