
How will we build an artificial human brain?


Re-Evolving Mind, Hans Moravec, December 2000. Computers have permeated everyday life and are worming their way into our gadgets, dwellings, clothes, even bodies. But even if pervasive computing soon automates most of our informational needs, it will leave untouched a vaster number of essential physical tasks: construction, protection, repair, cleaning, transport and so forth will remain in human hands. Robot inventors in home, university and industrial laboratories have tinkered with the problem for most of the century. The first electronic computers in the 1950s did the work of thousands of clerks, and it was a common opinion in the AI labs that, with the right program, readily available computers could encompass any human skill. It's easy to explain the discrepancy in hindsight: the machines simply lacked the raw processing power the job demands. But things are changing. The short answer is that, after decades at about one MIPS (million instructions, or calculations, per second), computer power available to research robots shot through 10, 100 and now 1,000 MIPS starting about 1990 (Figure 1).

patch of skin Touch sensitivity on gadgets and robots is nothing new. A few strategically placed sensors under a flexible, synthetic skin give you pressure sensitivity; add a capacitive, transparent screen to a device and you have touch sensitivity. However, Stanford University's new "super skin" is something special: a thin, highly flexible, super-stretchable, nearly transparent skin that can respond to touch and pressure even when it's being wrung out like a sponge. The brainchild of Stanford associate professor of chemical engineering Zhenan Bao, this "super skin" employs a transparent film of spray-on, single-walled carbon nanotubes that sit in a thin film of flexible silicone, which is then sandwiched between more silicone. This unique makeup allows the malleable skin to measure force response even as it's being stretched or "squeezed like a sponge."
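The sensing principle behind such a skin can be sketched with the textbook parallel-plate model: pressing the skin compresses a compliant dielectric between two conductive layers, capacitance rises, and the measured capacitance can be inverted back into a force estimate. All constants below are illustrative assumptions, not parameters of Stanford's actual device.

```python
# A minimal sketch of capacitive force sensing, assuming a parallel-plate
# model with an elastically compressible dielectric. Constants are invented
# for illustration; they do not describe the real "super skin".

EPS0 = 8.854e-12   # vacuum permittivity, F/m
EPS_R = 3.0        # assumed relative permittivity of the elastomer
AREA = 1e-6        # assumed plate area of one sensing cell, m^2 (1 mm^2)
D0 = 100e-6        # assumed dielectric thickness at rest, m
STIFFNESS = 1e5    # assumed effective spring constant of the elastomer, N/m

def capacitance(gap_m):
    """Parallel-plate capacitance: C = eps0 * eps_r * A / d."""
    return EPS0 * EPS_R * AREA / gap_m

def force_from_capacitance(c_measured):
    """Invert the model: recover the gap from C, then F = k * compression."""
    gap = EPS0 * EPS_R * AREA / c_measured
    return STIFFNESS * (D0 - gap)

c_rest = capacitance(D0)
c_pressed = capacitance(D0 * 0.8)            # 20% compression under touch
print(force_from_capacitance(c_pressed))     # ~2.0 N under these assumptions
```

Because capacitance depends only on geometry and permittivity, the same readout survives stretching as long as the rest-state capacitance of each cell is recalibrated, which is one reason capacitive schemes suit stretchable skins.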

artificial cognition In the 1950s and '60s, artificial-intelligence researchers saw themselves as trying to uncover the rules of thought, but those rules turned out to be far more complicated than anyone had imagined. Since then, artificial-intelligence (AI) research has come to rely instead on probabilities: statistical patterns that computers can learn from large sets of training data. The probabilistic approach has been responsible for most of the recent progress in artificial intelligence, such as voice-recognition systems or the system that recommends movies to Netflix subscribers. Early AI researchers saw thinking as logical inference: if you know that birds can fly and are told that the waxwing is a bird, you can infer that waxwings can fly. The problem with this approach is, roughly speaking, that not all birds can fly, and a single exception falsifies the rule, whereas a probabilistic model merely revises its degree of belief. "With probabilistic reasoning, you get all that structure for free," Goodman says.
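The contrast between brittle logical rules and learned probabilities can be sketched in a few lines. The species table below is a made-up toy example, not real training data:

```python
# Toy illustration of the shift from logical rules to probabilities.
# Entries are invented for illustration: (is_bird, can_fly) per species.

kb = {
    "sparrow": (True, True),
    "robin":   (True, True),
    "wren":    (True, True),
    "penguin": (True, False),   # the exception that breaks "all birds fly"
    "bat":     (False, True),
}

# Logical inference: bird -> flies. One counterexample falsifies the rule.
rule_holds = all(flies for is_bird, flies in kb.values() if is_bird)
print(rule_holds)  # False -- the penguin grounds the rule entirely

# Probabilistic inference: estimate P(flies | bird) from observed cases,
# then assign that credence to a new, unseen bird such as the waxwing.
bird_flight = [flies for is_bird, flies in kb.values() if is_bird]
p_flies_given_bird = sum(bird_flight) / len(bird_flight)
print(p_flies_given_bird)  # 0.75 -- a belief, revisable as data accumulates
```

The probabilistic version degrades gracefully: adding an ostrich lowers the estimate instead of collapsing it to false, which is the "structure for free" that rule-based systems had to patch in by hand.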

Brain imaging can predict how intelligent you are, study finds (Medical Xpress) -- When it comes to intelligence, what factors distinguish the brains of exceptionally smart humans from those of average humans? As science has long suspected, overall brain size matters somewhat, accounting for about 6.7 percent of individual variation in intelligence. More recent research has pinpointed the brain's prefrontal cortex, a region just behind the forehead, as a critical hub for high-level mental processing, with activity levels there predicting another 5 percent of variation in individual intelligence. Now, new research from Washington University in St. Louis points to a further factor: how strongly the prefrontal cortex communicates with the rest of the brain. Published in the Journal of Neuroscience, the findings establish "global brain connectivity" as a new approach for understanding human intelligence. "Our research shows that connectivity with a particular part of the prefrontal cortex can predict how intelligent someone is," says lead author Michael W. Cole.
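"Accounting for X percent of individual variation" is variance explained, the squared Pearson correlation r^2; brain size's 6.7 percent corresponds to a correlation of roughly r = 0.26. The sketch below generates synthetic data purely to illustrate that arithmetic; the variable names and numbers are not from the study.

```python
# Illustrating "percent of variation accounted for" as r^2, using
# synthetic data with a built-in correlation of about 0.26.

import random

random.seed(0)

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

TARGET_R = 0.26  # sqrt(0.067): a 6.7% share of variance

# Hypothetical "brain measure" and "test score" with a weak linear link.
brain = [random.gauss(0, 1) for _ in range(5000)]
score = [TARGET_R * b + random.gauss(0, (1 - TARGET_R**2) ** 0.5)
         for b in brain]

r = pearson_r(brain, score)
print(round(r * r * 100, 1))  # percent of variance explained, near 6.7
```

The small r^2 values are the point of the article: each single predictor (size, prefrontal activity, connectivity) explains only a modest slice of the variation in intelligence.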

Goertzel Contra Dvorsky on Mind Uploading Futurist pundit George Dvorsky recently posted an article on io9, labeled as "DEBUNKERY" and aimed at the topic of mind uploading. According to the good Mr. Dvorsky, "You'll Probably Never Upload Your Mind into a Computer." He briefly lists eight reasons why, in his view, mind uploading will likely never happen. (UPDATE: there is now a video interview on this subject.) Note that he's not merely arguing that mind uploading may come too late for you and me to take advantage of it – he's arguing that it probably will never happen at all! The topic of Dvorsky's skeptical screed is dear to my heart and mind. Every one of his objections has been aired many times before – which is fine, as his post is a journalistic article, not an original scientific or philosophic work, so it doesn't necessarily have to break new ground. In this article I will briefly run through Dvorsky's eight objections and give my own, in some cases idiosyncratic, take on each of them.

cyber friend The search giant has bought DeepMind for more than 400 million dollars. The start-up specializes in artificial intelligence. Is Google preparing an intelligent robot? The Internet group confirmed on Sunday that it has acquired DeepMind, a London-based company working on artificial intelligence. DeepMind is a very discreet firm, founded in 2011 by Demis Hassabis, a chess prodigy and neuroscientist, together with Shane Legg and Mustafa Suleyman. Google has taken an interest in artificial intelligence before, and with 56 billion dollars on hand it can invest anywhere.

brain trouble By Rick Nauert PhD, Senior News Editor; reviewed by John M. Grohol, Psy.D., on October 8, 2012. UK researchers report the discovery of a neural mechanism that protects individuals from having stress and trauma turn into post-traumatic stress disorder (PTSD). Investigators from the University of Exeter Medical School began with the knowledge of the brain's "plasticity," its unique capability to adapt to changing environments, and focused on receptors in the amygdala called protease-activated receptor 1, or PAR1. These receptors act like a command center, telling neurons whether they should stop or accelerate their activity. Normally, PAR1s tell amygdala neurons to remain active and produce vivid emotions; this adaptation helps us keep our fear under control and avoid developing exaggerated responses to mild or irrelevant fear triggers. In the study, researchers used a mouse model in which the PAR1 receptors were genetically deactivated. The study has been published in the journal Molecular Psychiatry.

Who’s conscious? A recent meeting of neuroscientists tried to define a set of criteria for that murky phenomenon called “consciousness”. I don’t know how successful they were; they’ve come out with a declaration on consciousness that isn’t exactly crystal clear. It seems to involve the existence of neural circuitry that exhibits specific states that modulate behavior. They also assert that “the neural substrates of emotions do not appear to be confined to cortical structures”, and this is where they’re losing me. They seem to have reached an agreement that a mammalian neocortex is not necessary for consciousness, which seems entirely reasonable to me. Anyway, here’s their conclusion: “We declare the following: The absence of a neocortex does not appear to preclude an organism from experiencing affective states.” Also, here is an interesting summary of evidence for sophisticated intentional behaviors in the octopus, the only invertebrate to get a shout-out in the declaration at all.
