Are the robots about to rise? Google's new director of engineering thinks so… It's hard to know where to start with Ray Kurzweil. With the fact that he takes 150 pills a day and receives weekly intravenous infusions of a dizzying list of vitamins, dietary supplements, and substances that sound about as scientifically effective as face cream: coenzyme Q10, phosphatidylcholine, glutathione? Or with the fact that he believes he has a good chance of living for ever? He just has to stay alive "long enough" to be around when the great life-extending technologies kick in (he's 66, and he believes that "some of the baby-boomers will make it through"). But then everyone's allowed their theories. What puts this into context is what came next: Google has bought almost every machine-learning and robotics company it can find, or at least rates, and those are just the big deals. Bill Gates calls him "the best person I know at predicting the future of artificial intelligence". So far, so sci-fi. Well, yes.
How Much Longer Before Our First AI Catastrophe? As I distinctly recall, some speculated that the stock market crash of 1987 was due to high-frequency trading by computers, and, mindful of this possibility, I think regulators passed rules to prevent computers from trading in that pattern again. I remember something vague about "throttles" being installed in the trading software that kick in whenever they see a sudden, global spike in the area in which they are trading. These throttles were supposed to slow trading down to a point where human operators could see what was happening and judge whether an unintentional feedback loop was under way. This was 1987. I don't know whether regulators have mandated other changes to computer trading software in the various panics and spikes since then. But yes, I definitely agree this is a very good example of narrow AI getting us into trouble. These are all examples of narrow machine intelligence. Obviously no system is going to be perfect. So it goes.
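For what it's worth, the mechanism the commenter half-remembers can be sketched in a few lines. The following is a minimal illustration of a "throttle" in the sense described above; the class name, window length, and threshold are my own assumptions, not anything from an actual exchange rulebook:

```python
# Hypothetical sketch of the "throttle" idea: pause automated order flow when
# prices move too far, too fast, so a human operator can intervene.
from collections import deque
import time

class TradingThrottle:
    def __init__(self, window_seconds=60, max_move_pct=5.0):
        self.window_seconds = window_seconds   # look-back window for the spike check
        self.max_move_pct = max_move_pct       # price move that trips the throttle
        self.prices = deque()                  # (timestamp, price) observations

    def record_price(self, price, now=None):
        now = time.time() if now is None else now
        self.prices.append((now, price))
        # Drop observations older than the look-back window.
        while self.prices and now - self.prices[0][0] > self.window_seconds:
            self.prices.popleft()

    def tripped(self):
        """True if the price moved more than max_move_pct within the window."""
        if len(self.prices) < 2:
            return False
        oldest, newest = self.prices[0][1], self.prices[-1][1]
        return abs(newest - oldest) / oldest * 100 > self.max_move_pct

# Usage: the trading loop checks the throttle before sending each order.
throttle = TradingThrottle()
throttle.record_price(100.0, now=0)
throttle.record_price(93.0, now=30)   # a 7% drop in 30 seconds
if throttle.tripped():
    print("Throttle tripped: halt automated orders, alert a human operator.")
```

The actual post-1987 safeguards, as I understand them, are exchange-level circuit breakers rather than per-program throttles, but the logic is the same: detect an abnormally fast move and hand control back to humans.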
Hawking predicts uploading the brain into a computer Professor Stephen Hawking has predicted that it could be possible to preserve a mind as powerful as his on a computer, though not with technology existing today, The Telegraph reports. Hawking said the brain operates in a way similar to a computer program, meaning it could in theory be kept running without a body to power it. "I think the brain is like a program in the mind, which is like a computer, so it's theoretically possible to copy the brain onto a computer and so provide a form of life after death." He made the comments at the 33rd Cambridge Film Festival, where the documentary Hawking received a special gala screening presented by its subject.
Fin de siècle The themes of fin de siècle political culture were very controversial and have been cited as a major influence on fascism.[5][6] The major political theme of the era was revolt against materialism, rationalism, positivism, bourgeois society and liberal democracy.[5] The fin-de-siècle generation supported emotionalism, irrationalism, subjectivism and vitalism,[6] while the mindset of the age saw civilization as being in a crisis that required a massive and total solution.[5] Fin de siècle culture has been perceived to have influenced 20th-century culture; Bohemian counterculture, for example, resembles punk counterculture in that both celebrate a romantic and willful sense of decay and rejection of social order.[7] Degeneration theory: as fin de siècle citizens tried to decipher the world in which they lived, their attitudes tended toward science. Pessimism: Algernon: I hope tomorrow will be a fine day, Lane. Lane: It never is, sir.
Risk of robot uprising wiping out human race to be studied 26 November 2012 Cambridge researchers are to assess whether technology could end up destroying human civilisation. The Centre for the Study of Existential Risk (CSER) will study dangers posed by biotechnology, artificial life, nanotechnology and climate change. The scientists said that to dismiss concerns of a potential robot uprising would be "dangerous". Fears that machines may take over have been central to the plots of some of the most popular science fiction films. Perhaps most famous is Skynet, the rogue computer system depicted in the Terminator films, which gained self-awareness and fought back after first being developed by the US military. But despite being the subject of far-fetched fantasy, the researchers said the concept of machines outsmarting us demanded mature attention: "What we're trying to do is to push it forward in the respectable scientific community."
Why a superintelligent machine may be the last thing we ever invent "...why would a smart computer be any more capable of recursive self-improvement than a smart human?" I think it mostly hinges on how artificial materials are more mutable than organic ones. We humans have already developed lots of ways to enhance our mental functions: libraries, movies, computers, crowdsourced R&D, and so on. But most of this augmentation is done by offloading work onto tools and machinery external to the body; actually changing the brain itself has been very slow going for us. "...would a single entity really be capable of comprehending the entirety of its own mind?" It probably won't need to. The same idea applies to doctors: no single doctor comprehends the whole of medicine, yet medicine still advances through specialisation and collaboration. If genes and proteins can generate such complexity after billions of years of natural selection, it seems reasonable to conclude that minds and culture can reach still greater levels of complexity through guided engineering.
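The contrast between offloaded augmentation and direct self-modification can be made concrete with a toy model. This is my own illustration, not anything from the thread; the function names, rates, and step counts are arbitrary assumptions. When each step's gain is proportional to current capability, growth compounds instead of accumulating linearly:

```python
# Toy contrast: improvement at a fixed external rate vs. recursive
# self-improvement, where the gain rate scales with current ability.
def fixed_improvement(intelligence, rate=1.0, steps=10):
    """External tools add a constant amount of capability per step."""
    for _ in range(steps):
        intelligence += rate
    return intelligence

def recursive_improvement(intelligence, rate=0.5, steps=10):
    """Each step's gain is proportional to current ability: compounding."""
    for _ in range(steps):
        intelligence += rate * intelligence
    return intelligence

print(fixed_improvement(1.0))      # 11.0   -- linear growth
print(recursive_improvement(1.0))  # ~57.67 -- exponential growth (1.5**10)
```

Under these made-up numbers the two agents start identical, but the one that can rewrite itself pulls away exponentially, which is the intuition behind the next excerpt's definition of the singularity.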
Is It Time to Give Up on the Singularity? I actually like the term "singularity" and think it has a pretty specific definition: it is when an intelligence uses that intelligence to further increase its own intelligence. It really should be called the second singularity, though. The first was the evolution of humans: if you think of humans in evolutionary terms, we are a singularity. The second singularity would be the same kind of shift in the exponential growth curve. One strange thought, though, is that we might already be in the second singularity.

DARPA's New Biotech Division Wants To Create A Transhuman Future They do realize that Blade Runner is NOT a future to aspire to, right? I don't believe in a sentient, sapient life form, whether synthetic, organic or even a hybrid, EVER being property or purpose-built for a position. If they are built by humans or made from our genome, they are our children, provided they can attain the level of awareness and sapience the least of us have. This is wrong, it must NEVER happen; otherwise we'll be going down the road of every science fiction story that has shown what happens when you enslave a species and think you have complete control over it. And if you doubt that they intend to enslave anything they create, why else would you add a kill switch? It's rather like Leo Szilard's letter to Roosevelt. Afterwards, Einstein said of that fateful letter and its results in Hiroshima and Nagasaki, "If I had known they were going to do this, I would have become a shoemaker." I mean, I agree. Think of the power of Einstein's work.
Sure, Artificial Intelligence May End Our World, But That Is Not the Main Problem The robots will rise, we're told. The machines will assume control. For decades we have heard these warnings and fears about artificial intelligence taking over and ending humankind. Such scenarios are not only currency in Hollywood but increasingly find supporters in science and philosophy. On Tuesday, leading scientist Stephen Hawking joined the ranks of the singularity prophets, especially the darker ones, when he told the BBC that "the development of full artificial intelligence could spell the end of the human race." The problem with such scenarios is not that they are necessarily false; who can predict the future? These issues are perhaps far less sexy than superintelligence or the end of humankind. (Mark Coeckelbergh is Professor of Technology and Social Responsibility at De Montfort University in the UK, and the author of Human Being @ Risk and Money Machines.)
Can we build an artificial superintelligence that won't kill us? At some point in our future, an artificial intelligence will emerge that's smarter, faster, and vastly more powerful than us. Once this happens, we'll no longer be in charge. But what will happen to humanity? And how can we prepare for this transition? Luke Muehlhauser is the executive director of the Machine Intelligence Research Institute (MIRI), a group dedicated to figuring out the various ways we might be able to build friendly smarter-than-human intelligence. io9: How did you come to be aware of the friendliness problem as it relates to artificial superintelligence (ASI)? Muehlhauser: Sometime in mid-2010 I stumbled across a 1965 paper by I.J. Good: "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever." Spike Jonze's latest film, Her, has people buzzing about artificial intelligence. Her is a fantastic film, but its portrayal of AI is set up to tell a good story, not to be accurate.