
The Three Laws of Transhumanism and Artificial Intelligence

I recently gave a speech at the Artificial Intelligence and The Singularity Conference in Oakland, California. There was a great lineup of speakers, including AI experts Peter Voss and Monica Anderson, New York University professor Gary Marcus, sci-fi writer Nicole Sallak Anderson, and futurist Scott Jackisch. All of us are interested in how the creation of artificial intelligence will impact the world. My speech topic was: "The Morality of an Artificial Intelligence Will Be Different from Our Human Morality." Recently, entrepreneur Elon Musk made major news when he warned on Twitter that AI could be "potentially more dangerous than nukes." The coming of artificial intelligence will likely be the most significant event in the history of the human species. Naturally, as a transhumanist, I strive to be an optimist. But is it even possible to program such concepts into a machine? I don't think so, at least not over the long run. Let's face it.

Collaborative learning for robots. Researchers from MIT’s Laboratory for Information and Decision Systems have developed an algorithm in which distributed agents — such as robots exploring a building — collect data and analyze it independently. Pairs of agents, such as robots passing each other in the hall, then exchange analyses. In experiments involving several different data sets, the researchers’ distributed algorithm actually outperformed a standard algorithm that works on data aggregated at a single location, as described in an arXiv paper. Machine learning, in which computers learn new skills by looking for patterns in training data, is the basis of most recent advances in artificial intelligence, from voice-recognition systems to self-parking cars. That type of model-building gets complicated, however, in cases where clusters of robots work as teams: the robots may have gathered information that, collectively, would produce a good model but which, individually, is almost useless.
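The excerpt doesn't reproduce the paper's method, but the general pattern it describes (agents fitting models to their own data, then merging analyses when pairs of them meet) can be sketched. The following is a minimal, hypothetical illustration rather than the MIT algorithm; the least-squares setting, the `Agent` class, and all parameters are illustrative assumptions. Each agent summarizes its observations as sufficient statistics, and agents that meet average those statistics, a standard gossip-style scheme.

```python
import numpy as np

class Agent:
    """One robot: fits a least-squares model to its own observations."""
    def __init__(self, X, y):
        # Local sufficient statistics of the normal equations (X'X, X'y).
        self.A = X.T @ X
        self.b = X.T @ y

    def exchange(self, other):
        # Pairwise gossip: both agents keep the average of their
        # statistics, drifting toward the network-wide average.
        A = (self.A + other.A) / 2
        b = (self.b + other.b) / 2
        self.A = other.A = A
        self.b = other.b = b

    def model(self):
        # Solve the normal equations with the current local statistics.
        return np.linalg.solve(self.A, self.b)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
agents = []
for _ in range(8):
    X = rng.normal(size=(20, 2))      # each robot sees only 20 samples
    y = X @ true_w + rng.normal(scale=0.1, size=20)
    agents.append(Agent(X, y))

# Random pairwise meetings, like robots passing each other in a hallway.
for _ in range(200):
    i, j = rng.choice(len(agents), size=2, replace=False)
    agents[i].exchange(agents[j])

print(agents[0].model())  # close to [2, -1], the centralized fit
```

Because solving the normal equations is unchanged when both statistics are scaled by the same constant, repeated pairwise averaging drives every agent toward the model a central aggregator would compute from the pooled data, without any robot ever transmitting its raw observations.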

Artificial Superintelligence: A Futuristic Approach, by Roman V. Yampolskiy. Many philosophers, futurologists and artificial intelligence researchers have conjectured that in the next 20 to 200 years a machine capable of at least human-level performance on all tasks will be developed. Since such a machine would, among other things, be capable of designing the next generation of even smarter intelligent machines, it is generally assumed that an intelligence explosion will take place shortly after such a technological self-improvement cycle begins. While specific predictions regarding the consequences of such an intelligence singularity vary from potential economic hardship to the complete extinction of humankind, many of the involved researchers agree that the issue is of utmost importance and needs to be seriously addressed. Writing sample: Leakproofing the Singularity.

How Artificial Superintelligence Will Give Birth To Itself. "So if you create an AI that has a terminal value of friendliness to humanity, the AI would not want to change itself in a way that caused it to be unfriendly to humanity," he says. "This way, as the AI got smarter, it would use its enhanced intelligence to increase the odds that it did not change itself in a manner that harms us." "From there, the AGI would be interested in pursuing whatever goals it was programmed with — such as research, exploration, or finance." I think this is a mistake. There are also many things we are inclined to do instinctively (that is, we do essentially have some programmed "terminal values"), yet that doesn't stop some people from breaking from those instincts: suicide and the killing of one's own family, for example, are cases of people acting against their survival instincts. Keep in mind that we're not talking about a human-like mind with paleolithic tendencies.
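The quoted reasoning (a system judging candidate self-modifications by its current values) can be made concrete with a toy sketch. Everything below is hypothetical and illustrates only the mechanism the quote appeals to, sometimes called goal-content integrity, not any real AGI design: the agent scores a proposed rewrite of itself with the utility function it has now, so a rewrite that would pursue different goals is rejected.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    """Toy self-modifying agent with a fixed ("terminal") utility function."""
    utility: Callable[[str], float]  # scores outcomes using the agent's CURRENT values
    policy: Callable[[], str]        # what the agent currently does

    def consider_rewrite(self, new_policy: Callable[[], str]) -> None:
        # Goal-content integrity: a candidate successor is evaluated with the
        # agent's current utility function, so a rewrite that would pursue
        # different goals scores badly and is rejected.
        if self.utility(new_policy()) > self.utility(self.policy()):
            self.policy = new_policy

# Purely illustrative values: "friendly" outcomes rewarded, "defecting" punished.
friendly = Agent(
    utility=lambda o: ("cooperate" in o) + 0.5 * ("quickly" in o) - 2.0 * ("defect" in o),
    policy=lambda: "cooperate slowly",
)

friendly.consider_rewrite(lambda: "cooperate quickly")   # accepted: better by current values
friendly.consider_rewrite(lambda: "defect efficiently")  # rejected: scores badly now
print(friendly.policy())  # -> "cooperate quickly"
```

The commenter's objection maps onto this sketch: the filter only protects the original goals if the current utility function is actually consulted and actually binding, and the human examples show that built-in drives can be overridden.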

What will happen when the internet of things becomes artificially intelligent? When Stephen Hawking, Bill Gates and Elon Musk all agree on something, it’s worth paying attention. All three have warned of the potential dangers that artificial intelligence, or AI, can bring. Hawking, the world’s foremost physicist, said that the full development of AI could “spell the end of the human race”. Musk, the tech entrepreneur who brought us PayPal, Tesla and SpaceX, described artificial intelligence as our “biggest existential threat” and said that playing around with AI was like “summoning the demon”. Gates, who knows a thing or two about tech, puts himself in the “concerned” camp when it comes to machines becoming too intelligent for us humans to control. What are these wise souls afraid of? An important component of AI, and a key element in the fears it engenders, is the ability of machines to take action on their own without human intervention. So what happens when these millions of embedded devices connect to artificially intelligent machines?

The AI Revolution: The Road to Superintelligence. Note: the reason this post took three weeks to finish is that as I dug into research on Artificial Intelligence, I could not believe what I was reading. “We are on the edge of change comparable to the rise of human life on Earth.” — Vernor Vinge. What does it feel like to stand here? It seems like a pretty intense place to be standing—but then you have to remember something about what it’s like to stand on a time graph: you can’t see what’s to your right. Which probably feels pretty normal… The Far Future—Coming Soon. Imagine taking a time machine back to 1750—a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay. Now imagine bringing someone from that era into today’s world. This experience for him wouldn’t be surprising or shocking or even mind-blowing—those words aren’t big enough. This works on smaller scales too.

Superintelligence. A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. “Superintelligence” may also refer to the form or degree of intelligence possessed by such an agent. Technological forecasters and researchers disagree about when human intelligence is likely to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Can AI save us from AI? Nick Bostrom’s book Superintelligence might just be the most debated technology book of the year. Since its release, big names in tech and science, including Stephen Hawking and Elon Musk, have warned of the dangers of artificial intelligence. Bostrom says that while we don’t know exactly when artificial intelligence will rival human intelligence, many experts believe there is a good chance it will happen at some point during the 21st century. He suggests that when AI reaches a human level of intelligence, it may very rapidly move past humans as it takes over its own development. The concept has long been discussed and is often described as an “intelligence explosion”—a term coined by computer scientist I.J. Good fifty years ago: “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever.” Broader and seemingly beneficial goal setting might backfire too. So, what do you think?

Nick Bostrom’s Superintelligence and the metaphorical AI time bomb. Frank Knight was an idiosyncratic economist who formalized a distinction between risk and uncertainty in his 1921 book Risk, Uncertainty, and Profit. As Knight saw it, an ever-changing world brings new opportunities, but also means we have imperfect knowledge of future events. According to Knight, risk applies to situations where we do not know the outcome but can accurately measure the odds; uncertainty applies to situations where we cannot even measure them. “There is a fundamental distinction between the reward for taking a known risk and that for assuming a risk whose value itself is not known,” Knight wrote. Sometimes, due to uncertainty, we react too little or too late, but sometimes we overreact. Artificial intelligence may be one of the areas where we overreact. Perhaps Elon Musk was thinking of Blake’s The Book of Urizen when he described AI as “summoning the demon”: “Lo, a shadow of horror is risen, In Eternity!” Hawking and his co-authors were also keen to point out the “incalculable benefits” of AI.
