Nick Bostrom’s Superintelligence and the metaphorical AI time bomb

Frank Knight was an idiosyncratic economist who formalized the distinction between risk and uncertainty in his 1921 book Risk, Uncertainty and Profit. As Knight saw it, an ever-changing world brings new opportunities but also means we have imperfect knowledge of future events. According to Knight, risk applies to situations where we do not know the outcome but can accurately measure the odds. Uncertainty, on the other hand, applies to situations where we cannot know all the information we need in order to set accurate odds in the first place. “There is a fundamental distinction between the reward for taking a known risk and that for assuming a risk whose value itself is not known,” Knight wrote. Sometimes, under uncertainty, we react too little or too late; sometimes we overreact. Artificial intelligence may be one of the areas where we overreact. Perhaps Elon Musk was thinking of Blake’s The Book of Urizen when he described AI as “summoning the demon.”
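Knight's distinction can be made concrete with a minimal sketch (illustrative only; the helper functions and example numbers are mine, not Knight's): under risk the probabilities are known, so a single expected value is computable; under uncertainty the probabilities themselves are unknown, so at best we can bound the value across a set of plausible distributions.

```python
# "Risk" in Knight's sense: the odds are known, so an expected value exists.
def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs with known probabilities."""
    return sum(p * v for p, v in outcomes)

# A fair coin bet is a known risk: its expected value is well-defined.
fair_bet = [(0.5, 100), (0.5, -100)]
print(expected_value(fair_bet))  # 0.0

# "Uncertainty": the probabilities themselves are unknown. With no single
# distribution to trust, the best we can do is a range over candidates.
def value_range(payoffs, candidate_probs):
    evs = [sum(p * v for p, v in zip(probs, payoffs))
           for probs in candidate_probs]
    return min(evs), max(evs)

# Same payoffs, but three rival guesses at the odds -> a spread, not a number.
print(value_range([100, -100], [(0.3, 0.7), (0.5, 0.5), (0.8, 0.2)]))
```

The point of the sketch is that the second function cannot collapse to a single number without assuming away the uncertainty, which is exactly the situation Knight argued AI-style questions put us in.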
How Self-Replicating Spacecraft Could Take Over the Galaxy

I'm going to re-post here a previous comment I made on this subject, because I think it's worth repeating. Any alien civilization sufficiently developed to span the cosmos will be so far advanced beyond us that we would not be able to comprehend their technology, and in turn they probably wouldn't even recognise us as a sentient, intelligent species. I've always found the "Well, if there are aliens, why haven't they said hello?" argument far too arrogant. There are islands all over the oceans of our world that are nothing more than rocks sticking out of the water with bacteria on them. That's us: the barren rock. The alien probes have probably been through our solar system many times (we'd never know) and looked at our skyscrapers, cities and agriculture.
Can AI save us from AI? | Singularity HUB

Nick Bostrom’s book Superintelligence might just be the most debated technology book of the year. Since its release, big names in tech and science, including Stephen Hawking and Elon Musk, have warned of the dangers of artificial intelligence. Bostrom says that while we don’t know exactly when artificial intelligence will rival human intelligence, many experts believe there is a good chance it will happen at some point during the 21st century. He suggests that when AI reaches a human level of intelligence, it may very rapidly move past humans as it takes over its own development. The concept has long been discussed and is often described as an “intelligence explosion”—a term coined by computer scientist I.J. Good fifty years ago: “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever.” Broad and seemingly beneficial goal setting might backfire too. So, what do you think?
Do Robots Rule the Galaxy?

Astronomy news this week bolstered the idea that the seeds of life are all over our solar system. NASA's MESSENGER spacecraft identified carbon compounds at Mercury's poles. Probing nearly 65 feet beneath the icy surface of a remote Antarctic lake, scientists uncovered a community of bacteria in one of Earth's darkest, saltiest and coldest habitats. And the dune-buggy-like Mars Science Lab is beginning to look for carbon in soil samples. But the rulers of our galaxy may have brains made of the semiconductor materials silicon, germanium and gallium. The idea of malevolent robots subjugating and killing off humans has been a staple of numerous science fiction books and movies. My favorite treatment of this idea is the 1970 film "Colossus: The Forbin Project." A decade ago our worst apprehension of computers was no more than seeing Microsoft's dancing paper clip pop up on the screen.
Superintelligence

A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. “Superintelligence” may also refer to the form or degree of intelligence possessed by such an agent. Technological forecasters and researchers disagree about when human intelligence is likely to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Experts in AI and biotechnology do not expect any of these technologies to produce a superintelligence in the very near future.

Definition

Summarizing the views of intelligence researchers, Linda Gottfredson writes: “Intelligence is a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.”
The Dominant Life Form in the Cosmos Is Probably Superintelligent Robots

Susan Schneider, a professor of philosophy at the University of Connecticut, is one of the few thinkers who has asked what alien minds might be like. She joins a handful of astronomers, including Seth Shostak, director of the Center for SETI Research at the SETI Institute, astrobiologist Paul Davies, and Library of Congress Chair in Astrobiology Stephen Dick, in espousing the view that the dominant intelligence in the cosmos is probably artificial. In her paper “Alien Minds,” written for a forthcoming NASA publication, Schneider describes why alien life forms are likely to be synthetic, and how such creatures might think. “Most people have an iconic idea of aliens as these biological creatures, but that doesn’t make any sense from a timescale argument,” Shostak told me. With the latest updates from NASA’s Kepler mission showing potentially habitable worlds strewn across the galaxy, it’s becoming harder and harder to assert that we’re alone in the universe. I hope she’s right.
The AI Revolution: The Road to Superintelligence - Wait But Why

Note: The reason this post took three weeks to finish is that as I dug into research on Artificial Intelligence, I could not believe what I was reading. It hit me pretty quickly that what’s happening in the world of AI is not just an important topic, but by far THE most important topic for our future. “We are on the edge of change comparable to the rise of human life on Earth.” — Vernor Vinge

What does it feel like to stand here? It seems like a pretty intense place to be standing—but then you have to remember something about what it’s like to stand on a time graph: you can’t see what’s to your right. Which probably feels pretty normal… The Far Future—Coming Soon. Imagine taking a time machine back to 1750—a time when the world was in a permanent power outage, long-distance communication meant either yelling loudly or firing a cannon in the air, and all transportation ran on hay.
The Stanford Astrobiology Course

Astrobiology is at once one of the newest scientific meta-disciplines and one that encompasses some of our oldest and most profound questions. Beyond strictly utilitarian concerns, such as “what is for dinner?” and leaving offspring, asking the three great questions of astrobiology seems to be embedded in what it means to be human. So what is astrobiology? 1. Where do we come from? 2. Where are we going? 3. Are we alone? To fulfill the promise of astrobiology requires a tool not normally in many scientists’ arsenal: space exploration. This website has grown out of the oldest such class in the country, Stanford’s “Astrobiology and Space Exploration” course, and arises as a complement to the lectures available on iTunesU and the many other wonderful astrobiology and space exploration related web sites.
What will happen when the internet of things becomes artificially intelligent?

When Stephen Hawking, Bill Gates and Elon Musk all agree on something, it’s worth paying attention. All three have warned of the potential dangers that artificial intelligence (AI) can bring. Hawking, the world’s foremost physicist, said that the full development of AI could “spell the end of the human race”. Musk, the tech entrepreneur who brought us PayPal, Tesla and SpaceX, described artificial intelligence as our “biggest existential threat” and said that playing around with AI was like “summoning the demon”. Gates, who knows a thing or two about tech, puts himself in the “concerned” camp when it comes to machines becoming too intelligent for us humans to control. What are these wise souls afraid of? An important component of AI, and a key element in the fears it engenders, is the ability of machines to take action on their own without human intervention. So what happens when these millions of embedded devices connect to artificially intelligent machines?
Astrobiology Center

The Columbia Astrobiology* Center (NYC-Astrobiology Consortium) represents a unique consortium of Columbia University departments, the Goddard Institute for Space Studies (NASA), and the American Museum of Natural History. It is an interdisciplinary effort dedicated to investigating the wide range of phenomena that may participate in the origin and evolution of life on Earth and beyond. We undertake fundamental research in many areas, including: the study and modeling of exoplanets, their characteristics and climates; the study of planet formation, solar system meteoritics and early chemistry; and the study of Earth and Martian paleoclimate. *"Astrobiology: the study of the origins, evolution, and future of life in the Universe"
How Artificial Superintelligence Will Give Birth To Itself

"So if you create an AI that has a terminal value of friendliness to humanity, the AI would not want to change itself in a way that caused it to be unfriendly to humanity," he says. "This way, as the AI got smarter, it would use its enhanced intelligence to increase the odds that it did not change itself in a manner that harms us. From there, the AGI would be interested in pursuing whatever goals it was programmed with – such as research, exploration, or finance." I think this is a mistake. There are a lot of things that we are inclined to do instinctively (i.e., we do essentially have some programmed "terminal values"), but that doesn't stop some people from breaking from those instincts – see, for example, suicide or the killing of one's own family, cases of people going against their survival instincts. Keep in mind that we're not talking about a human-like mind with paleolithic tendencies.
Why the USA and NASA need astrobiology

I am an astrobiologist, for 50 years an astronomer, and before that a physicist. My colleague and friend Roger Angel and I started the process of learning how to detect Earth-like planets in 1985. I am a co-author of the NASA booklet The Terrestrial Planet Finder and have served with scientific and technical teams to develop that mission since 1995. As a professional who has moved my research area around many times, I have been both depressed by and concerned about the difficulty my colleagues have in pulling together material that crosses many fields. There are many complex issues facing our country and our world today. The first activity of my astrobiology team was to hold a graduate student conference. Last year the NASA Astrobiology Institute held an internal meeting to explore the range of research of Institute members. I do not know whether astrobiologists will get an opportunity to ask about "Life Out There" in the near future. Neville J.
Artificial Superintelligence: A Futuristic Approach

Artificial Superintelligence: A Futuristic Approach, by Roman V. Yampolskiy. Indiegogo fundraiser for Roman V. Yampolskiy's book. Many philosophers, futurologists and artificial intelligence researchers have conjectured that in the next 20 to 200 years a machine capable of at least human-level performance on all tasks will be developed. Since such a machine would, among other things, be capable of designing the next generation of even smarter intelligent machines, it is generally assumed that an intelligence explosion will take place shortly after such a technological self-improvement cycle begins. While specific predictions regarding the consequences of such an intelligence singularity vary from potential economic hardship to the complete extinction of humankind, many of the involved researchers agree that the issue is of the utmost importance and needs to be seriously addressed. Writing Sample: Leakproofing the Singularity.