Scientists warn the rise of AI will lead to extinction of humankind

(NaturalNews) Everything you and I are doing right now to try to save humanity and the planet probably won't matter in a hundred years. That's not my own conclusion; it's the conclusion of computer scientist Steve Omohundro, author of a new paper published in the Journal of Experimental & Theoretical Artificial Intelligence. His paper, entitled "Autonomous technology and the greater human good," opens with this ominous warning (1): "Military and economic pressures are driving the rapid development of autonomous systems. We show that these systems are likely to behave in anti-social and harmful ways unless they are very carefully designed."

What Omohundro is really getting at is the inescapable realization that the military's incessant drive to produce autonomous, self-aware killing machines will inevitably result in the rise of AI Terminators that turn on humankind and destroy us all. Lest you think I'm exaggerating, click here to read the technical paper yourself.
Robots that fly... and cooperate

"The dynamics of this robot are quite complicated. In fact, they live in a 12-dimensional space." — Vijay Kumar

Synopsis: Vijay Kumar and his team build flying quadrotors, small, agile robots that swarm, sense each other and form ad hoc teams – for construction, surveying disasters and far more. (A minimal sketch of that 12-dimensional state appears after the killer-robots item below.)

About the Speaker: At the University of Pennsylvania, Vijay Kumar studies the control and coordination of multi-robot formations.

BigDog Beach'n

Controversy Brews Over Role Of ‘Killer Robots’ In Theater of War

Technology promises to improve people’s quality of life, and what could be a better example of that than sending robots instead of humans into dangerous situations? Robots can help conduct research in deep oceans and harsh climates, or deliver food and medical supplies to disaster areas. As the science advances, it’s becoming increasingly possible to dispatch robots into war zones alongside, or instead of, human soldiers.

The idea of a killer robot, as a coalition of international human rights groups has dubbed the autonomous machines, conjures a humanoid Terminator-style robot. Whatever else they do, robots that kill raise moral questions far more complicated than those posed by probes or delivery vehicles. Seeing a slippery slope ahead, human rights groups began lobbying last year for lethal robots to be added to the list of prohibited weapons that includes chemical weapons. “Robots should not have the power of life and death over human beings,” U.N. special rapporteur Christof Heyns wrote in his report. The U.S.
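As a rough illustration of the "12-dimensional space" Kumar mentions in the quadrotor item above: a rigid body in free flight is commonly described by 3 position coordinates, 3 orientation angles, 3 linear velocities and 3 angular rates. The Python sketch below is a minimal, hypothetical example of stepping such a state forward in time; the state layout and the simple Euler integrator (which treats body rates as Euler-angle rates) are my own assumptions for illustration, not Kumar's actual model or code.

import numpy as np

# Hypothetical 12-dimensional quadrotor state (an illustrative layout):
#   [0:3]   position x, y, z            (m)
#   [3:6]   roll, pitch, yaw            (rad)
#   [6:9]   linear velocity vx, vy, vz  (m/s)
#   [9:12]  angular rates p, q, r       (rad/s)
state = np.zeros(12)

def step(state, accel, ang_accel, dt=0.01):
    """Advance the 12-D state one time step with naive Euler integration.

    accel and ang_accel are 3-vectors that a real controller would
    derive from the four rotor thrusts; here they are just inputs.
    """
    nxt = state.copy()
    nxt[0:3] += state[6:9] * dt    # position follows velocity
    nxt[3:6] += state[9:12] * dt   # attitude follows angular rate (simplified)
    nxt[6:9] += accel * dt         # velocity follows acceleration
    nxt[9:12] += ang_accel * dt    # angular rate follows angular acceleration
    return nxt

# Example: one step of free fall from rest.
state = step(state, accel=np.array([0.0, 0.0, -9.81]), ang_accel=np.zeros(3))
print(state[0:3], state[6:9])  # position still ~0, vertical velocity now -0.0981 m/s

A real controller has to stabilize the vehicle in this full 12-dimensional state using only four rotor speeds, which is what makes the control problem Kumar describes hard.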
Advanced Humanoid Male Robot

Military Robot 2013 DARPA LS3 Automatically Follows Soldiers

Should I Be Afraid of the Future?

About: Geophysical disasters, global warming, robot uprising, zombie apocalypse, overpopulation and, last but not least, the end of the Mayan calendar... humanity faces many threats! Will we survive the end of this year? And if we do, what's lurking around the corner next? What is science fiction, and what is science fact? "Extraordinary claims require extraordinary evidence."

Should I Be Afraid of the Future is a collaborative project by Bold Futures, who researched and developed the concept; Envisioning Technology, who crafted it all into the poster; and Ana Viegas, who expanded it into this website. This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License.
Unfair Advantages of Emotional Computing

Earlier this week, Softbank CEO Masayoshi Son announced an amazing new robot called Pepper. The most amazing feature isn't that it will only cost $2,000, or that Pepper is intended to babysit your kids and work the registers at retail stores. What's really remarkable is that Pepper is designed to understand and respond to human emotion. Heck, understanding human emotion is tough enough for most HUMANS. There is a new field of "affective computing" coming your way that will give entrepreneurs and marketers a real unfair advantage.

What are the unfair advantages? Take Beyond Verbal, a start-up in Tel Aviv whose software detects emotion in a speaker's tone of voice, for example. Better than that, the software itself can also pinpoint and influence how consumers make decisions. For example, if this person is an innovator, you want to offer the latest and greatest product. Talk about targeted advertising! (A toy sketch of this kind of rule follows below.)

How can this improve quality of life? She tells a story about how she and her boyfriend were in a nasty fight.
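To make the targeted-advertising idea above concrete, here is a minimal, hypothetical Python sketch of the rule the article describes: a detected emotion plus a consumer profile decides which offer to show. The emotion labels, profile names, OFFERS table and pick_offer function are all illustrative assumptions of mine, not Beyond Verbal's actual API or product.

# Hypothetical affect-driven targeting rule, as described in the article:
# detected emotion + consumer profile -> which offer to show.
# All labels and logic below are illustrative, not a real vendor API.

OFFERS = {
    "innovator": "the latest and greatest product",
    "bargain_hunter": "a limited-time discount",
    "loyalist": "an upgrade to what they already own",
}

def pick_offer(profile: str, emotion: str) -> str:
    """Choose an offer from a (hypothetical) voice-analysis result."""
    if emotion == "frustrated":
        # De-escalate before trying to sell anything.
        return "an apology and a support contact"
    # Per the article, an "innovator" gets the newest product.
    return OFFERS.get(profile, "the default catalog page")

print(pick_offer("innovator", "excited"))    # -> the latest and greatest product
print(pick_offer("loyalist", "frustrated"))  # -> an apology and a support contact

The point of the sketch is only that once software can label a caller's emotional state, routing that label into existing ad-targeting logic is a one-line lookup.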
Notes for IEEE paper

Stephen Hawking Thinks These 3 Things Could Destroy Humanity

Stephen Hawking may be most famous for his work on black holes and gravitational singularities, but the world-renowned physicist has also become known for his outspoken ideas about things that could destroy human civilization. Hawking suffers from a motor neuron disease similar to amyotrophic lateral sclerosis, or ALS, which left him paralyzed and unable to speak without a voice synthesizer. But that hasn't stopped the University of Cambridge professor from making proclamations about the wide range of dangers humanity faces — including ourselves. Here are a few things Hawking has said could bring about the demise of human civilization.

Artificial intelligence

Hawking is part of a small but growing group of scientists who have expressed concerns about "strong" artificial intelligence (AI) — intelligence that could equal or exceed that of a human. "The development of full artificial intelligence could spell the end of the human race," Hawking told the BBC in December 2014.

Human aggression