
Artificial Intelligence @ MIRI


Artificial Intelligence introduction The PRL project aims to naturally express the reasoning of computer scientists and mathematicians when they justify programs or claims; it represents mathematical knowledge and makes it accessible in problem solving. These goals are similar to goals in AI, and our project has strong ties to the field. We see AI as having a definite birth event and date: the Dartmouth Conference in the summer of 1956, organized by John McCarthy and Marvin Minsky. According to many AI researchers, that conference set the agenda for a large part of computer science, in the sense that AI successes seeded separate research areas of CS, some no longer even associated with AI. It is clear that our project has benefited substantially from AI research, and that will continue. For us, the science fiction is a world in which the progeny of Nuprl are helping computer scientists crack the P = NP problem or helping mathematicians settle the Riemann hypothesis.

Association for the Advancement of Artificial Intelligence

Stanford to Research the Effects of Artificial Intelligence What will intelligent machines mean for society and the economy 30, 50, or even 100 years from now? That’s the question Stanford University scientists are hoping to take on with a new project, the One Hundred Year Study on Artificial Intelligence (AI100). “If your goal is to create a process that looks ahead 30 to 50 to 70 years, it’s not altogether clear what artificial intelligence will mean, or how you would study it,” said Russ Altman, a professor of bioengineering and computer science at Stanford. “But it’s a pretty good bet that Stanford will be around, and that whatever is important at the time, the university will be involved in it.” The future, and potential, of artificial intelligence has come under fire and increasing scrutiny in the past several months, after both renowned physicist, cosmologist and author Stephen Hawking and high-tech entrepreneur Elon Musk warned of what they perceive as a mounting danger from developing AI technology. Written by: Sharon Gaudin

You (YOU!) Can Take Stanford's 'Intro to AI' Course Next Quarter, For Free Stanford has been offering portions of its robotics coursework online for a few years now, but professors Sebastian Thrun and Peter Norvig are kicking things up a notch (okay, lots of notches) with next quarter's CS221: Introduction to Artificial Intelligence. For the first time, you can take this course, along with several hundred Stanford undergrads, without having to fill out an application, pay tuition, or live in a dorm. This is more than just downloading materials and following along with a live stream; you're actually going to have to do all the same work as the Stanford students. There's a book you'll need to get. There will be at least 10 hours per week of studying, along with weekly graded homework assignments. You won't technically earn credits for the course unless you're a Stanford student, but for all practical purposes, you'll be getting the exact same knowledge and experience -- transmitted directly to you by none other than two living Jedis of modern AI.

Peering into the Future: AI and Robot Brains In Singularity or Transhumanism: What Word Should We Use to Discuss the Future? on Slate, Zoltan Istvan writes: "The singularity people (many at Singularity University) don't like the term transhumanism. Transhumanists don't like posthumanism." See what the proponents of these words mean by them, and why the old Talmudic rabbis and Jesuits are probably laughing their socks off. Progress toward AI? Baby X, a 3D-simulated human child, is getting smarter day by day. "An experiment in machine learning, Baby X is a program that imitates the biological processes of learning, including association, conditioning and reinforcement learning." This is precisely the sixth approach to developing AI that is least discussed by “experts” in the field… and that I have long believed to be essential, in several ways. It's coming: meet Jibo, advertised as "the world's first family robot." Ever hear of “neuromorphic architecture”? Now… how to keep what we produce sane?
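As a toy illustration of the "association, conditioning and reinforcement" ingredients mentioned above, here is a minimal Rescorla-Wagner style conditioning update. This is not Baby X's actual code; the learning rate, reward value, and trial count are invented for the example. The idea it demonstrates is that an association is strengthened trial by trial in proportion to the prediction error.

```python
# Illustrative sketch only: a classic Rescorla-Wagner style conditioning update,
# NOT Baby X's actual learning mechanism. Parameter values are arbitrary.

def rescorla_wagner(trials, alpha=0.3, reward=1.0):
    """Strengthen an association V between a cue and a reward over repeated trials.

    On each trial the associative strength moves toward the observed reward
    in proportion to the prediction error -- the core error-driven idea behind
    conditioning and reinforcement-style learning.
    """
    v = 0.0           # current associative strength (the learned "expectation")
    history = []
    for _ in range(trials):
        prediction_error = reward - v   # surprise: reward received minus reward expected
        v += alpha * prediction_error   # reinforce the association by a fraction of the error
        history.append(v)
    return history

if __name__ == "__main__":
    for trial, strength in enumerate(rescorla_wagner(10), start=1):
        print(f"trial {trial:2d}: association strength = {strength:.3f}")
```

Each trial nudges the expectation toward the reward, which is the same error-driven principle that underlies reinforcement learning.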

How DARPA Is Making a Machine Mind out of Memristors Artificial intelligence has long been the overarching vision of computing: always the goal, never within reach. But using memristors from HP and steady funding from DARPA, computer scientists at Boston University are on a quest to build an electronic analog of the human brain. The software they are developing, called MoNETA (Modular Neural Exploring Traveling Agent), should be able to function more like a mammalian brain than a conventional computer. At least, that's what they're claiming in a new feature in IEEE Spectrum. There's reason to be optimistic that this attempt might be different from the AI letdowns that have come before it. The Boston U. team, by its own admission, doesn't yet know exactly what these platforms will look like, but they seem very confident that they will soon be a reality. Decide for yourself whether MoNETA is the real deal by clicking through the source link below. [IEEE Spectrum]

Artificial intelligence: two common misconceptions Recent comments by Elon Musk and Stephen Hawking, as well as a new book on machine superintelligence by Oxford professor Nick Bostrom, have the media buzzing with concerns that artificial intelligence (AI) might one day pose an existential threat to humanity. Should we be worried? Let’s start with expert opinion. A recent survey of the world’s top-cited living AI scientists found, among other conclusions, that AI scientists strongly expect “high-level machine intelligence” (HLMI), that is, AI that “can carry out most human professions at least as well as a typical human,” to be built sometime this century. First, should we trust expert opinion on the timing of HLMI and machine superintelligence, and can we do better than expert opinion? Given the uncertainty involved, we should be skeptical both of confident claims that HLMI is coming soon and of confident claims that HLMI is very far away. Second, what about social impact?

MoNETA: A Mind Made from Memristors Memristors are dense, cheap, and tiny, and at present they also have a high failure rate, characteristics that bear an intriguing resemblance to the brain's synapses. That failure rate means the architecture must tolerate defects in individual devices, much the way brains gracefully degrade in performance as synapses are lost, without sudden system failure. Basically, memristors bring data close to computation, the way biological systems do, and they use very little power to store that information, just as the brain does. For a comparable function, the new hardware will use two to three orders of magnitude less power than an Nvidia Fermi-class GPU. For the first time, we will begin to bridge the main divide between biological and traditional computation. The memristor addresses a basic hardware challenge of neuromorphic computing, the need to move and manipulate data at the same time, by keeping storage and computation in the same place, drastically cutting power consumption and space.
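To make those two properties concrete, here is a minimal sketch, not MoNETA or the Boston University hardware; the array size, failure model, and values are invented for illustration. It models a memristor crossbar computing a weighted sum where the data is stored, and shows that randomly failed devices degrade the result gradually rather than causing sudden system failure.

```python
# Illustrative sketch only -- not MoNETA or the BU/HP design.
# A crossbar read-out computes a dot product "in place": each column current
# is the sum of input voltages weighted by the stored conductances.
import numpy as np

rng = np.random.default_rng(0)

def crossbar_output(conductances, input_voltages):
    """Ideal crossbar read-out: column currents = inputs weighted by conductances."""
    return input_voltages @ conductances

def with_failures(conductances, failure_rate):
    """Randomly knock out a fraction of devices (stuck at zero conductance)."""
    mask = rng.random(conductances.shape) >= failure_rate
    return conductances * mask

# A random 64x16 "synaptic" array and one input pattern.
G = rng.uniform(0.0, 1.0, size=(64, 16))
x = rng.uniform(0.0, 1.0, size=64)

ideal = crossbar_output(G, x)
for rate in (0.0, 0.05, 0.20):
    degraded = crossbar_output(with_failures(G, rate), x)
    err = np.linalg.norm(degraded - ideal) / np.linalg.norm(ideal)
    print(f"device failure rate {rate:4.0%}: relative output error {err:.2%}")
```

Running it shows the output error growing with the fraction of failed devices instead of jumping to total failure, the graceful degradation the article compares to synapse loss.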
