There Are Two Kinds of AI, and the Difference Is Important - Popular Science.
The Hidden Costs of Automated Thinking.

Like many medications, the wakefulness drug modafinil, which is marketed under the trade name Provigil, comes with a small, tightly folded paper pamphlet.
For the most part, its contents—lists of instructions and precautions, a diagram of the drug’s molecular structure—make for anodyne reading. The subsection called “Mechanism of Action,” however, contains a sentence that might induce sleeplessness by itself: “The mechanism(s) through which modafinil promotes wakefulness is unknown.” This approach to discovery—answers first, explanations later—accrues what I call intellectual debt. It’s possible to discover what works without knowing why it works, and then to put that insight to use immediately, assuming that the underlying mechanism will be figured out later. In some cases, we pay off this intellectual debt quickly. In the past, intellectual debt has been confined to a few areas amenable to trial-and-error discovery, such as medicine.
Consider image recognition.
Five questions you can use to cut through AI hype.

Two weeks ago, the United Arab Emirates hosted AI Everything, its first major AI conference and one of the largest AI applications conferences in the world.
The event was an impressive testament to the breadth of industries in which companies are now using machine learning. It also served as an important reminder of how the business world can obfuscate and oversell the technology’s abilities. In response, I’d like to briefly outline the five questions I typically use to assess the quality and validity of a company’s technology: 1. What is the problem it’s trying to solve? I always start with the problem statement.

The Quest to Make a Bot That Can Smell as Well as a Dog.
How malevolent machine learning could derail AI.

Artificial intelligence won’t revolutionize anything if hackers can mess with it.
That’s the warning from Dawn Song, a professor at UC Berkeley who specializes in studying the security risks involved with AI and machine learning. Speaking at EmTech Digital, an event in San Francisco produced by MIT Technology Review, Song warned that new techniques for probing and manipulating machine-learning systems—known in the field as “adversarial machine learning” methods—could cause big problems for anyone looking to harness the power of AI in business.
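Song’s talk does not name a single attack, but the classic illustration of adversarial machine learning is the fast gradient sign method (FGSM), which nudges an input in the direction that most increases a model’s loss. Below is a minimal PyTorch sketch under that assumption; the function name and epsilon value are illustrative choices, not anything from the talk.

```python
# A minimal FGSM sketch in PyTorch (illustrative; the talk named no
# specific attack). `model` is any differentiable image classifier.
import torch

def fgsm_attack(model, loss_fn, image, label, epsilon=0.03):
    """Return `image` perturbed so as to increase the classifier's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), label)
    loss.backward()
    # Step each pixel slightly in the direction that raises the loss;
    # the change is typically too small for a human to notice.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

Perturbations of this kind are exactly the sort of probing Song warns about: the modified image looks unchanged to a person but can flip the model’s prediction.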
DeepMind and Google: the battle to control artificial intelligence.

One afternoon in August 2010, in a conference hall perched on the edge of San Francisco Bay, a 34-year-old Londoner called Demis Hassabis took to the stage.
Walking to the podium with the deliberate gait of a man trying to control his nerves, he pursed his lips into a brief smile and began to speak: “So today I’m going to be talking about different approaches to building…” He stalled, as though just realising that he was stating his momentous ambition out loud. And then he said it: “AGI”. AGI stands for artificial general intelligence, a hypothetical computer program that can perform intellectual tasks as well as, or better than, a human. AGI will be able to complete discrete tasks, such as recognising photos or translating languages, which are the single-minded focus of the multitude of artificial intelligences (AIs) that inhabit our phones and computers. But it will also add, subtract, play chess and speak French.

Don’t look now: why you should be worried about machines reading your emotions.

Could a program detect potential terrorists by reading their facial expressions and behavior?
This was the hypothesis put to the test by the US Transportation Security Administration (TSA) in 2003, as it began testing a new surveillance program called Screening of Passengers by Observation Techniques, or Spot for short. While developing the program, the TSA consulted Paul Ekman, emeritus professor of psychology at the University of California, San Francisco.
Decades earlier, Ekman had developed a method to identify minute facial expressions and map them on to corresponding emotions. This method was used to train “behavior detection officers” to scan faces for signs of deception. But when the program was rolled out in 2007, it was beset with problems. Ekman tried to distance himself from Spot, claiming his method was being misapplied.

Inside the ‘Black Box’ of a Neural Network.

A philosopher argues that an AI can’t be an artist.

On March 31, 1913, in the Great Hall of the Musikverein concert house in Vienna, a riot broke out in the middle of a performance of an orchestral song by Alban Berg.
Chaos descended. Furniture was broken. Police arrested the concert’s organizer for punching Oscar Straus, a little-remembered composer of operettas. Later, at the trial, Straus quipped about the audience’s frustration. The punch, he insisted, was the most harmonious sound of the entire evening. You may not enjoy Schoenberg’s dissonant music, which rejects conventional tonality to arrange the 12 notes of the scale according to rules that don’t let any predominate.

AAAS: Machine learning ‘causing science crisis’.

Machine-learning techniques used by thousands of scientists to analyse data are producing results that are misleading and often completely wrong.
Dr Genevera Allen from Rice University in Houston said that the increased use of such systems was contributing to a “crisis in science”. She warned scientists that if they didn’t improve their techniques they would be wasting both time and money. Her research was presented at the American Association for the Advancement of Science in Washington. A growing amount of scientific research involves using machine learning software to analyse data that has already been collected.

DeepMind AI breakthrough on protein folding made scientists melancholy.

It was with a strangely deflated feeling in his gut that Harvard biologist Mohammed AlQuraishi made his way to Cancun for a scientific conference in December.
Strange because a major advance had just been made in his field, something that might normally make him happy. Deflated because the advance hadn’t been made by him or by any of his fellow academic researchers. It had been made by a machine. DeepMind, an AI company that Google bought in 2014, had outperformed all the researchers who’d submitted entries to the Critical Assessment of Structure Prediction (CASP) conference, which is basically a fancy science contest for grown-ups.
Every two years, researchers working on one of the biggest puzzles in biochemistry, known as the protein folding problem, try to prove how good their predictive powers are by submitting a prediction about the 3D shapes that certain proteins will take.

New AI fake text generator may be too dangerous to release, say creators.
The creators of a revolutionary AI system that can write news stories and works of fiction – dubbed “deepfakes for text” – have taken the unusual step of not releasing their research publicly, for fear of potential misuse.
OpenAI, a nonprofit research company backed by Elon Musk, Reid Hoffman, Sam Altman, and others, says its new AI model, called GPT-2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public in order to allow more time to discuss the ramifications of the technological breakthrough. At its core, GPT-2 is a text generator. The AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next.
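That predict-then-extend loop is easy to see in code. Here is a minimal sketch of autoregressive sampling, assuming the small GPT-2 model that OpenAI later released publicly and the Hugging Face transformers library; the prompt and sampling parameters are illustrative choices, not anything from the article.

```python
# A minimal sketch of GPT-2's generation loop, assuming the publicly
# released small model and the Hugging Face `transformers` library.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "In a shocking finding, scientists discovered"  # illustrative prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# The model repeatedly predicts a distribution over the next token and
# samples from it, extending the text one token at a time.
output_ids = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```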
The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output and the wide variety of potential uses.

This is how AI bias really happens—and why it’s so hard to fix.

Over the past few months, we’ve documented how the vast majority of AI’s applications today are based on the category of algorithms known as deep learning, and how deep-learning algorithms find patterns in data.
We’ve also covered how these technologies affect people’s lives: how they can perpetuate injustice in hiring, retail, and security and may already be doing so in the criminal legal system.

Google Just Open Sourced TensorFlow, Its Artificial Intelligence Engine.

The World’s Fastest Supercomputer Breaks an AI Record.

Why are Machine Learning Projects so Hard to Manage?

I’ve watched lots of companies attempt to deploy machine learning — some succeed wildly and some fail spectacularly. One constant is that machine learning teams have a hard time setting goals and setting expectations. Why is this?

I slide back into the MRI machine, adjust the mirror above the lacrosse-helmet-like setup holding my skull steady so that I can see the screen positioned behind my head, then resume my resting position: video-game button pad and emergency abort squeeze ball in my hands, placed crosswise across the breastbone like a mummy. My brain scan and the results of this MRI battery, if they were not a demo, would eventually be fed into a machine learning algorithm.
A team of scientists and researchers would use it to help discover how human beings respond to social situations.

What is artificial intelligence? Your AI questions, answered.

Stephen Hawking has said, “The development of full artificial intelligence could spell the end of the human race.” Elon Musk claims that AI is humanity’s “biggest existential threat.” That might have people asking: Wait, what? But these grand worries are rooted in research. Along with Hawking and Musk, prominent figures at Oxford and UC Berkeley and many of the researchers working in AI today believe that advanced AI systems, if deployed carelessly, could end all life on Earth.

Interact with a deep learning AI at DuckDuckGo.

Home - deeplearning.ai.

The state of AI in 2019.

It’s a common psychological phenomenon: repeat any word enough times, and it eventually loses all meaning, disintegrating like soggy tissue into phonetic nothingness.
For many of us, the phrase “artificial intelligence” fell apart in this way a long time ago. AI is everywhere in tech right now, said to be powering everything from your TV to your toothbrush, but never have the words themselves meant less. It shouldn’t be this way. While the phrase “artificial intelligence” is unquestionably misused, the technology is doing more than ever — for both good and bad. It’s being deployed in health care and warfare; it’s helping people make music and write books; it’s scrutinizing your resume, judging your creditworthiness, and tweaking the photos you take on your phone.
How computers got shockingly good at recognizing images.

Right now, I can open up Google Photos, type "beach," and see my photos from various beaches I've visited over the last decade. I never went through my photos and labeled them; instead, Google identifies beaches based on the contents of the photos themselves. This seemingly mundane feature is based on a technology called deep convolutional neural networks, which allows software to understand images in a sophisticated way that wasn't possible with prior techniques. In recent years, researchers have found that the accuracy of the software gets better and better as they build deeper networks and amass larger data sets to train them.
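The article names no framework or architecture, but a toy sketch helps make "deep convolutional neural network" concrete. The PyTorch model below stacks convolution and pooling layers to turn pixels into class scores; it is purely illustrative, not Google Photos' actual model, and every name in it is invented here.

```python
# A toy convolutional classifier in PyTorch (illustrative only).
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Stacked convolution + pooling layers learn progressively more
        # abstract visual features (edges -> textures -> object parts).
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # A final linear layer maps the learned features to class scores,
        # e.g. "beach" vs. "mountain".
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)  # (N, 32, 8, 8) for 32x32 inputs
        return self.classifier(x.flatten(1))

model = TinyConvNet()
scores = model(torch.randn(1, 3, 32, 32))  # one fake 32x32 RGB image
print(scores.shape)  # torch.Size([1, 10])
```

Making such a network deeper, and feeding it more labeled images, is exactly the scaling recipe the article describes.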
That scaling of networks and data sets has created an almost insatiable appetite for computing power, boosting the fortunes of GPU makers like Nvidia and AMD. Google developed its own custom neural networking chip several years ago, and other companies have scrambled to follow Google’s lead.

These Portraits Were Made by AI: None of These People Exist.

Check out these rather ordinary looking portraits.

2018 Was the Year That Tech Put Limits on AI.

One Giant Step for a Chess-Playing Machine.

The Welfare State Is Committing Suicide by Artificial Intelligence.

Everyone likes to talk about the ways that liberalism might be killed off, whether by populism at home or adversaries abroad. Fewer talk about the growing indications in places like Denmark that liberal democracy might accidentally commit suicide.

A Study on Driverless-Car Ethics Offers a Troubling Look Into Our Values.
The first time Azim Shariff met Iyad Rahwan—the first real time, after communicating with him by phone and e-mail—was in a driverless car. It was November, 2012, and Rahwan, a thirty-four-year-old professor of computing and information science, was researching artificial intelligence at the Masdar Institute of Science and Technology, a university in Abu Dhabi.

AI is sending people to jail—and getting it wrong.

How do you fight an algorithm you cannot see?

60 Minutes: Facial and emotional recognition; how one man is advancing artificial intelligence.

Governance of Artificial Intelligence.
Victoria Krakovna (@vkrakovna).

AI Alignment Podcast: Inverse Reinforcement Learning and the State of AI Alignment with Rohin Shah.

Finland’s grand AI experiment.

The Elements of AI - a free online course.

Never mind killer robots—here are six real AI dangers to watch out for in 2019.

DARPA wants to build an AI to find the patterns hidden in global chaos.

This clever AI hid data from its creators to cheat at its appointed task.

Artificial Intelligence Has Some Explaining to Do.

We Need to Save Ignorance From AI - Nautilus.
Google’s AI Guru Wants Computers to Think More Like Brains.

A radical new neural network design could overcome big challenges in AI.
Future of AI.

AI Language.

When Tech Knows You Better Than You Know Yourself.

Neuton: A new, disruptive neural network framework for AI applications.

AI thinks like a corporation—and that’s worrying - Open Voices.

A look at the surprisingly quarrelsome field of artificial intelligence - Axios.

The AI Cold War With China That Threatens Us All.

What is machine learning? We drew you another flowchart.

One of the fathers of AI is worried about its future.

The rare form of machine learning that can spot hackers who have already broken in.

In the Age of A.I., Is Seeing Still Believing?
Is this AI? We drew you a flowchart to work it out.

A robot scientist will dream up new materials to advance computing and fight pollution.

China’s state-run press agency has created an ‘AI anchor’ to read the news.

Why Elon Musk fears artificial intelligence.

A.I. Is Helping Scientists Predict When and Where the Next Big Earthquake Will Be.

Establishing an AI code of ethics will be harder than people think.

Can Artificial Intelligence Be Smarter Than a Person?

New AI Strategy Mimics How Brains Learn to Smell.

China’s leaders are softening their stance on AI.

Machine Learning Confronts the Elephant in the Room.

Machine learning — Is the emperor wearing clothes?
A plan to advance AI by exploring the minds of children.

Artificial General Intelligence Is Here, and Impala Is Its Name.

Self-driving cars are headed toward an AI roadblock.

Bias detectives: the researchers striving to make algorithms fair.

AI Gaydar and Other Stories of the Death of Ignorance.

One 30-page document contains everything you need to know about AI.

IKEA furniture and the limits of AI - The Kamprad test.

Google’s New AI Head Is So Smart He Doesn’t Need AI.

A New Study Suggests There Could Have Been Intelligent Life on Earth Before Humans.
One machine to rule them all: A ‘Master Algorithm’ may emerge sooner than you think.

Elon Musk Wants You to Watch This Documentary About the Dangers of A.I.

AI providers will increasingly compete with management consultancies - Leave it to the experts.

The workplace of the future - AI-spy.

Emmanuel Macron Q&A: France’s President Discusses Artificial Intelligence Strategy.

What Will Our Society Look Like When Artificial Intelligence Is Everywhere?
Inside the Chinese lab that plans to rewire the world with AI.

AI Has a Hallucination Problem That’s Proving Tough to Fix.