'Mind reading' brain scans reveal secrets of human vision

[Image, courtesy of Fei-Fei Li: Researchers were able to determine that study participants were looking at this street scene even when they saw only its outline.]

Researchers call it mind reading. One at a time, they show a volunteer – who is resting in an MRI scanner – a series of photos of beaches, city streets, forests, highways, mountains and offices. The subject looks at the photos but says nothing. The researchers, however, can usually tell which photo the volunteer is viewing at any given moment, aided by sophisticated software that interprets the signals coming from the scan. Now, psychologists and computer scientists at Stanford, Ohio State University and the University of Illinois at Urbana–Champaign have taken mind reading a step further, with potential impact on how both computers and the visually impaired make sense of the world they see. The results demonstrate that outlines play a crucial role in how the human eye and mind interpret what is seen.
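The "software that interprets the signals" is, at heart, a pattern classifier over voxel activity. The following is a minimal, self-contained sketch of that decoding idea only; the synthetic data, the nearest-template rule, and every name and number here are made up and stand in for real fMRI data and the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
categories = ["beach", "city street", "forest", "highway", "mountain", "office"]
n_voxels = 200  # hypothetical size of the brain region being read out

# Synthetic stand-in for fMRI data: each scene category evokes a
# characteristic (noisy) pattern of voxel activity.
prototypes = {c: rng.normal(size=n_voxels) for c in categories}

def observe(category, noise=0.5):
    """Simulate one scan: the category's pattern plus measurement noise."""
    return prototypes[category] + rng.normal(scale=noise, size=n_voxels)

# "Training": average several scans per category into a template.
templates = {c: np.mean([observe(c) for _ in range(10)], axis=0)
             for c in categories}

def decode(scan):
    """Guess which photo was viewed: the template most correlated with the scan."""
    return max(categories, key=lambda c: np.corrcoef(scan, templates[c])[0, 1])
```

With a reasonable signal-to-noise ratio, `decode(observe("forest"))` recovers `"forest"`; the real studies face far noisier signals and use more careful cross-validated classifiers.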
Christopher Manning
Professor of Linguistics and Computer Science
Natural Language Processing Group, Stanford University

Chris Manning works on systems and formalisms that can intelligently process and produce human languages. His research concentrates on probabilistic models of language and statistical natural language processing, including text understanding, text mining, machine translation, information extraction, named entity recognition, part-of-speech tagging, probabilistic parsing and semantic role labeling, syntactic typology, computational lexicography, and other topics in computational linguistics and machine learning.

Brief Bio: I'm Australian ("I come from a land of wide open spaces ...").
Papers: Most of my papers are available online in my publication list.
Books: My new book, Introduction to Information Retrieval, with Hinrich Schütze and Prabhakar Raghavan, is now available in print.
Conferences and Talks: A few of my talks are available online.
Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations | SpringerLink

A holy grail of computer vision is the complete understanding of visual scenes: a model that is able to name and detect objects, describe their attributes, and recognize their relationships. Understanding scenes would enable important applications such as image search, question answering, and robotic interactions. Much progress has been made in recent years towards this goal, including image classification (Perronnin et al. 2010; Simonyan and Zisserman 2014; Krizhevsky et al. 2012; Szegedy et al. 2015) and object detection (Girshick et al. 2014; Sermanet et al. 2013; Girshick 2015; Ren et al. 2015b). An important contributing factor is the availability of large amounts of data to drive the statistical models that underpin today's advances in computational visual understanding. While the progress is exciting, we are still far from reaching the goal of comprehensive scene understanding: an image is often a rich scene that cannot be fully described in a single summarizing sentence.
Collective intelligence: Ants and brain's neurons
CONTACT: Stanford University News Service (415) 723-2558

STANFORD - An individual ant is not very bright, but ants in a colony, operating as a collective, do remarkable things. A single neuron in the human brain can respond only to what the neurons connected to it are doing, but all of them together can be Immanuel Kant. That resemblance is why Deborah M. Gordon studies ant colonies. "I'm interested in the kind of system where simple units together do behave in complicated ways," she said. No one gives orders in an ant colony, yet each ant decides what to do next. For instance, an ant may have several job descriptions. This kind of undirected behavior is not unique to ants, Gordon said. Gordon studies harvester ants in Arizona and, both in the field and in her lab, the so-called Argentine ants that are ubiquitous in coastal California. Argentine ants came to Louisiana in a sugar shipment in 1908. The motions of the ants confirm the existence of a collective.
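A toy simulation can illustrate how "simple units together do behave in complicated ways" with no one giving orders. This is a made-up sketch, not Gordon's actual model of task allocation; the encounter rule and all parameters are invented for illustration.

```python
import random

random.seed(42)

# Toy model of decentralized task allocation: each ant decides what to do
# next from local cues alone (brief encounters with nestmates), with no
# central controller anywhere in the colony.
N = 100
foraging = [False] * N          # which ants are currently out foraging
foraging[:5] = [True] * 5       # a few scouts start out

def step(food_available):
    for i in range(N):
        # Each ant bumps into a few random nestmates and notes whether
        # any of them is foraging successfully.
        met = random.sample(range(N), 5)
        encounters = sum(1 for j in met if foraging[j] and food_available)
        # Local rule: any positive encounter -> go forage; none -> other tasks.
        foraging[i] = encounters >= 1

# With food available, foraging ramps up colony-wide from a few scouts...
for _ in range(20):
    step(food_available=True)
print(sum(foraging))            # close to 100: most of the colony forages

# ...and collapses when the food disappears, again with no orders given.
step(food_available=False)
print(sum(foraging))            # 0
```

The interesting point is that the global ramp-up and collapse are nowhere written in the rule an individual ant follows; they emerge from many local interactions.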
Oussama Khatib
Professor of Computer Science
Artificial Intelligence Laboratory, Stanford University
Stanford, CA 94305-9010
Gates Building, Room 144
Phone: +1 650 723 9753
Fax: +1 650 725 1449

Research Interests | Publications | Teaching | Short Bio | Contact

Contact: Unfortunately, I get a large amount of email.
Scheduling Appointments: To schedule an appointment, please send me an e-mail message with the specific times you are available.
Graduate Admissions: All the information you need on applying for admission to CS graduate programs is available here.
ILSVRC2017

Introduction
This challenge evaluates algorithms for object localization/detection from images/videos at scale.

News
Jun 25, 2017: The submission server for VID is open, new additional train/val/test images for VID are available now, and the deadline for VID is extended to July 7, 2017, 5pm PDT.

Tentative Timetable
Mar 31, 2017: Development kit, data, and registration made available.
Jun 30, 2017, 5pm PDT: Submission deadline.
July 5, 2017: Challenge results will be released.
July 26, 2017: Most successful and innovative teams present at the CVPR 2017 workshop.

Main Challenges
I: Object localization
The data for the classification and localization tasks will remain unchanged from ILSVRC 2012. In this task, given an image, an algorithm will produce 5 class labels $c_i, i=1,\dots,5$ in decreasing order of confidence and 5 bounding boxes $b_i, i=1,\dots,5$, one for each class label. Let $d(c_i,C_k) = 0$ if $c_i = C_k$ and 1 otherwise.
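From the definition of $d(c_i,C_k)$, the classification part of the criterion averages, over the ground-truth labels $C_k$, the best match among the five predictions; the localization task additionally requires the matching box $b_i$ to overlap a ground-truth box, which this sketch omits. A minimal illustration with made-up labels:

```python
def d(c, C):
    """0 if predicted label c matches ground-truth label C, else 1."""
    return 0 if c == C else 1

def top5_error(predictions, ground_truth):
    """Per-image error: min over the 5 predictions, averaged over
    the ground-truth labels (classification part only)."""
    return sum(min(d(c, C) for c in predictions)
               for C in ground_truth) / len(ground_truth)

preds = ["dog", "cat", "car", "tree", "boat"]   # c_1..c_5, decreasing confidence
print(top5_error(preds, ["cat"]))    # 0.0 -- "cat" appears among the top 5
print(top5_error(preds, ["horse"]))  # 1.0 -- no prediction matches
```

Because only the minimum over the five predictions counts, an algorithm is not penalized for its other four guesses — the familiar "top-5" criterion.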
Fei-Fei Li, Ph.D. | Professor, Stanford University

Dr. Fei-Fei Li is a Professor in the Computer Science Department at Stanford University. She received her Ph.D. degree from the California Institute of Technology and a B.S. in Physics from Princeton University. Fei-Fei is currently the Co-Director of the Stanford Human-Centered AI (HAI) Institute, a Stanford University institute that advances AI research, education, policy and practice to benefit humanity by bringing together interdisciplinary scholarship across the university. Prior to this, Fei-Fei served as the Director of the Stanford AI Lab from 2013 to 2018. She is also a Co-Director and Co-PI of the Stanford Vision and Learning Lab, where she works with the most brilliant students and colleagues worldwide to build smart algorithms that enable computers and robots to see and think, as well as to conduct cognitive and neuroimaging experiments to discover how brains see and think.
Center for the Study of Poverty and Inequality

The Stanford Center on Poverty and Inequality (CPI), one of three National Poverty Centers, is a nonpartisan research center dedicated to monitoring trends in poverty and inequality, explaining what's driving those trends, and developing science-based policy on poverty and inequality. CPI supports research by new and established scholars, trains the next generation of scholars and policy analysts, and disseminates the very best research on poverty and inequality. The current economic climate makes CPI's activities and research especially important. The following are a few critical poverty and inequality facts. Poverty: the U.S. poverty rate, according to the new Supplemental Poverty Measure, is estimated at 16.0 percent; the official poverty rate was recently released by the U.S. Census Bureau. CPI monitors a wide gamut of other poverty and inequality indicators. The activities of CPI are currently supported with core funding from the Office of the Assistant Secretary for Planning and Evaluation at the U.S. Department of Health and Human Services.
Next Big Test for AI: Making Sense of the World - MIT Technology Review

A few years ago, a breakthrough in machine learning suddenly enabled computers to recognize objects shown in photographs with unprecedented—almost spooky—accuracy. The question now is whether machines can make another leap, by learning to make sense of what's actually going on in such images. A new image database, called Visual Genome, could push computers toward this goal and help gauge the progress of machines attempting to better understand the real world. Teaching computers to parse visual scenes is fundamentally important for artificial intelligence. Visual Genome was developed by Fei-Fei Li, a professor who specializes in computer vision and who directs the Stanford Artificial Intelligence Lab, together with several colleagues. Li and her colleagues previously created ImageNet, a database containing more than a million images tagged according to their contents. Visual Genome isn't the only complex image database available for researchers to experiment with.
Vision Recognition and Artificial Intelligence

Computer vision is the science and technology of obtaining models, meaning and control information from visual data. The two main fields of computer vision are computational vision and machine vision. Computational vision is concerned with recording and analyzing visual perception in order to understand it. Machine vision applies the findings of computational vision to benefit people, animals, the environment, and so on. Computer vision has greatly influenced the field of artificial intelligence. ASIMO is one example of how computer vision is an important part of artificial intelligence. Artificial intelligence can also use computer vision to communicate with humans, and to recognize handwritten text and drawings. Another important part of artificial intelligence is passive observation and analysis.
Face recognition based on fringe pattern analysis | Optical Engineering | SPIE

Two-dimensional face-recognition techniques suffer from facial texture and illumination variations. Although 3-D techniques can overcome these limitations, the reconstruction and storage expenses of 3-D information are extremely high. We present a novel face-recognition method that directly utilizes 3-D information encoded in face fringe patterns without having to reconstruct 3-D geometry. In the proposed method, a digital video projector is employed to sequentially project three phase-shifted sinusoidal fringe patterns onto the subject's face.
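As a rough illustration of the ingredient such fringe-projection systems build on (not this paper's actual pipeline, which works on the fringe patterns directly), the classic three-step phase-shifting algorithm recovers a wrapped phase map from three sinusoidal patterns shifted by 120 degrees. Everything below is a synthetic sketch under that assumed phase shift:

```python
import numpy as np

def wrapped_phase(i1, i2, i3):
    """Wrapped phase from three fringe images shifted by -120/0/+120 degrees.

    With I_k = A + B*cos(phi + delta_k), delta = (-2pi/3, 0, +2pi/3):
      I1 - I3        = sqrt(3) * B * sin(phi)
      2*I2 - I1 - I3 = 3 * B * cos(phi)
    so atan2 of the two recovers phi up to 2*pi wrapping.
    """
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic check: build the three shifted images from a known phase map.
phi = np.linspace(-np.pi / 2, np.pi / 2, 5)   # "true" phase (toy 1-D example)
a, b = 0.5, 0.4                               # background and modulation
i1 = a + b * np.cos(phi - 2 * np.pi / 3)
i2 = a + b * np.cos(phi)
i3 = a + b * np.cos(phi + 2 * np.pi / 3)

print(np.allclose(wrapped_phase(i1, i2, i3), phi))  # True
```

In a real system each `i_k` would be a camera image of the face under the k-th projected pattern, and the recovered phase encodes the 3-D shape that the paper's method exploits without explicit reconstruction.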