
IBM Watson


Read the Web :: Carnegie Mellon University

Can computers learn to read? We think so. "Read the Web" is a research project that attempts to create a computer system that learns over time to read the web. Since January 2010, our computer system called NELL (Never-Ending Language Learner) has been running continuously, attempting to perform two tasks each day. First, it attempts to "read," or extract facts from text found in hundreds of millions of web pages (e.g., playsInstrument(George_Harrison, guitar)). Second, it attempts to improve its reading competence, so that it can extract more facts from the web, more accurately. So far, NELL has accumulated over 50 million candidate beliefs by reading the web, and it holds these at different levels of confidence.
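The passage above describes NELL's core data model: predicate(subject, object) triples held as candidate beliefs at varying confidence. A minimal sketch of that idea, assuming invented names throughout (this is not NELL's actual code or data structures):

```python
# Hypothetical sketch of a NELL-style candidate-belief store: facts are
# predicate(subject, object) triples held at a confidence level, and only
# beliefs above a threshold are "promoted" to the knowledge base.
# All class and method names here are illustrative assumptions.

class BeliefStore:
    def __init__(self, promote_threshold=0.9):
        self.promote_threshold = promote_threshold
        self.candidates = {}  # (predicate, subject, obj) -> confidence

    def observe(self, predicate, subject, obj, confidence):
        """Record a candidate belief, keeping the highest confidence seen."""
        key = (predicate, subject, obj)
        self.candidates[key] = max(confidence, self.candidates.get(key, 0.0))

    def promoted(self):
        """Return beliefs confident enough to enter the knowledge base."""
        return [k for k, c in self.candidates.items()
                if c >= self.promote_threshold]

store = BeliefStore()
store.observe("playsInstrument", "George_Harrison", "guitar", 0.95)
store.observe("playsInstrument", "George_Harrison", "sitar", 0.6)
print(store.promoted())  # only the high-confidence belief survives
```

The separation between low-confidence candidates and promoted beliefs mirrors how NELL "considers" extracted facts before committing to them.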

Overview of AI Libraries in Java

1. Introduction
In this article, we'll go over an overview of Artificial Intelligence (AI) libraries in Java. Since this article is about libraries, we'll not make any introduction to AI itself. AI is a very wide field, so we will be focusing on the most popular fields today, like Natural Language Processing, Machine Learning, Neural Networks and more.

2.1. Apache Jena is an open source Java framework for building semantic web and linked data applications from RDF data.
2.2. PowerLoom is a platform for the creation of intelligent, knowledge-based applications.
2.3. d3web is an open source reasoning engine for developing, testing and applying problem-solving knowledge onto a given problem situation, with many algorithms already included.
2.4. Eye is an open source reasoning engine for performing semi-backward reasoning.
2.5. Tweety is a collection of Java frameworks for logical aspects of AI and knowledge representation.
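The libraries listed above are all reasoning engines of one kind or another: they derive new facts from known facts and rules. A minimal forward-chaining sketch of that idea (illustrative only; this is not the API of Jena, PowerLoom, d3web, Eye, or Tweety):

```python
# Minimal sketch of what a rule-based reasoning engine does: apply if-then
# rules to a set of known facts until no new facts can be derived.
# The rules and fact names below are invented for illustration.

def forward_chain(facts, rules):
    """facts: set of atoms; rules: list of (premises, conclusion) pairs."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"has_fever", "has_cough"}, "suspect_flu"),
    ({"suspect_flu"}, "recommend_rest"),
]
derived = forward_chain({"has_fever", "has_cough"}, rules)
print(sorted(derived))
```

Note the chaining: the second rule fires only because the first rule's conclusion becomes a fact, which is the essence of the "problem-solving knowledge" engines such as d3web apply.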

Case-Based Reasoning

Case-based reasoning is one of the fastest-growing areas in the field of knowledge-based systems, and this book, authored by a leader in the field, is the first comprehensive text on the subject. Case-based reasoning systems are systems that store information about situations in their memory. As new problems arise, similar situations are searched out to help solve these problems. Problems are understood and inferences are made by finding the closest cases in memory, comparing and contrasting the problem with those cases, making inferences based on those comparisons, and asking questions when inferences can't be made. This book presents the state of the art in case-based reasoning, and is an excellent text for courses and tutorials on the subject.

Top 5 machine learning libraries for Java

The long AI winter is over. Instead of being a punchline, machine learning is one of the hottest skills in tech right now, and companies are scrambling to find enough programmers capable of coding for ML and deep learning. While no one programming language has won the dominant position, here are five of our top picks for ML libraries for Java.

Weka
It comes as no surprise that Weka is our number one pick for the best Java machine learning library. "Weka's strength lies in classification, so applications that require automatic classification of data can benefit from it, but it also supports clustering, association rule mining, time series prediction, feature selection, and anomaly detection," said Prof. Weka's collection of machine learning algorithms can be applied directly to a dataset or called from your own Java code.

SEE ALSO: Weka — An interface to a collection of machine learning algorithms in Java

Massive Online Analysis (MOA)
We're big fans of MOA here at JAXenter.com.

Deeplearning4j
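The case-based reasoning cycle described above hinges on one step: retrieving the stored case closest to the new problem. A minimal sketch, where the cases, feature names, and similarity measure are all invented for illustration:

```python
# Sketch of case-based retrieval: store past (problem, solution) cases,
# then solve a new problem by finding the most similar stored case.
# The troubleshooting domain below is an illustrative assumption.

def similarity(a, b):
    """Fraction of features on which two cases agree."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

def retrieve(case_base, problem):
    """Return the stored (case, solution) pair closest to the new problem."""
    return max(case_base, key=lambda cs: similarity(cs[0], problem))

case_base = [
    ({"symptom": "no_boot", "beeps": 3}, "reseat RAM"),
    ({"symptom": "no_boot", "beeps": 0}, "check power supply"),
    ({"symptom": "overheat", "beeps": 0}, "clean fans"),
]
case, solution = retrieve(case_base, {"symptom": "no_boot", "beeps": 3})
print(solution)  # prints: reseat RAM
```

A full CBR system would then adapt the retrieved solution to the new problem and store the outcome as a fresh case; this sketch covers only the retrieval step.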

The Process of Question Answering

The Juicer

The Juicer API
Documentation for the Juicer API is publicly available at this link. To use the API, you need an API key. If you're a BBC employee, you can obtain an API key by registering at the Developer Portal and requesting "bbcrd-juicer-apis-product" as the product you want to access. If you do not work for the BBC, you can request an API key by emailing newslabs@bbc.co.uk, provided you plan to use it for non-commercial purposes and agree to the terms and conditions listed below under "FAQs".

Web interfaces
We've built a beta web interface to the Juicer, which you can use if you are connected to a BBC network. We are working hard to give everyone more ways to play with the data available in the Juicer, including a public search interface, trending topics and more jaw-dropping visualisations.

FAQs
BBC Juicer: What it is and what it is NOT. BBC Juicer is a news aggregation 'pipeline'.
Which news sources did you include and why? We do NOT ingest content that is behind a paywall.

Conceptual dependency theory
From Wikipedia, the free encyclopedia

Conceptual dependency theory is a model of natural language understanding used in artificial intelligence systems. Roger Schank at Stanford University introduced the model in 1969, in the early days of artificial intelligence.[1] This model was extensively used by Schank's students at Yale University, such as Robert Wilensky, Wendy Lehnert, and Janet Kolodner. Schank developed the model to represent knowledge for natural language input into computers. The model uses the following basic representational tokens:[3] real-world objects, each with some attributes; real-world actions, each with attributes; times; and locations. A set of conceptual transitions then acts on this representation, e.g. an ATRANS is used to represent a transfer such as "give" or "take", while a PTRANS is used to act on locations such as "move" or "go". A sentence such as "John gave a book to Mary" is then represented as the action of an ATRANS on two real-world objects, John and Mary.
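The ATRANS example above can be sketched as a data structure. The slot names are illustrative assumptions, not Schank's exact notation:

```python
# Sketch of a conceptual-dependency representation for "John gave a book
# to Mary": the ATRANS primitive (transfer of an abstract relationship,
# here possession) acting on real-world objects. Slot names are invented
# for illustration.

def atrans(actor, obj, source, recipient):
    """Build an ATRANS: actor transfers possession of obj to recipient."""
    return {
        "primitive": "ATRANS",
        "actor": actor,
        "object": obj,
        "from": source,
        "to": recipient,
    }

gave = atrans(actor="John", obj="book", source="John", recipient="Mary")
print(gave["primitive"], gave["object"], gave["to"])
```

Note that "John took a book from Mary" would use the same ATRANS primitive with the `from`/`to` roles reversed, which is exactly the point of the theory: different verbs, one underlying conceptual structure.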

Reuters News Tracer: Filtering through the noise of social media | Reuters News Agency

With fake news challenging the veracity of news and the integrity of information, Reuters has developed a tool that combats the problem and provides its journalists anywhere from an 8- to 60-minute head start. Increasingly, events surface first on social media as people post what they're seeing, hearing and experiencing in the moment. With the proliferation of smartphones and social media, there are many more eyewitness accounts of many more events. However, it is the veracity of news and the integrity of information and sources that have been making headlines of their own lately.

Reuters News Tracer
Over a two-year period, Reuters and technology professionals have been developing a solution: Reuters News Tracer™. Harnessing the power of the crowd, Reuters News Tracer receives alerts that enable it to tap into worldwide eyewitness accounts, to see what's happening around the world.

Direct Memory Access Parsing (DMAP)

A Direct Memory Access Parser reads text and identifies the concepts in memory that the text refers to. It does this by matching phrasal patterns attached to those concepts (mops).

Attaching Phrases to Concepts
For example, suppose we wanted to read texts about economic arguments, as given by people such as Milton Friedman and Lester Thurow. The first thing we have to do is define concepts for those arguments, those economists, and for the event of economists presenting arguments. Next we have to attach to these concepts the phrases that are used to refer to them. More complex concepts, such as a change in an economic variable, or a communication about an event, require phrasal patterns. For example, the concept m-change-event has the role :variable, which can be filled by any m-variable, such as m-interest-rates.

The Concept Recognition Algorithm
From the Friedman example, we can see that we want the following kinds of events to occur:

Getting Output from DMAP
with-monitors is a macro.
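A toy sketch of the phrase-to-concept matching described above. The concept names follow the article's m- convention, but the phrase table and matching code are illustrative assumptions, not the original Lisp implementation:

```python
# Toy sketch of DMAP-style concept recognition: phrasal patterns are
# attached to concepts in memory, and the parser reports every concept
# whose attached phrase appears in the input text. A real DMAP also
# handles patterns with roles (e.g. :variable slots); this sketch only
# covers literal phrase matching.

PHRASES = {
    "m-friedman": ["milton friedman"],
    "m-interest-rates": ["interest rates"],
    "m-increase": ["rise", "go up", "increase"],
}

def recognize(text):
    """Return concepts whose attached phrases occur in the text."""
    text = text.lower()
    return sorted(concept for concept, phrases in PHRASES.items()
                  if any(p in text for p in phrases))

print(recognize("Milton Friedman expects interest rates to rise."))
# -> ['m-friedman', 'm-increase', 'm-interest-rates']
```

The next step in a real DMAP, not shown here, is composing these recognized concepts: seeing m-friedman together with a change in m-interest-rates should activate the larger concept of an economist presenting an argument.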

Understanding Quill: What we mean when we say Advanced NLG
Aug 02, 2017 | Andrew Paley

Narrative Science is in a bit of a funny situation: when it comes to helping people understand what we do, we don't just have to define our company, we have to define the definition. Why? From the outside, Natural Language Generation (NLG) can be somewhat difficult to understand.

Why NLG is different from NLP and NLU
For starters, there is often confusion with other "language" areas under the big tent of artificial intelligence, such as Natural Language Processing (NLP) or Natural Language Understanding (NLU). But even when you understand the general scope of NLG, there's still a lack of clarity about "machines that can write."

Understanding Basic NLG
When considering "machines that write," you could be forgiven for envisioning a Mad Libs experience. Basic NLG can be fine for certain use cases, such as templated reports and basic descriptions. And, beyond the functional roadblocks, one must ask: is Basic NLG really artificial intelligence?
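The "Mad Libs" style of Basic NLG described above amounts to filling named slots in a fixed template. A minimal sketch (the template and data are invented for illustration; this is not Quill, whose whole point is going beyond this):

```python
# Sketch of Basic (template-based) NLG: a fixed sentence skeleton with
# named slots filled from structured data. The sports example is an
# illustrative assumption.

def basic_nlg(template, data):
    """Fill named slots in a template string with values from data."""
    return template.format(**data)

template = "{team} beat {opponent} {score} on {day}."
data = {"team": "Leeds", "opponent": "Chelsea",
        "score": "2-1", "day": "Saturday"}
print(basic_nlg(template, data))
# -> Leeds beat Chelsea 2-1 on Saturday.
```

The limitation the article is driving at is visible here: every output has the same shape regardless of what the data means, which is why the author questions whether this style counts as artificial intelligence at all.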

Universal Networking Language

Universal Networking Language (UNL) is a declarative formal language specifically designed to represent semantic data extracted from natural language texts. It can be used as a pivot language in interlingual machine translation systems or as a knowledge representation language in information retrieval applications. In UNL, the information conveyed by natural language is represented sentence by sentence as a hypergraph composed of a set of directed binary labeled links between nodes or hypernodes. As an example, the English sentence "The sky was blue?!" can be represented in UNL as follows:

aoj(blue(icl>color).@entry.@past.@exclamation.@interrogative, sky(icl>natural world).@def)

In the example above, sky(icl>natural world) and blue(icl>color), which represent individual concepts, are the UWs; "aoj" (attribute of an object) is a directed binary semantic relation linking the two UWs; and "@def", "@interrogative", "@past", "@exclamation" and "@entry" are attributes modifying the UWs. UWs are expressed in natural language so as to be humanly readable.

Automated Journalism - AI Applications at New York Times, Reuters, and Other Media Giants

Artificial intelligence in news media is being used in new ways, from speeding up research to accumulating and cross-referencing data and beyond. In this article we discuss several examples in which AI is being integrated into the newsroom, and we'll aim to tackle the following question, among others, for our business and media industry readers: What new journalism tasks are made possible by AI? The following examples help to flesh out the directions that AI is taking in journalism, and the opportunities made available by its application. Before examining applications in each specific publication, we'll start with a high-level overview of the findings from this research.

An Overview of Findings in Automated Journalism
AI is enhancing the newsroom in the following ways:
Streamlining media workflows: AI enables journalists to focus on what they do best, reporting, as illustrated by BBC's Juicer.
Below are seven brief highlights of AI use-cases at popular news media outlets.

Reuters – Data Visualization

Universal Networking Language (UNL)

Universal Networking Language (UNL) is an interlingua developed by the UNDL Foundation. UNL takes the form of a semantic network to represent and exchange information; concepts and relations enable encapsulation of the meaning of sentences. In UNL, a sentence can be considered a hypergraph where each node is a concept and the links or arcs represent the relations between the concepts. UNL consists of Universal Words (UWs), relations, attributes, and a knowledge base.

Universal Words (UWs)
Universal Words are UNL words that carry knowledge or concepts. Examples: bucket(icl>container), water(icl>liquid)

Relations
Relations are labelled arcs that connect nodes (UWs) in the UNL graph. Example: agt ( break(agt>thing,obj>thing), John(iof>person) )

Attributes
Attributes are annotations used to represent grammatical categories, mood, aspect, etc.

Knowledge Base
The UNL Knowledge Base contains entries that define possible binary relations between UWs. Example: work(agt>human).
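The structure above (UWs as nodes, relations as labelled arcs) can be sketched as a small container type, using the agt example from the text. The class is an illustrative assumption; UNL itself is a textual interchange format, not a Python API:

```python
# Sketch of a UNL-style graph: labelled directed binary relations between
# Universal Words. The UNLGraph class is invented for illustration; the
# agt relation and the two UWs come from the example in the text.

class UNLGraph:
    def __init__(self):
        self.relations = []  # list of (label, source UW, target UW)

    def add(self, label, source, target):
        """Add a directed labelled arc from source UW to target UW."""
        self.relations.append((label, source, target))

    def serialize(self):
        """Render each relation in UNL's relation(source, target) form."""
        return [f"{label}({source}, {target})"
                for label, source, target in self.relations]

g = UNLGraph()
g.add("agt", "break(agt>thing,obj>thing)", "John(iof>person)")
print(g.serialize())
# -> ['agt(break(agt>thing,obj>thing), John(iof>person))']
```

A full sentence would add further relations (e.g. an obj arc for the thing broken) and attach attributes such as @past to the UWs, giving the hypergraph the text describes.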
