Knowledge retrieval
Knowledge retrieval seeks to return information in a structured form, consistent with human cognitive processes, as opposed to simple lists of data items. It draws on a range of fields including epistemology (theory of knowledge), cognitive psychology, cognitive neuroscience, logic and inference, machine learning and knowledge discovery, linguistics, and information technology.

Overview

In the field of retrieval systems, established approaches include:

Data Retrieval Systems (DRS), such as database management systems, which are well suited to the storage and retrieval of structured data.
Information Retrieval Systems (IRS), such as web search engines, which are very effective at finding relevant documents or web pages.

Both approaches require a user to read and analyze often long lists of data sets or documents in order to extract meaning. The goal of knowledge retrieval systems is to reduce the burden of those processes through improved search and representation.
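As a rough illustration of that contrast, here is a minimal, hypothetical Python sketch (the schema, documents, and answer triple are invented for illustration): a DRS answers exact structured queries, an IRS returns a list of matching documents for the user to read, and a knowledge retrieval system aims to return a structured answer directly.

```python
# A minimal sketch (hypothetical schema and data) contrasting data retrieval,
# information retrieval, and knowledge retrieval.
import sqlite3

# Data retrieval (DRS): a structured query over structured data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE papers (title TEXT, year INTEGER)")
conn.executemany("INSERT INTO papers VALUES (?, ?)",
                 [("Vector space models", 1975), ("PageRank", 1998)])
rows = conn.execute("SELECT title FROM papers WHERE year > 1990").fetchall()
print(rows)  # exact matches, but the user still interprets the list

# Information retrieval (IRS): keyword matching over unstructured text,
# returning a list of candidate documents for the user to read.
docs = {"doc1": "vector space models for retrieval",
        "doc2": "link analysis and ranking"}
query = "ranking"
hits = [d for d, text in docs.items() if query in text]
print(hits)

# Knowledge retrieval: aim for a direct, structured answer instead of a list,
# e.g. an (entity, relation, value) triple distilled from the sources.
answer = {"entity": "PageRank", "relation": "introduced_in", "value": 1998}
print(answer)
```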
Data warehouse
In computing, a data warehouse (DW or DWH), also known as an enterprise data warehouse (EDW), is a database used for reporting and data analysis. Integrating data from one or more disparate sources creates a central repository of data. Data warehouses store current and historical data and are used to create trending reports for senior management, such as annual and quarterly comparisons.

The data stored in the warehouse is uploaded from operational systems (such as marketing and sales). A data warehouse constructed from integrated data source systems does not require ETL, staging databases, or operational data store databases. A data mart is a small data warehouse focused on a specific area of interest. This definition of the data warehouse focuses on data storage.

Benefits of a data warehouse

A data warehouse maintains a copy of information from the source transaction systems.
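Where the sources are not already integrated, that copy is typically produced by an extract-transform-load (ETL) pass. A minimal sketch, assuming invented table names and columns:

```python
# A minimal ETL sketch (hypothetical table names and columns) showing how
# data from an operational system might be copied into a warehouse table.
import sqlite3

src = sqlite3.connect(":memory:")   # stands in for an operational system
dwh = sqlite3.connect(":memory:")   # stands in for the warehouse

src.execute("CREATE TABLE sales (sku TEXT, amount REAL, sold_on TEXT)")
src.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                [("A1", 10.0, "2024-01-05"), ("A1", 12.5, "2024-04-02")])

# Extract from the source, transform (aggregate by quarter), load into the DW.
dwh.execute("CREATE TABLE fact_sales (sku TEXT, quarter TEXT, total REAL)")
rows = src.execute("SELECT sku, amount, sold_on FROM sales").fetchall()
totals = {}
for sku, amount, sold_on in rows:
    quarter = f"{sold_on[:4]}-Q{(int(sold_on[5:7]) - 1) // 3 + 1}"
    totals[(sku, quarter)] = totals.get((sku, quarter), 0.0) + amount
dwh.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(s, q, t) for (s, q), t in totals.items()])

# The warehouse now supports the quarterly comparisons used in reporting.
print(dwh.execute("SELECT * FROM fact_sales ORDER BY quarter").fetchall())
```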
Data Mining
Data mining is the process of extracting and discovering patterns in large data sets, involving methods at the intersection of machine learning, statistics, and database systems.[1] It is an interdisciplinary subfield of computer science and statistics with the overall goal of extracting information (with intelligent methods) from a data set and transforming it into a comprehensible structure for further use.[1][2][3][4] Data mining is the analysis step of the "knowledge discovery in databases" (KDD) process.[5] Aside from the raw analysis step, it also involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating.[1]

Background

The manual extraction of patterns from data has occurred for centuries.
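As a toy illustration of automated pattern extraction (with invented transaction data), the following sketch counts item pairs that frequently co-occur, a simplified form of frequent-itemset mining:

```python
# A toy sketch of frequent-pattern mining (invented transaction data):
# count item pairs that co-occur in at least `min_support` transactions.
from collections import Counter
from itertools import combinations

transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"butter", "milk"},
    {"bread", "butter"},
]
min_support = 2

pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

frequent_pairs = {p: c for p, c in pair_counts.items() if c >= min_support}
print(frequent_pairs)
# {('bread', 'milk'): 2, ('bread', 'butter'): 2, ('butter', 'milk'): 2}
```

Real systems add the pre-processing, interestingness metrics, and post-processing steps listed above; this sketch shows only the core counting idea.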
What’s the law around aggregating news online? A Harvard Law report on the risks and the best practices
So much of the web is built around aggregation — gathering together interesting and useful things from around the Internet and presenting them in new ways to an audience. It's the foundation of blogging and social media. But it's also the subject of much legal debate, particularly among the news organizations whose material is often what's being gathered and presented.

Kimberley Isbell of our friends the Citizen Media Law Project has assembled a terrific white paper on the current state of the law surrounding aggregation — what courts have approved, what they haven't, and where the (many) grey areas still remain. This should be required reading for anyone interested in where aggregation and linking are headed. You can get the full version of the paper (with footnotes) here; I've added some links for context.

During the past decade, the Internet has become an important news source for most Americans. Section headings from the paper include:

What is a news aggregator?
Can they do that?
AFP v.
Associated Press v.
So is it legal?
Information retrieval
Information retrieval is the activity of obtaining information resources relevant to an information need from a collection of information resources. Searches can be based on metadata or on full-text (or other content-based) indexing. Automated information retrieval systems are used to reduce what has been called "information overload". Many universities and public libraries use IR systems to provide access to books, journals, and other documents. Web search engines are the most visible IR applications.

Overview

An information retrieval process begins when a user enters a query into the system. An object is an entity that is represented by information in a database. Most IR systems compute a numeric score for how well each object in the database matches the query, and rank the objects according to this value.

Model types

To retrieve relevant documents effectively, IR strategies typically transform the documents into a suitable representation.
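One common representation is a bag of words scored with TF-IDF. Here is a minimal sketch of the score-and-rank step described above (the documents and query are invented for illustration):

```python
# A minimal sketch of query scoring and ranking (invented documents):
# each document gets a TF-IDF-style score for the query, and results
# are returned best-first, as most IR systems do.
import math
from collections import Counter

docs = {
    "d1": "information retrieval systems rank documents",
    "d2": "libraries provide access to books and journals",
    "d3": "search engines are information retrieval applications",
}
tokenized = {d: text.split() for d, text in docs.items()}
N = len(docs)

def rank(query):
    terms = query.split()
    scores = {}
    for d, words in tokenized.items():
        tf = Counter(words)
        s = 0.0
        for t in terms:
            df = sum(1 for w in tokenized.values() if t in w)
            if df:  # term frequency weighted by inverse document frequency
                s += tf[t] * math.log(N / df)
        scores[d] = s
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank("information retrieval"))  # d1 and d3 outrank d2
```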
IBM - Knowledge Discovery and Data Mining
Knowledge Discovery and Data Mining (KDD) is an interdisciplinary area focusing upon methodologies for extracting useful knowledge from data. The ongoing rapid growth of online data due to the Internet and the widespread use of databases have created an immense need for KDD methodologies. The challenge of extracting knowledge from data draws upon research in statistics, databases, pattern recognition, machine learning, data visualization, optimization, and high-performance computing, to deliver advanced business intelligence and web discovery solutions. IBM Research has been at the forefront of this exciting new area from the very beginning. With the explosive growth of online data and IBM’s expansion of offerings in services and consulting, data-based solutions are increasingly crucial.
Museums and the Web 2010: Papers: Miller, E. and D. Wood, Recollection: Building Communities for Distributed Curation and Data Sharing
Background

The National Digital Information Infrastructure and Preservation Program (NDIIPP) at the Library of Congress is an initiative to develop a national strategy to collect, archive, and preserve the burgeoning amounts of digital content for current and future generations. It is based on an understanding that digital stewardship on a national scale depends on active cooperation between communities. The NDIIPP network of partners has collected a diverse array of digital content, including social science data sets; geospatial information; Web sites and blogs; e-journals; audiovisual materials; and digital government records. These diverse collections are held in the dispersed repositories and archival systems of over 130 partner institutions, where each organization collects, manages, and stores at-risk digital content according to what is most suitable for the industry or domain that it serves.
Systems Engineering
Systems engineering is an interdisciplinary field of engineering that focuses on how to design and manage complex engineering systems over their life cycles. Systems engineering techniques are used in complex projects: spacecraft design, computer chip design, robotics, software integration, and bridge building. It uses a host of tools, including modeling and simulation, requirements analysis, and scheduling, to manage complexity. The systems engineering process is a discovery process, quite unlike a manufacturing process.

History

The term systems engineering can be traced back to Bell Telephone Laboratories in the 1940s.[1] The need to identify and manipulate the properties of a system as a whole, which in complex engineering projects may greatly differ from the sum of the parts' properties, motivated various industries to apply the discipline.[2]

Concept

Systems engineering signifies only an approach and, more recently, a discipline in engineering.