Linked Open Data

National Libraries and a Museum open up their data using CC0

CC0 has been getting lots of love over the last couple of months in the realm of data, specifically GLAM data (GLAM as in Galleries, Libraries, Archives, Museums). The national libraries of Spain and Germany have released their bibliographic data under the CC0 public domain dedication tool. For those who don't know what that means: the libraries have waived all copyrights to the extent possible in their jurisdictions, placing the data effectively into the public domain. What's more, the data is available as linked open data, meaning the data sets are published as RDF (Resource Description Framework) on the web, which enables them to be linked with data from other sources.

"Open Data Stickers" / Copyright and related rights waived via CC0, by jwyg

The National Library of Spain teamed up with the Ontology Engineering Group (OEG) to create the data portal datos.bne.es.
How to return SPARQL results in JSON-LD?

Linked Data Platform Best Practices and Guidelines

2.1 Predicate URIs should be HTTP URLs

URIs are used to uniquely identify resources, and URLs are used to locate resources on the Web. That is to say, a URL is expected to resolve to an actual resource, which can be retrieved from the host. A URI, on the other hand, may also be a URL, but it does not have to be; it may refer to something that has no retrievable representation. One of the fundamental ideas behind Linked Data is that the things referred to by HTTP URIs can actually be looked up ("dereferenced"). It is also common practice to reuse properties from open vocabularies that are publicly available.

2.2 Use and include the predicate rdf:type to represent the concept of type in LDPRs

It is often very useful to know the type (class) of an LDPR, though it is not essential for working with the interaction capabilities that LDP offers.

Example 1: Representation of an LDPR with explicit declaration of rdf:type
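The body of Example 1 did not survive extraction here. As a rough sketch, an LDP RDF Source carrying an explicit rdf:type triple could look like the following Turtle; the resource URI and title are hypothetical:

```turtle
@prefix rdf:     <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix ldp:     <http://www.w3.org/ns/ldp#> .

# Hypothetical resource URI; any dereferenceable HTTP URI would do.
<http://example.org/container/resource1>
    rdf:type      ldp:RDFSource ;            # explicit type declaration
    dcterms:title "An example LDP RDF Source" .
```

A client that receives this representation can decide how to handle the resource from the rdf:type triple alone, without probing the server's interaction model.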
How to create and publish a SKOS taxonomy in 5 minutes · AKSW/OntoWiki Wiki

In a real-world case you would have deployed OntoWiki on a server reachable by some specific URL. After the following steps, the resources created in this example will be resolvable by accessing them with a browser, for example by visiting one of them directly. This means that all resources created in OntoWiki are automatically published.

The example taxonomy

Note: In a real-world case it would be better to reuse an existing product class.

Create the knowledge base

1. Open OntoWiki and log in as "Admin" or some other user that can create knowledge bases.
2. Go to Knowledge Bases -> Edit -> Create Knowledge Base.
3. Set the Knowledge Base URI.

Now you have several options:

Add classes and properties using dialogs

On the right you should see the window "Properties of Product".

Upload a file

Paste source
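The example taxonomy itself is truncated in this copy. A minimal SKOS sketch of a small product taxonomy, using a hypothetical example.org namespace and illustrative concept names, could look like:

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/taxonomy/> .

# Hypothetical concept scheme with one top concept and one narrower concept.
ex:ProductScheme a skos:ConceptScheme ;
    skos:prefLabel "Products"@en .

ex:Product a skos:Concept ;
    skos:prefLabel    "Product"@en ;
    skos:topConceptOf ex:ProductScheme .

ex:Software a skos:Concept ;
    skos:prefLabel "Software"@en ;
    skos:broader   ex:Product ;
    skos:inScheme  ex:ProductScheme .
```

A file like this could be pasted via the "Paste source" option or uploaded as a file when creating the knowledge base.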
Data on the Web Best Practices

Status of This Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. This early version of the document shows its expected scope and future direction. This document was published by the Data on the Web Best Practices Working Group as an Editor's Draft. Publication as an Editor's Draft does not imply endorsement by the W3C Membership. This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. This document is governed by the 1 August 2014 W3C Process Document.

Introduction

The best practices described below have been developed to encourage and enable the continued expansion of the Web as a medium for the exchange of data. In broad terms, data publishers aim to share data either openly or with controlled access. This document sets out a series of best practices that will help publishers and consumers face the new challenges and opportunities posed by data on the Web.

Context
The Quick Guide to GUIDs

Our world is numbered. Books have ISBNs and products have barcodes. Cars have VINs; even people have social security numbers. Numbers help us reference items unambiguously. "John Smith" may be many people, but Social Security Number 123-45-6789 refers to exactly one person. A GUID (globally unique identifier) is a bigger, badder version of this type of ID number. Whichever name you use, GUIDs or UUIDs are just gigantic ID numbers.

The Problem With Counting

"We don't need no stinkin' GUIDs," you may be thinking between gulps of Top Ramen, "I'll just use regular numbers and start counting up from 1." Sure, it sounds easy. But who does the counting? The problem with counting is that we want to create ID numbers without the management headache.

GUIDs to the Rescue

GUIDs are enormous numbers that are nearly guaranteed to be unique:

30dd879c-ee2f-11db-8314-0800200c9a66

The format is a well-defined sequence of 32 hex digits grouped into chunks of 8-4-4-4-12. Here's the thinking behind GUIDs:
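The grouping is easy to see in practice. Here is a small Python sketch using the standard-library uuid module; note that uuid4() generates a random version-4 UUID, whereas the sample value above happens to be a version-1 UUID:

```python
import uuid

# Generate a random (version 4) GUID/UUID: 32 hex digits in 8-4-4-4-12 groups.
guid = uuid.uuid4()
print(guid)

# Check the 8-4-4-4-12 grouping of the canonical string form.
groups = str(guid).split("-")
print([len(g) for g in groups])  # [8, 4, 4, 4, 12]
```

Because the value is drawn from a 128-bit space, two independently generated GUIDs are astronomically unlikely to collide, which is what makes decentralized ID creation work.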
Globally Unique Identifier (GUID)

A GUID identifies a person. In a URI, the GUID identifies the person who is associated with the data of the resource. For example, a URI can refer to the contacts (acquaintances) of the person whose GUID is 6677. Using the YQL Contacts Table, you specify that user with the guid key, as in this example:

SELECT * FROM social.contacts WHERE guid='6677'

An application can also obtain the GUID of the person who is running the application. The YQL Social Tables use the me variable to store the GUID of the user running the application:

SELECT * FROM social.contacts WHERE guid=me

GUIDs have the following characteristics: a GUID exists for every Yahoo ID and is never the same as the Yahoo ID. For syntax and other details, see the Introspective GUID section.
VoID

VoID (from "Vocabulary of Interlinked Datasets") is an RDF-based schema for describing linked datasets. With VoID, the discovery and usage of linked datasets can be performed both effectively and efficiently.

Overview

Basically, we find two classes at the heart of VoID. A dataset (void:Dataset) is a collection of data which is published and maintained by a single provider, available as RDF, and accessible, for example, through dereferenceable HTTP URIs or a SPARQL endpoint. A linkset (void:Linkset) is a collection of RDF links between two datasets.

Figure: the voiD interlinking concept.

The core resources of the VoID spec are as follows:

Using VoID

A simple VoID example that describes two well-known LOD datasets and their interlinking is shown in the following. Much more is possible with VoID, though.
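The example itself was lost in this copy. As a hedged sketch, two datasets and a linkset connecting them might be described like this in Turtle; the description URIs use a hypothetical example.org namespace (the DBpedia SPARQL endpoint URL is real):

```turtle
@prefix void:    <http://rdfs.org/ns/void#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix :        <http://example.org/void/> .

# Two well-known LOD datasets.
:DBpedia a void:Dataset ;
    dcterms:title       "DBpedia" ;
    void:sparqlEndpoint <http://dbpedia.org/sparql> .

:GeoNames a void:Dataset ;
    dcterms:title "GeoNames" .

# A linkset: owl:sameAs links pointing from DBpedia resources to GeoNames resources.
:DBpedia2GeoNames a void:Linkset ;
    void:subjectsTarget :DBpedia ;
    void:objectsTarget  :GeoNames ;
    void:linkPredicate  <http://www.w3.org/2002/07/owl#sameAs> .
```

The void:Linkset resource is what makes the interlinking explicit: a consumer can discover which predicate connects the two datasets without crawling either of them.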
Promote Your Content with Structured Data Markup - Structured Data

Google Search works hard to understand the content of a page. You can help by providing explicit clues about the meaning of a page to Google in the form of structured data on the page. Structured data is a standardized format for providing information about a page and classifying its content; on a recipe page, for example, it can state the ingredients, the cooking time and temperature, the calories, and so on. Google uses structured data that it finds on the web to understand the content of the page, as well as to gather information about the web and the world in general. A JSON-LD structured data snippet on a recipe page, for example, might describe the title of the recipe, the author of the recipe, and other details. Google Search also uses structured data to enable special search result features and enhancements. Structured data is coded using in-page markup on the page that the information applies to.

Structured data format

Structured data guidelines
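The snippet referenced in the passage above was lost in this copy. A representative JSON-LD recipe snippet of the kind described, normally embedded in a script element of type application/ld+json, might look like the following; the recipe name, author, and other values are illustrative:

```json
{
  "@context": "https://schema.org",
  "@type": "Recipe",
  "name": "Party Coffee Cake",
  "author": {
    "@type": "Person",
    "name": "Mary Stone"
  },
  "datePublished": "2018-03-10",
  "description": "This coffee cake is awesome and perfect for parties.",
  "prepTime": "PT20M"
}
```

The @type value tells Google which schema.org type the page describes, and the remaining properties supply the explicit clues (title, author, prep time) mentioned above.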
The New York Times Linked Open Data APIs: All the News That's Fit to printf() | ANTELOPE AS DOCUMENT

Continuing the legacy of the New York Times Index, which stretches back nearly to the founding of the newspaper, The New York Times and The New York Times Company Research & Development Lab have adopted Linked Open Data to maintain and share the newspaper's extensive holdings. The New York Times' suite of Linked Open Data datasets, tools, and APIs is based in large part on the newspaper's 150-year-old controlled vocabulary, which was released as 10,000 SKOS subject headings in January 2010. The New York Times publicizes these projects through its blog, Open: All the News That's Fit to printf(), and through social media. In addition to creating prototype tools such as Who Went Where, The New York Times also promotes the use of its APIs and the source code of its tools. Open has been a regularly updated blog since 2007, when the New York Times Company began its foray into the use and promotion of open source software.

The Dataset

The APIs

Who Went Where Tool

For more information visit:
New York Times - Linked Open Data

sameAs

How to publish Linked Data on the Web

This document provides a tutorial on how to publish Linked Data on the Web. After a general overview of the concept of Linked Data, we describe several practical recipes for publishing information as Linked Data on the Web.

This tutorial has been superseded by the book Linked Data: Evolving the Web into a Global Data Space, written by Tom Heath and Christian Bizer. This tutorial was published in 2007 and remains online for historical reasons. The Linked Data book was published in 2011 and provides a more detailed and up-to-date introduction to Linked Data.

The goal of Linked Data is to enable people to share structured data on the Web as easily as they can share documents today. The term Linked Data was coined by Tim Berners-Lee in his Linked Data Web architecture note. Applying the Linked Data principles leads to the creation of a data commons on the Web, a space where people and organizations can post and consume data about anything. This chapter describes the basic principles of Linked Data.