FOAF-a-matic — Describe yourself in RDF
Written by Leigh Dodds. Translated by Leandro Mariano López.

Introduction
FOAF-a-matic is a simple JavaScript application that lets you create a FOAF ("Friend-of-a-Friend") description of yourself. In short, FOAF is a way of describing yourself (your name, email address, and the people you are friends with) using XML and RDF. FOAF-a-matic gives you a quick and easy way to create your own FOAF description.

Note: none of the information supplied on this page is used or stored in any way. If you have comments about this application, or other questions about FOAF, why not join the RDFWeb-dev mailing list?

Update: I am currently writing FOAF-a-Matic Mark 2, a desktop application for creating and managing your FOAF data.

People You Know
Tell FOAF-a-matic about the people you know.

Generate Results
Now that you have filled in the details, you are ready to convert them to FOAF...

What Next?
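The description FOAF-a-matic builds for you is a small RDF document. As an illustrative sketch (the name and address below are invented, and the tool itself emits RDF/XML rather than the Turtle shorthand shown here), a minimal FOAF description looks like this:

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# The person being described
[] a foaf:Person ;
   foaf:name  "Jane Example" ;
   foaf:mbox  <mailto:jane@example.org> ;
   # One entry per friend listed in the "People You Know" section
   foaf:knows [ a foaf:Person ; foaf:name "John Friend" ] .
```

foaf:name, foaf:mbox and foaf:knows are standard terms from the FOAF vocabulary.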
D2R Server – Publishing Relational Databases on the Semantic Web
D2R Server is a tool for publishing relational databases on the Semantic Web, a global information space consisting of Linked Data. It enables RDF and HTML browsers to navigate the content of the database, and allows the database to be queried using the SPARQL query language. It is part of the D2RQ Platform. Data on the Semantic Web is modelled and represented in RDF; requests from the Web are rewritten into SQL queries via the mapping.

Browsing database contents
A simple web interface allows navigation through the database's contents and gives users of the RDF data a "human-readable" preview.

Resolvable URIs
Following the Linked Data principles, D2R Server assigns a URI to each entity that is described in the database, and makes those URIs resolvable – that is, an RDF description can be retrieved simply by accessing the entity's URI over the Web.

Content negotiation

SPARQL endpoint and explorer
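Such an endpoint can then be queried with SPARQL like any other. A sketch of what a query might look like, assuming a D2RQ mapping that exposes a papers table (the vocabulary URI and property names here are hypothetical; they depend entirely on the mapping):

```sparql
PREFIX vocab: <http://localhost:2020/vocab/resource/>

SELECT ?paper ?title
WHERE {
  ?paper a vocab:papers ;
         vocab:papers_title ?title .
}
LIMIT 10
```

D2R Server rewrites a query of this shape into SQL against the underlying database via the mapping.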
Open Data
An introductory overview of Linked Open Data in the context of cultural institutions. Clear labeling of the licensing terms is a key component of open data, and icons like the one pictured here are being used for that purpose.

Overview
The concept of open data is not new, but a formalized definition is relatively new. The primary such formalization is the Open Definition, which can be summarized in the statement that "A piece of data is open if anyone is free to use, reuse, and redistribute it — subject only, at most, to the requirement to attribute and/or share-alike."

Open data is often focused on non-textual material such as maps, genomes, connectomes, chemical compounds, mathematical and scientific formulae, medical data and practice, bioscience and biodiversity.

A typical depiction of the need for open data: creators of data often do not consider the need to state the conditions of ownership, licensing and re-use ("I want my data back").

Closed data
Derrick de Kerckhove
Derrick de Kerckhove (born 1944) is the author of The Skin of Culture and Connected Intelligence and Professor in the Department of French at the University of Toronto, Canada. He was the Director of the McLuhan Program in Culture and Technology from 1983 until 2008. In January 2007, he returned to Italy for the project and Fellowship "Rientro dei cervelli", in the Faculty of Sociology at the University of Naples Federico II, where he teaches "Sociologia della cultura digitale" and "Marketing e nuovi media".

Background
De Kerckhove received his PhD in French Language and Literature from the University of Toronto in 1975 and a Doctorat du 3e cycle in Sociology of Art from the University of Tours (France) in 1979.

Publications
He edited Understanding 1984 (UNESCO, 1984) and co-edited, with Amilcare Iannucci, McLuhan e la metamorfosi dell'uomo (Bulzoni, 1984), two collections of essays on McLuhan, culture, technology and biology.
Science Commons
The more we understand about science and its complexities, the more important it is for scientific data to be shared openly. It's not useful to have ten different labs doing the same research and not sharing their results; likewise, we're much more likely to be able to pinpoint diseases if we have genomic data from a large pool of individuals. Since 2004, we've been focusing our efforts on expanding the use of Creative Commons licenses to scientific and technical research.

Science Advisory Board
Open Access
The Scholars' Copyright Project

Creative Commons plays an instrumental role in the Open Access movement, which is making scholarly research and journals more widely available on the Web. We're also expanding Open Access to research institutions: we've created policy briefings and guidelines to help institutions implement Open Access within their frameworks.

Open Data
At Creative Commons, we believe scientific data should be freely available to everyone. Learn more
About
DBpedia is a crowd-sourced community effort to extract structured information from Wikipedia and make this information available on the Web. DBpedia allows you to ask sophisticated queries against Wikipedia, and to link the different data sets on the Web to Wikipedia data. We hope that this work will make it easier for the huge amount of information in Wikipedia to be used in new and interesting ways. Furthermore, it might inspire new mechanisms for navigating, linking, and improving the encyclopedia itself.

Upcoming Events

News
Call for Ideas and Mentors for GSoC 2014: DBpedia + Spotlight joint proposal (please contribute within the next few days). We have started to draft a document for submission to Google Summer of Code 2014 and are still in need of ideas and mentors.

The DBpedia Knowledge Base
Knowledge bases are playing an increasingly important role in enhancing the intelligence of Web and enterprise search and in supporting information integration.
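A "sophisticated query against Wikipedia" of the kind mentioned above can be posed in SPARQL against DBpedia. A sketch (the property names follow the DBpedia ontology, but the exact terms available vary between dataset releases):

```sparql
PREFIX dbo:  <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?city ?label ?population
WHERE {
  ?city a dbo:City ;
        rdfs:label ?label ;
        dbo:populationTotal ?population .
  FILTER (lang(?label) = "en")
}
ORDER BY DESC(?population)
LIMIT 10
```

This asks for the ten most populous cities described in DBpedia, together with their English labels — information extracted from Wikipedia infoboxes.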
LinkedData - ESW Wiki
LinkedData is to spreadsheets and databases what the Web of hypertext documents is to word processor files.

Use URIs as names for things.
Use HTTP URIs so that people can look up those names.
When someone looks up a URI, provide useful information.
Include links to other URIs, so that they can discover more things.

Linked Data Presentations:
Writings:
Workshop Series about Linked Data at the WWW conferences
Other Workshops about Linked Data
1st International Workshop on Consuming Linked Data (COLD 2010) at ISWC 2010
Community:
Examples of Linked Data: See DataSets
Client side tools:
Server side tools:
dbview.py by DanConnolly, Rob Crowell and TimBL
Virtuoso - "Sponger" component of Virtuoso's SPARQL Engine, RDF Views of SQL, and the HTTP engine's Linked Data Deployment features
D2R Server
P2R - expose Prolog knowledge base as linked data (when bundled with UriSpace)
SPARQL2XQuery - Bridging the Gap between the XML and the Semantic Web Worlds.
Live Demos:
Meetups:
How to publish Linked Data on the Web
This document provides a tutorial on how to publish Linked Data on the Web. After a general overview of the concept of Linked Data, we describe several practical recipes for publishing information as Linked Data on the Web.

This tutorial has been superseded by the book Linked Data: Evolving the Web into a Global Data Space, written by Tom Heath and Christian Bizer. This tutorial was published in 2007 and remains online for historical reasons. The Linked Data book was published in 2011 and provides a more detailed and up-to-date introduction to Linked Data.

The goal of Linked Data is to enable people to share structured data on the Web as easily as they can share documents today. The term Linked Data was coined by Tim Berners-Lee in his Linked Data Web architecture note. Applying both principles leads to the creation of a data commons on the Web, a space where people and organizations can post and consume data about anything. This chapter describes the basic principles of Linked Data.
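The data commons described above depends on datasets setting RDF links into one another. A minimal sketch in Turtle (all URIs below are hypothetical):

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .

# Published at http://example.org/people/alice, so the URI is resolvable
<http://example.org/people/alice>
    a foaf:Person ;
    foaf:name "Alice" ;
    # An RDF link asserting that another dataset describes the same person
    owl:sameAs <http://other.example.net/id/alice> .
```

A client that looks up the first URI obtains this description, and by following the owl:sameAs link can discover further data about the same person published elsewhere.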
Seven rules of successful research data management in universities | Higher Education Network | Guardian Professional
The availability of research data – the digital data or analogue sources that underpin research findings – is high on the agenda of higher education policy makers, funders and researchers committed to open practice. Sound research rests on the ability to evidence, verify and reproduce results. If this sounds obvious, the practice of making research data available is surprisingly limited. Take the recent case of the 2010 Reinhart-Rogoff paper on economic growth, which was found to contain errors and to exclude some data, in ways that significantly undermined its results. The drivers for greater research data availability are not just to do with verifying results and uncovering errors. Let's be clear, though: not all research data can or should be made openly available. Over the last two years, Jisc's Managing Research Data (MRD) programme has run a set of 17 projects to pilot research data management services in universities.

1) Understand how your institution deals with research data
Linked Data - Design Issues
Up to Design Issues

The Semantic Web isn't just about putting data on the web. It is about making links, so that a person or machine can explore the web of data. With linked data, when you have some of it, you can find other, related, data. Like the web of hypertext, the web of data is constructed with documents on the web. However, unlike the web of hypertext, where links are relationships between anchors in hypertext documents written in HTML, for data the links are between arbitrary things described by RDF.

Use URIs as names for things.
Use HTTP URIs so that people can look up those names.
When someone looks up a URI, provide useful information.
Include links to other URIs, so that they can discover more things.

Simple.

The four rules
I'll refer to the steps above as rules, but they are expectations of behavior. The first rule, to identify things with URIs, is pretty much understood by most people doing semantic web technology. The second rule, to use HTTP URIs, is also widely understood. The basic format here is RDF/XML, with its popular alternative serialization N3 (or Turtle).

Basic web look-up
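As a small illustration of these rules in the N3/Turtle serialization, here is a description whose subject is named by a resolvable HTTP URI and which links out to another dataset (the example.org URI is hypothetical):

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# An HTTP URI names the thing (rules 1 and 2); looking it up
# should return useful RDF such as the triples below (rule 3).
<http://example.org/id/tim>
    a foaf:Person ;
    foaf:name "Tim" ;
    # A link to another URI, so a client can discover more things (rule 4)
    foaf:based_near <http://dbpedia.org/resource/Boston> .
```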