
API:Main page

Hook into Wikipedia using Java and the MediaWiki API | Integrating Stuff The MediaWiki API makes it possible for web developers to access, search and integrate all Wikipedia content into their applications. Given that Wikipedia is the ultimate online encyclopedia, there are dozens of use cases in which this might be useful. I used to post a lot of articles about using the web service APIs of third-party sites on this blog. The Wikipedia API makes it possible to interact with Wikipedia/MediaWiki through a web service instead of the normal browser-based web interface. We cover a basic use case: getting the contents of the “Web service” article. To fetch the contents of this article, the following URL suffices: A request to this URL will return an XML document which includes the current wiki markup for the page titled “Web service”. We are not going to construct these URLs ourselves. If you are using Maven, you need to add the following repository to your pom: together with the following dependency: and if you want the add-ons:
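
The Maven client library the post goes on to configure is not shown in the excerpt, but purely as an illustration of the kind of raw request it wraps, a plain java.net.http call like the following would fetch the same XML. This is a minimal sketch, assuming the standard English Wikipedia api.php endpoint and the usual query/revisions parameters; it is not the library discussed in the article.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class WikipediaFetch {
    public static void main(String[] args) throws Exception {
        // Ask for the current wikitext of the "Web service" article as XML.
        String title = URLEncoder.encode("Web service", StandardCharsets.UTF_8);
        String url = "https://en.wikipedia.org/w/api.php"
                + "?action=query&prop=revisions&rvprop=content"
                + "&format=xml&titles=" + title;

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("User-Agent", "example-client/0.1") // identify yourself to the API
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // The body is an XML document containing the page's current wiki markup.
        System.out.println(response.body());
    }
}
```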

Ubuntu Hardy Heron (Ubuntu 8.04 LTS) Server Version 1.0 Author: Falko Timme <ft [at] falkotimme [dot] com> Last edited 04/24/2008 This tutorial shows how to set up an Ubuntu Hardy Heron (Ubuntu 8.04 LTS) based server that offers all services needed by ISPs and hosters: Apache web server (SSL-capable), Postfix mail server with SMTP-AUTH and TLS, BIND DNS server, Proftpd FTP server, MySQL server, Courier POP3/IMAP, Quota, Firewall, etc. This tutorial is written for the 32-bit version of Ubuntu 8.04 LTS, but should apply to the 64-bit version with very few modifications as well. I will use the following software: Web Server: Apache 2.2 with PHP 5.2.4 and Ruby; Database Server: MySQL 5.0; Mail Server: Postfix; DNS Server: BIND9; FTP Server: proftpd; POP3/IMAP: I will use Maildir format and therefore install Courier-POP3/Courier-IMAP; Webalizer for web site statistics. I want to say first that this is not the only way of setting up such a system. 1 Requirements To install such a system you will need the following: 2 Preliminary Note

Working With the "One-Second" Rule What is the "One-Second Rule?" The following condition in the Amazon Web Services license agreement often causes confusion or concern: You may make calls at any time that the Amazon Web Services are available, provided that you [...] do not exceed 1 call per second per IP address [...] Without the "one-second rule," Amazon's servers would be overwhelmed and unable to keep up with the demand on them. What, Me Worry? Often developers worry about what will happen if they occasionally make more than one query per second, so they design complicated systems to prevent their programs from ever making two calls less than a second apart. What Happens When You Exceed One Call Per Second? What happens when you regularly exceed the "one call per second" limit? How Can I Download Everything? Many affiliate programs provide data feeds. Caching A2S Results You can cache the information so it doesn't have to be downloaded as often. Simple Cache If there is a result in the database, it looks at the timestamp. As a rough sketch of that idea, see the Java example below.
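
The article's own database-backed code is not reproduced in the excerpt, but the "simple cache" it describes (keep a timestamp with each stored response and only re-query Amazon when the entry has gone stale) looks roughly like the in-memory Java sketch below. The 24-hour freshness window and the callAmazon placeholder are assumptions for illustration; a real implementation would persist entries in a database as the text suggests.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Minimal illustration of a timestamp-based cache for A2S/ECS responses. */
public class SimpleA2SCache {
    private record Entry(String xml, Instant fetchedAt) {}

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final Duration maxAge = Duration.ofHours(24); // assumed freshness window

    public String lookup(String requestKey) {
        Entry cached = cache.get(requestKey);
        if (cached != null && cached.fetchedAt().isAfter(Instant.now().minus(maxAge))) {
            return cached.xml();                     // still fresh: no API call needed
        }
        String xml = callAmazon(requestKey);         // stale or missing: hit the API...
        cache.put(requestKey, new Entry(xml, Instant.now())); // ...and refresh the timestamp
        return xml;
    }

    private String callAmazon(String requestKey) {
        // Placeholder for the real Amazon request; illustration only.
        return "<ItemLookupResponse/>";
    }
}
```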

Database download Wikipedia offers free copies of all available content to interested users. These databases can be used for mirroring, personal use, informal backups, offline use or database queries (such as for Wikipedia:Maintenance). All text content is multi-licensed under the Creative Commons Attribution-ShareAlike 3.0 License (CC-BY-SA) and the GNU Free Documentation License (GFDL). Images and other files are available under different terms, as detailed on their description pages. Where do I get... English-language Wikipedia Dumps from any Wikimedia Foundation project: Wikipedia dumps in SQL and XML: – Current revisions only, no talk or user pages. Other languages In the directory you will find the latest SQL and XML dumps for the projects, not just English. Some other directories (e.g. simple, nostalgia) exist, with the same structure. Dealing with compressed files
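
On the compressed files: the XML dumps are large bzip2 archives, and a common approach is to stream them rather than decompress to disk first. The following is a minimal Java sketch of that, assuming the Apache Commons Compress library is on the classpath and using an illustrative dump file name.

```java
import java.io.BufferedInputStream;
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import org.apache.commons.compress.compressors.bzip2.BZip2CompressorInputStream;

public class DumpReader {
    public static void main(String[] args) throws Exception {
        // Stream a pages-articles dump without unpacking the whole .bz2 archive.
        String dump = "enwiki-latest-pages-articles.xml.bz2"; // example file name
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                new BZip2CompressorInputStream(
                        new BufferedInputStream(new FileInputStream(dump))),
                StandardCharsets.UTF_8))) {
            reader.lines()
                  .filter(line -> line.contains("<title>")) // crude peek at page titles
                  .limit(10)
                  .forEach(System.out::println);
        }
    }
}
```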

Introduction Adapted from Explaining OAuth, published on September 05, 2007 by Eran Hammer-Lahav A Little Bit of History OAuth started around November 2006, while Blaine Cook was working on the Twitter OpenID implementation. In April 2007, a Google group was created with a small group of implementers to write a proposal for an open protocol. What is it For? Many luxury cars today come with a valet key. Every day new websites launch offering services which tie together functionality from other sites. This is the problem OAuth solves. OAuth and OpenID OAuth is not an OpenID extension and, at the specification level, shares only a few things with OpenID – some common authors and the fact that both are open specifications in the realm of authentication and access control. Who is Going to Use it? Everyone. Is OAuth a New Concept? No. An area where OAuth is more evolved than some of the other protocols and services is its direct handling of non-website services. Is It Ready?

Shahzad Bhatti » Blog Archive » Working with Amazon Web Services I started at Amazon last year, but didn't actually get a chance to work with them until recently, when we had to integrate with Amazon Ecommerce Service (ECS). Amazon Web Services come in two flavors: REST and SOAP. According to inside sources, about 70% use REST. I also found that the REST interface was more reliable and simple. Getting Access ID First, visit I will describe ECS here; it comes with 450 pages of documentation, though most of it just describes URLs and input/output fields. Other interesting links include: the blog site for updates on AWS, Forum #1, Forum #2 and the FAQ. Services Inside ECS, you will find the following services: ItemSearch, BrowseNodeLookup, CustomerContentLookup, ItemLookup, ListLookup, SellerLookup, SellerListingLookup, SimilarityLookup, TransactionLookup. REST Approach The REST approach is pretty simple; in fact, you can simply type the following URL into your browser (with your access key) and you will see the results (in XML) right away: Find DVD cover art:
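
The exact request URL from the post is not preserved in the excerpt, but an ECS REST call of that era had roughly the shape sketched below. The endpoint, parameter set, and placeholder access key are illustrative assumptions (and later versions of the API additionally required request signing), so treat this as a sketch rather than the post's own example.

```java
public class EcsRestExample {
    public static void main(String[] args) {
        // Illustrative shape of an ECS REST ItemSearch request; plug in your own access key.
        String accessKey = "YOUR-ACCESS-KEY-ID";  // placeholder
        String url = "http://webservices.amazon.com/onca/xml"
                + "?Service=AWSECommerceService"
                + "&AWSAccessKeyId=" + accessKey
                + "&Operation=ItemSearch"
                + "&SearchIndex=DVD"
                + "&Keywords=star+wars"
                + "&ResponseGroup=Images,Small";  // the Images group carries cover-art URLs
        System.out.println(url); // paste into a browser to see the XML response
    }
}
```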

Ways to process and use Wikipedia dumps – Prashanth Ellina Wikipedia is a superb resource for reference (taken with a pinch of salt of course). I spend hours at a time spidering through its pages and always come away amazed at how much information it hosts. In my opinion this ranks amongst the defining milestones of mankind's advancement. Apart from being available through the website, the data is provided for download so that you can create a mirror locally for quicker access. Setting up a local copy of Wikipedia Windows If you have Windows installed, Webaroo is an easy way to get Wikipedia locally as a "web pack". Linux This page has instructions for setting it up on Linux. Any operating system Wikipedia provides static wiki dumps for download which should work fine on any operating system that supports a decent web browser. Windows Mobile, iPhone and Blackberry To access Wikipedia from your mobile, check out vTap from Veveo. Other uses for Wikipedia data dumps Getting the dumps Wikipedia is huge and this reflects in the data dumps.

plaintxt.org

The unofficial homepage of Tim Dwyer I have a new position: Senior Lecturer and Larkins Fellow at Monash University, Australia. Dissertations Tim Dwyer (2005): "Two and a Half Dimensional Visualisation of Relational Networks", PhD Thesis, The University of Sydney. (23MB pdf) Tim Dwyer (2001): "Three Dimensional UML using Force Directed Layout", Honours Thesis, The University of Melbourne (TR download) Technical Reports T. T.

Screengrab! :: Firefox Add-ons

API:Query The action=query module allows you to get most of the data stored in a wiki, including tokens for editing. The query module has many submodules (called query modules), each with a different function. There are three types of query modules: meta information about the wiki and the logged-in user; properties of pages, including page revisions and content; and lists of pages that match certain criteria. Multiple modules should be used together to get what you need in one request, e.g. prop=info|revisions&list=backlinks|embeddedin|imagelinks&meta=userinfo is a call to six modules in one request. Unlike meta and list modules, all property modules work on a set of pages provided with either titles, pageids, revids, or generator parameters. Use generator if you want to get data about pages that are the result of another API call. Lastly, you should always request the new "continue" syntax to iterate over results. Sample query api.php? Specifying pages
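
As a small illustration of combining query modules in one request, the Java sketch below issues a trimmed-down version of the call quoted above: the prop and meta modules plus the empty continue parameter. The list modules are omitted here because each also needs its own target parameter (for example bltitle for backlinks); the example page title and format=json are assumptions for illustration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class QueryModulesExample {
    public static void main(String[] args) throws Exception {
        // One request driving several query modules at once.
        String url = "https://en.wikipedia.org/w/api.php"
                + "?action=query"
                + "&prop=info|revisions"     // two property modules
                + "&meta=userinfo"           // one meta module
                + "&titles=Web%20service"    // the page set the prop modules work on
                + "&continue="               // opt in to the new continuation syntax
                + "&format=json";

        HttpClient client = HttpClient.newHttpClient();
        HttpResponse<String> response = client.send(
                HttpRequest.newBuilder(URI.create(url)).build(),
                HttpResponse.BodyHandlers.ofString());

        // If the response contains a "continue" object, re-issue the request with
        // those extra parameters appended to page through the remaining results.
        System.out.println(response.body());
    }
}
```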

How to Get Alerted When Somebody Has Dugg your Article :: the How-To Geek Digg.com is the absolute biggest source of traffic that most content authors are ever going to see. The "Digg Effect" can cripple your site within an hour, so it's nice to know if somebody has submitted one of your articles to Digg. Here's a quick and dirty trick for setting up an alert. First, go to the Digg Search page and type the base URL of your site into the search form. Make sure that you've selected "URL Only" and "Upcoming Stories". Looks like I don't have any upcoming stories… but if you look over on the right, there's an RSS icon! Subscribe to the RSS feed for this search, and your RSS reader will let you know when you have been dugg, before it ever gets to the front page. If you only want to be alerted when you get to the front page, you can change the search to "Front Page Stories" and subscribe to that feed instead. Enjoy!

DataMachine - jwpl - Documentation of the JWPL DataMachine - Java-based Wikipedia Library -- An application programming interface for Wikipedia Learn about the different ways to get JWPL and choose the one that is right for you! (You might want to get fatjars with built-in dependencies instead of the download package on Google Code.) Download the Wikipedia data from the Wikimedia Download Site. You need three files: [LANGCODE]wiki-[DATE]-pages-articles.xml.bz2 (or [LANGCODE]wiki-[DATE]-pages-meta-current.xml.bz2), [LANGCODE]wiki-[DATE]-pagelinks.sql.gz, and [LANGCODE]wiki-[DATE]-categorylinks.sql.gz. Note: If you want to add discussion pages to the database, use [LANGCODE]wiki-[DATE]-pages-meta-current.xml.bz2; otherwise [LANGCODE]wiki-[DATE]-pages-articles.xml.bz2 suffices. Example Transformation Commands (Note: increase heap space for large Wikipedia versions with the -Xmx flag.) Mind that the names of the main category or the category marking disambiguation pages may change over time. Discussion Pages Discussion pages can only be included if the source file contains these pages (see above).
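
The transformation commands themselves are not preserved in this excerpt. As a rough sketch of their shape only, running the DataMachine fatjar on an English dump directory looks something like the line below; the jar name, argument order, and the category names are assumptions to be checked against the JWPL documentation, and the -Xmx value is simply an example of the heap-space note above.

```
java -Xmx4g -jar jwpl-datamachine-<version>-jar-with-dependencies.jar \
    english Contents Disambiguation_pages /path/to/dump/directory
```

Here the four arguments stand for the language, the name of the main category, the name of the category marking disambiguation pages, and the directory containing the three downloaded files.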

Useful Glossaries For Web Designers and Developers In this day and age, where there are just as many freelancers as there are university-educated designers, developers, and all-around web gurus, it is amazing to me how much many of us don't know or have forgotten about our trade. As a self-taught designer, I will admit to you upfront that there is a lot I don't know when it comes to official jargon or certain aspects of things like typography and graphic design. It is for these reasons that I call upon glossaries from time to time. These glossaries are also especially useful for those of you who are just getting started in the online business world. By understanding the basics of the core materials that make up whatever it is you are getting into, you will be able to have a better understanding of what's going on in your industry, as well as be able to learn faster. But glossaries aren't just for brushing up on old terms or for calling upon while you learn new things. Typography Glossaries Usability, UX and IA Glossaries SEO Glossaries
