Data dumps. The Wikimedia Foundation is requesting help to ensure that as many copies as possible of all Wikimedia database dumps remain available. Please volunteer to host a mirror if you have access to sufficient storage and bandwidth.
Description: WMF publishes data dumps of Wikipedia and all other WMF projects on a regular basis.
Content: stub-prefixed dumps for some projects, which contain only header information for pages and revisions without the actual page text; media bundles for each project, separated into files uploaded to the project itself and files from Commons; static HTML dumps for 2007-2008.
Download: you can download the latest dumps (for the last year) here (separate dumps exist for English Wikipedia, German Wikipedia, etc.). Archives:
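A hedged sketch of grabbing one of these dumps with PHP: dumps.wikimedia.org is the standard download location, but the exact filename below simply follows the common naming pattern and should be checked against the dump index first.

<?php
// Download the English Wikipedia stub dump (page and revision
// headers only, no article text). Requires allow_url_fopen.
// Filename is illustrative; confirm it on the dump index page.
$url  = 'https://dumps.wikimedia.org/enwiki/latest/'
      . 'enwiki-latest-stub-meta-current.xml.gz';
$dest = 'enwiki-latest-stub-meta-current.xml.gz';

// copy() streams the remote file to disk rather than buffering it all in memory.
copy( $url, $dest );

echo 'Saved ' . filesize( $dest ) . " bytes to $dest\n";
?>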
RockMelt - Your Browser. Re-imagined. Comparativa de wiki (a comparison of wikis). Dino Open Laboratory (ディノオープンラボラトリ) - technical notes by the staff of Dino Co., Ltd. Who Writes Wikipedia? I first met Jimbo Wales, the face of Wikipedia, when he came to speak at Stanford. Wales told us about Wikipedia’s history, technology, and culture, but one thing he said stands out. “The idea that a lot of people have of Wikipedia,” he noted, “is that it’s some emergent phenomenon — the wisdom of mobs, swarm intelligence, that sort of thing — thousands and thousands of individual users each adding a little bit of content and out of this emerges a coherent body of work.” But, he insisted, the truth was rather different: Wikipedia was actually written by “a community … a dedicated group of a few hundred volunteers” where “I know all of them and they all know each other”. Really, “it’s much like any traditional organization.” The difference, of course, is crucial. So did the Gang of 500 actually write Wikipedia? Stanford wasn’t the only place he has made this claim; it’s part of the standard talk he gives all over the world. At Stanford, the students were skeptical. Excellent article.
Dossiers technopédagogiques. This article describes and promotes pedagogical experiments with a free/open technology called a "wiki". An intensely simple, accessible, and collaborative hypertext tool, wiki software challenges and complicates traditional notions of authorship, editing, and publishing, as well as access to them. Usurping official authorizing practices in the public domain raises fundamental, if not radical, questions for both academic theory and pedagogical practice. The particular pedagogical challenge is one of control: wikis work most effectively when students can assert meaningful autonomy over the process. (The French abstract makes the same point: this article describes and promotes pedagogical experiments that use a free and open technology known as a "wiki"; the main pedagogical challenge is one of control, since wikis are most effective when students can exercise greater autonomy over the process.)
PHP tip: How to strip HTML tags, scripts, and styles from a web page. Technologies: PHP 4.3+, UTF-8. The HTML tags on a web page must be stripped away to get clean text for a PHP search engine, keyword extractor, or other page-analysis tool. PHP's standard strip_tags() function does part of the job, but you need to strip out styles, scripts, embedded objects, and other unwanted page code first. This tip shows how. This article is both an independent article and part of an article series on how to extract keywords from a web page. Code: PHP's handy strip_tags() function removes HTML tags that look like <word...>, <word.../>, or </word>, but it leaves behind whatever text those tags enclosed, including invisible style and script code. To fix this, process those tags before calling strip_tags(): remove the tag pairs and their enclosed content for styles, scripts, embedded objects, and so on, and once this is done call strip_tags() to remove the remaining tags. Sample code is sketched below. Downloads: strip_html_tags.zip.
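A minimal sketch of this approach, assuming UTF-8 input; the function name matches the downloadable strip_html_tags.zip, but the tag list and regular expression here are an approximation rather than the article's verbatim code.

<?php
/**
 * Remove HTML tags, including invisible text such as style and
 * script code, and embedded objects.
 */
function strip_html_tags( $text )
{
    // First remove tag pairs whose enclosed content should never
    // appear as visible text (styles, scripts, embedded objects, ...).
    $invisible = '@<(head|style|script|object|embed|applet|noframes|noscript|noembed)[^>]*?>.*?</\1>@siu';
    $text = preg_replace( $invisible, ' ', $text );

    // Now the remaining tags can be stripped safely.
    $text = strip_tags( $text );

    // Collapse the whitespace left behind by removed markup.
    return trim( preg_replace( '/[ \t\r\n]+/u', ' ', $text ) );
}

// Example: fetch a page and print its plain text (URL is illustrative).
$html = file_get_contents( 'http://example.com/' );
echo strip_html_tags( $html );
?>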
WikipediaFS. Wikinside.com - Wiki search engine. PHP tip: How to decode HTML entities on a web page. Technologies: PHP 4.3.0+, UTF-8. HTML entities encode special characters and symbols, such as &euro; for € or &copy; for ©. When building a PHP search engine or web-page analysis tool, HTML entities within a page must be decoded into single characters to get clean, parsable text. This article is both an independent article and part of an article series on how to extract keywords from a web page. Code: HTML's character-reference syntax enables a web page to use special characters that aren't supported by the page's normal character encoding. There are three forms for an HTML character reference: named (the named form is called an HTML entity, e.g. &copy;), decimal numeric (e.g. &#169;), and hexadecimal numeric (e.g. &#xA9;). To do text processing on a web page, you need to convert HTML entities and numeric character references into normal characters. Using html_entity_decode: the html_entity_decode() function converts HTML character references into characters: $utf8_text = html_entity_decode( $text, ENT_QUOTES, "UTF-8" ); Using mb_convert_encoding: mb_convert_encoding() can also decode character references, including numeric ones. Example:
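A short illustrative example; the input string below is made up for demonstration, and the second call assumes the mbstring extension is available.

<?php
// Sample text containing a named entity, a decimal numeric
// reference, and a hexadecimal numeric reference.
$text = 'caf&eacute; menu: 5 &#8364; (i.e. 5 &#x20AC;)';

// html_entity_decode() turns HTML character references back into
// UTF-8 characters.
$utf8_text = html_entity_decode( $text, ENT_QUOTES, 'UTF-8' );

// Alternative: mbstring's HTML-ENTITIES pseudo-encoding decodes
// named entities and numeric character references in one pass.
$utf8_text = mb_convert_encoding( $text, 'UTF-8', 'HTML-ENTITIES' );

echo $utf8_text; // café menu: 5 € (i.e. 5 €)
?>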
WikiScanner. Welcome to Wikispaces - Free Wikis for Everyone.