
List of HTTP status codes Response codes of the Hypertext Transfer Protocol The Internet Assigned Numbers Authority (IANA) maintains the official registry of HTTP status codes.[1] All HTTP response status codes are separated into five classes or categories. The first digit of the status code defines the class of response, while the last two digits do not have any classifying or categorization role. There are five classes defined by the standard:
1xx informational response – the request was received, continuing process
2xx successful – the request was successfully received, understood, and accepted
3xx redirection – further action needs to be taken in order to complete the request
4xx client error – the request contains bad syntax or cannot be fulfilled
5xx server error – the server failed to fulfil an apparently valid request
1xx informational response An informational response indicates that the request was received and understood.
100 Continue
101 Switching Protocols
102 Processing (WebDAV; RFC 2518)
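As a rough illustration of the first-digit rule described above, the following Python sketch maps a response's status code to its class; the target URL and the mapping dictionary are illustrative assumptions, not part of any standard API.

```python
import urllib.request

# Hedged sketch: classify an HTTP response by the first digit of its status code.
STATUS_CLASSES = {
    1: "informational",
    2: "successful",
    3: "redirection",
    4: "client error",
    5: "server error",
}

with urllib.request.urlopen("https://example.com/") as resp:  # example.com is a placeholder
    print(resp.status, STATUS_CLASSES[resp.status // 100])    # e.g. "200 successful"
```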
Web address Web addresses, also called URLs (Uniform Resource Locator), are addresses used to identify and locate resources on the Internet, such as web pages, images, videos and files. They are generally formed by combining a communication protocol (such as HTTP or HTTPS), the domain name (or IP address) of the server hosting the resource, and a path to the specific resource. Web addresses are used to access content on the Internet through a web browser or another network client. A fundamental invention The three inventions underlying the World Wide Web are web addresses (URLs), the HTTP protocol and the HTML data format. Although a protocol (HTTP) and a data format (HTML) were developed specifically for the Web, the Web is designed to impose a minimum of technical constraints.[1] The resource is accessible as the local file page.html in the directory /home/tim/.
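To make the protocol/host/path decomposition and the local-file case above concrete, here is a small Python sketch; the host www.example.org is a placeholder, and only the /home/tim/page.html path comes from the text.

```python
from pathlib import Path

# Hedged sketch: the same kind of resource addressed over HTTP and as a local file.
remote_url = "http://www.example.org/page.html"    # protocol + domain name + path (placeholder host)
local_url = Path("/home/tim/page.html").as_uri()   # -> "file:///home/tim/page.html"
print(remote_url)
print(local_url)
```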
IPv6 IPv6 (Internet Protocol version 6) is a connectionless network protocol at layer 3 of the OSI (Open Systems Interconnection) model. IPv6 is the outcome of work carried out within the IETF during the 1990s to succeed IPv4, and its specifications were finalized in RFC 2460[1] in December 1998. IPv6 was standardized in RFC 8200[2] in July 2017. Thanks to 128-bit addresses instead of 32-bit ones, IPv6 has a much larger address space than IPv4 (more than 340 sextillion addresses, about 3.4 × 10^38, or nearly 7.9 × 10^28 times more than its predecessor). IPv6 also provides mechanisms for automatic address assignment and makes renumbering easier. In 2011, only a few companies, notably Google,[5] had begun deploying IPv6 on their internal networks. In 2023, worldwide IPv6 usage was estimated at about 40%.[8] Distribution of the IPv4 address space.[9]
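The address-space figures quoted above follow directly from the exponent arithmetic; this small Python check is purely illustrative.

```python
# Hedged sketch: verifying the IPv4 vs IPv6 address-space figures.
ipv4_addresses = 2 ** 32                         # about 4.29 billion
ipv6_addresses = 2 ** 128                        # about 3.4 x 10^38 ("340 sextillion")
print(f"{ipv6_addresses:.3e}")                   # 3.403e+38
print(f"{ipv6_addresses / ipv4_addresses:.2e}")  # 7.92e+28 times larger
```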
Computer science Computer science deals with the theoretical foundations of information and computation, together with practical techniques for the implementation and application of these foundations. History The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division. Further, algorithms for performing computations have existed since antiquity, even before sophisticated computing equipment was created. Blaise Pascal designed and constructed the first working mechanical calculator, Pascal's calculator, in 1642.[3] In 1673 Gottfried Leibniz demonstrated a digital mechanical calculator, called the 'Stepped Reckoner'.[4] He may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system. Contributions Philosophy
Plain text Text file of The Human Side of Animals by Royal Dixon, displayed by the command cat in an xterm window The encoding has traditionally been ASCII, or sometimes EBCDIC. Unicode-based encodings such as UTF-8 and UTF-16 are gradually replacing the older ASCII derivatives limited to 7- or 8-bit codes. Plain text and rich text Files that contain markup or other meta-data are generally considered plain text, as long as the entirety remains in directly human-readable form, as in HTML, XML, and so on (as Coombs, Renear, and DeRose argue,[1] punctuation is itself markup). The use of plain text rather than bit-streams to express markup enables files to survive much better "in the wild", in part by making them largely immune to computer architecture incompatibilities. According to The Unicode Standard, "Plain text is a pure sequence of character codes; plain Unicode-encoded text is therefore a sequence of Unicode character codes." Plain text, the Unicode definition Usage Encoding
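As a quick illustration of the encoding point above, the Python sketch below shows the same characters under UTF-8 and under 7-bit ASCII; the sample string is an arbitrary assumption.

```python
# Hedged sketch: the same plain-text characters under two encodings.
text = "résumé"                                   # arbitrary sample string
print(text.encode("utf-8"))                       # b'r\xc3\xa9sum\xc3\xa9' (multi-byte UTF-8)
print(text.encode("ascii", errors="replace"))     # b'r?sum?' - 7-bit ASCII cannot encode é
```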
HTML HTML or HyperText Markup Language is the standard markup language used to create web pages. HTML is written in the form of HTML elements consisting of tags enclosed in angle brackets (like <html>). HTML tags most commonly come in pairs like <h1> and </h1>, although some tags represent empty elements and so are unpaired, for example <img>. The first tag in a pair is the start tag, and the second tag is the end tag (they are also called opening tags and closing tags). The purpose of a web browser is to read HTML documents and compose them into visible or audible web pages. Web browsers can also refer to Cascading Style Sheets (CSS) to define the look and layout of text and other material. History The historic logo made by the W3C Development In 1980, physicist Tim Berners-Lee, who was a contractor at CERN, proposed and prototyped ENQUIRE, a system for CERN researchers to use and share documents. Further development under the auspices of the IETF was stalled by competing interests.
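The paired start/end tags and the unpaired empty elements described above can be observed with Python's standard html.parser module; the tiny document fed to it is an invented example.

```python
from html.parser import HTMLParser

# Hedged sketch: logging start and end tags from a minimal, invented document.
class TagLogger(HTMLParser):
    def handle_starttag(self, tag, attrs):
        print("start tag:", tag)
    def handle_endtag(self, tag):
        print("end tag:  ", tag)

TagLogger().feed("<html><h1>Hello</h1><img src='logo.png'></html>")
# <h1> ... </h1> appear as a start/end pair; <img> appears only as a start tag,
# since it is an empty (void) element with no closing tag.
```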
Fully qualified domain name Type of Internet domain name A fully qualified domain name (FQDN), sometimes also called an absolute domain name,[1] is a domain name that specifies its exact location in the tree hierarchy of the Domain Name System (DNS). It specifies all domain levels, including the top-level domain and the root zone.[2] A fully qualified domain name is distinguished by its unambiguous DNS zone location in the hierarchy of DNS labels: it can be interpreted only in one way. A fully qualified domain name is conventionally written as a list of domain labels separated using the full stop "." character (dot or period). The topmost layer of every domain name is the DNS root zone, which is expressed as an empty label and can be represented in an FQDN with a trailing dot, such as somehost.example.com.. Relative domain names Web addresses typically use FQDNs to represent the host, as it ensures the address will be interpreted identically on any network.
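To make the trailing-dot / root-label convention concrete, here is a short Python sketch; the hostname is the one from the text, and socket.getfqdn is only a best-effort lookup of the local machine's name.

```python
import socket

# Hedged sketch: an absolute (fully qualified) name ends in the empty root label.
fqdn = "somehost.example.com."
labels = fqdn.split(".")          # ['somehost', 'example', 'com', '']
print(labels)
print(labels[-1] == "")           # True: the empty final label is the DNS root zone
print(socket.getfqdn())           # best-effort FQDN of the local machine
```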
Category:Image processing Image processing is the application of signal processing techniques to the domain of images — two-dimensional signals such as photographs or video. Image processing typically involves filtering an image using various types of filters. Related categories: computer vision and imaging.
Uniform Resource Locator Web address to a particular file or page A uniform resource locator (URL), colloquially known as an address on the Web, is a reference to a resource that specifies its location on a computer network and a mechanism for retrieving it. A URL is a specific type of Uniform Resource Identifier (URI),[2] although many people use the two terms interchangeably.[a] URLs are most commonly used to reference web pages (HTTP/HTTPS) but are also used for file transfer (FTP), email (mailto), database access (JDBC), and many other applications. Most web browsers display the URL of a web page above the page in an address bar. A typical URL could have the form http://www.example.com/index.html, which indicates a protocol (http), a hostname (www.example.com), and a file name (index.html). History Early WorldWideWeb collaborators including Berners-Lee originally proposed the use of UDIs: Universal Document Identifiers. Syntax Every HTTP URL conforms to the syntax of a generic URI, comprising a scheme, an optional authority (user information, host, and port), a path, and optional query and fragment components.
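As a sketch of the generic-URI decomposition just described, Python's urllib.parse can split an example address into its components; the query and fragment in this URL are invented for illustration.

```python
from urllib.parse import urlsplit

# Hedged sketch: splitting a URL into generic URI components.
parts = urlsplit("http://www.example.com/index.html?lang=en#top")
print(parts.scheme)     # 'http'
print(parts.netloc)     # 'www.example.com'  (the authority / hostname)
print(parts.path)       # '/index.html'
print(parts.query)      # 'lang=en'
print(parts.fragment)   # 'top'
```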
Data (computing) In an alternate usage, binary files (which are not human-readable) are sometimes called "data" as distinguished from human-readable "text".[4] The total amount of digital data in 2007 was estimated to be 281 billion gigabytes (= 281 exabytes).[5][6] At its heart, a single datum is a value stored at a specific location. To store data bytes in a file, they have to be serialized in a "file format". Typically, programs are stored in special file types, different from those used for other data. Keys in data provide the context for values. Computer main memory or RAM is arranged as an array of "sets of electronic on/off switches" or locations beginning at 0. Data has some inherent features when it is sorted on a key. Retrieving a small subset of data from a much larger set can require searching through the data sequentially. The advent of databases introduced a further layer of abstraction for persistent data storage.
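To illustrate serialization and keys-as-context as described above, here is a small Python sketch using JSON as the file format; the record contents are invented.

```python
import json

# Hedged sketch: keys give raw values their context, and serialization turns the
# in-memory structure into bytes in a specific file format (JSON here).
record = {"name": "Ada", "year": 1842}            # invented example values
serialized = json.dumps(record).encode("utf-8")   # bytes ready to be written to a file
print(serialized)                                 # b'{"name": "Ada", "year": 1842}'
print(json.loads(serialized))                     # {'name': 'Ada', 'year': 1842}
```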
Website Any web page served from a single domain A website (also written as a web site) is any web page whose content is identified by a common domain name and is published on at least one web server. Websites are typically dedicated to a particular topic or purpose, such as news, education, commerce, entertainment, or social media. Hyperlinking between web pages guides the navigation of the site, which often starts with a home page. The most-visited sites are Google, YouTube, and Facebook.[1][2] Background The World Wide Web (WWW) was created in 1989 by the British computer scientist Tim Berners-Lee while working at CERN.[3][4] On 30 April 1993, CERN announced that the World Wide Web would be free to use for anyone, contributing to the immense growth of the Web.[5] Before the introduction of the Hypertext Transfer Protocol (HTTP), other protocols such as File Transfer Protocol and the gopher protocol were used to retrieve individual files from a server. History Static website Dynamic website Types See also References
Technological singularity The technological singularity is the hypothesis that accelerating progress in technologies will cause a runaway effect wherein artificial intelligence will exceed human intellectual capacity and control, thus radically changing civilization in an event called the singularity.[1] Because the capabilities of such an intelligence may be impossible for a human to comprehend, the technological singularity is an occurrence beyond which events may become unpredictable, unfavorable, or even unfathomable.[2] The first use of the term "singularity" in this context was by mathematician John von Neumann. Proponents of the singularity typically postulate an "intelligence explosion",[5][6] in which superintelligences design successive generations of increasingly powerful minds; this process might occur very quickly and might not stop until the agent's cognitive abilities greatly surpass those of any human. Basic concepts Superintelligence Non-AI singularity Intelligence explosion Exponential growth Plausibility