OSI model

The Open Systems Interconnection model (OSI model) is a conceptual model that characterizes and standardizes the communication functions of a telecommunication or computing system without regard to its underlying internal structure and technology. Its goal is the interoperability of diverse communication systems with standard protocols. The model partitions a communication system into abstraction layers; the original version of the model defined seven. A layer serves the layer above it and is served by the layer below it. The model is a product of the Open Systems Interconnection project at the International Organization for Standardization (ISO), which in the late 1970s conducted a program to develop general standards and methods of networking.
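To make the "serves the layer above, served by the layer below" relationship concrete, here is a minimal sketch (not part of any standard; the header strings are purely illustrative) of how each layer wraps the payload handed down from the layer above on the way out, and unwraps it on the way back up:

```python
# Minimal sketch of OSI-style layering: each layer serves the layer
# above it by wrapping that layer's data in its own header. The seven
# layer names are standard; the "headers" here are illustrative only.

LAYERS = [
    "application", "presentation", "session",
    "transport", "network", "data link", "physical",
]

def encapsulate(payload: bytes) -> bytes:
    """Walk down the stack, wrapping the payload at every layer."""
    for layer in LAYERS:
        payload = f"[{layer} hdr]".encode() + payload
    return payload

def decapsulate(frame: bytes) -> bytes:
    """Walk back up the stack, stripping each layer's header in turn."""
    for layer in reversed(LAYERS):
        frame = frame.removeprefix(f"[{layer} hdr]".encode())
    return frame

message = b"hello"
assert decapsulate(encapsulate(message)) == message
```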

Amazon S3
At its inception, Amazon charged end users US$0.15 per gigabyte-month, with additional charges for bandwidth used in sending and receiving data, and a per-request (GET or PUT) charge.[4] On November 1, 2008, pricing moved to tiers in which end users storing more than 50 terabytes receive discounted pricing.[5] Amazon says that S3 uses the same scalable storage infrastructure that Amazon.com uses to run its own global e-commerce network.[6] Amazon S3 was reported to store more than 2 trillion objects as of April 2013,[7] up from 102 billion objects as of March 2010,[8] 64 billion in August 2009,[9] 52 billion in March 2009,[10] 29 billion in October 2008,[5] 14 billion in January 2008, and 10 billion in October 2007.[11] S3 uses include web hosting, image hosting, and storage for backup systems. S3 guarantees 99.9% monthly uptime,[12] i.e. not more than about 43 minutes of downtime per month.[13]
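The per-request pricing reflects S3's simple object model: you PUT an object into a bucket and GET it back by key. As a minimal sketch using the boto3 SDK (the bucket name and key below are illustrative assumptions, and configured AWS credentials are assumed):

```python
# Sketch of S3's put/get object model using the boto3 SDK.
# "example-bucket" and the key are illustrative assumptions; valid AWS
# credentials are assumed to be available in the environment.
import boto3

s3 = boto3.client("s3")

# PUT: store an object (a billable request, plus storage and bandwidth).
s3.put_object(Bucket="example-bucket", Key="backups/notes.txt",
              Body=b"hello from S3")

# GET: retrieve the same object (also a billable request).
response = s3.get_object(Bucket="example-bucket", Key="backups/notes.txt")
print(response["Body"].read())  # b'hello from S3'
```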

Chapter 1. Overview - OpenStack Object Storage Developer Guide - API v1
OpenStack Object Storage is an affordable, redundant, scalable, and dynamic storage service offering. The core storage system is designed to provide a safe, secure, automatically re-sizing, and network-accessible way to store data. You can store an unlimited quantity of files, and each file can be as large as 5 GB; with large object creation, you can upload and store objects of virtually any size. OpenStack Object Storage enables you to store and retrieve files and content through a Representational State Transfer (REST) interface.
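As a sketch of what that REST interface looks like in practice, the v1 API stores an object with an HTTP PUT and fetches it with a GET, authenticated via the X-Auth-Token header. The endpoint, account, container, object name, and token below are all illustrative assumptions:

```python
# Sketch of the Object Storage v1 REST interface with the requests
# library. Endpoint, account, container, object name, and token are
# illustrative assumptions; the token normally comes from the auth service.
import requests

ENDPOINT = "https://storage.example.com/v1/AUTH_myaccount"  # assumed
TOKEN = "your-auth-token"  # assumed

url = f"{ENDPOINT}/my-container/notes.txt"
headers = {"X-Auth-Token": TOKEN}

# PUT the object's bytes into the container.
resp = requests.put(url, headers=headers, data=b"hello swift")
resp.raise_for_status()  # 201 Created on success

# GET the object back.
resp = requests.get(url, headers=headers)
print(resp.content)  # b'hello swift'
```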

Amazon DynamoDB
DynamoDB differs from other Amazon services by allowing developers to purchase a service based on throughput rather than storage. Although the database will not scale automatically, administrators can request more throughput, and DynamoDB will spread the data and traffic over a number of servers using solid-state drives, allowing predictable performance.[1] It offers integration with Hadoop via Elastic MapReduce. In September 2013, Amazon made available a local development version of DynamoDB so that developers can test DynamoDB-backed applications locally.[3]
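Both points, throughput-based provisioning and the local development version, show up directly in the API. A minimal boto3 sketch (the table name and capacity numbers are assumptions; pointing the client at localhost targets DynamoDB Local, and dropping endpoint_url targets the real service):

```python
# Sketch of DynamoDB's throughput-based model with boto3. The localhost
# endpoint targets the local development version; table name, key, and
# capacity numbers are illustrative assumptions.
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1",
                          endpoint_url="http://localhost:8000")

# Capacity is purchased as throughput (reads/writes per second),
# not as gigabytes of storage.
table = dynamodb.create_table(
    TableName="Notes",
    KeySchema=[{"AttributeName": "note_id", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "note_id", "AttributeType": "S"}],
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)
table.wait_until_exists()

table.put_item(Item={"note_id": "n1", "text": "hello dynamo"})
print(table.get_item(Key={"note_id": "n1"})["Item"])
```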

IBM General Parallel File System
The General Parallel File System (GPFS) is a high-performance clustered file system developed by IBM. It can be deployed in shared-disk or shared-nothing distributed parallel modes. It is used by many of the world's largest commercial companies, as well as some of the supercomputers on the TOP500 list.[1] For example, GPFS was the filesystem of the ASC Purple supercomputer,[2] which was composed of more than 12,000 processors and had 2 petabytes of total disk storage spanning more than 11,000 disks. In common with typical cluster filesystems, GPFS provides concurrent high-speed file access to applications executing on multiple nodes of clusters. It can be used with AIX 5L clusters, Linux clusters, Microsoft Windows Server, or a heterogeneous cluster of AIX, Linux, and Windows nodes. In addition to providing filesystem storage capabilities, GPFS provides tools for management and administration of the GPFS cluster and allows for shared access to file systems from remote GPFS clusters.
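That concurrent access is exposed to applications through ordinary POSIX semantics, so code written for a local filesystem runs unchanged across the cluster. A minimal sketch, assuming a GPFS filesystem mounted at /gpfs/fs1 (the mount point and file name are illustrative assumptions):

```python
# Sketch: to an application, GPFS looks like an ordinary POSIX file
# system, but writes and byte-range locks are coherent across nodes.
# The mount point /gpfs/fs1 is an assumption for illustration.
import fcntl
import os
import socket

path = "/gpfs/fs1/shared.log"

# Append a record under an exclusive lock; the same code can run
# concurrently on many cluster nodes without interleaving records.
with open(path, "a") as f:
    fcntl.lockf(f, fcntl.LOCK_EX)
    f.write(f"hello from node {socket.gethostname()} pid {os.getpid()}\n")
    fcntl.lockf(f, fcntl.LOCK_UN)
```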

Apache Hadoop
Apache Hadoop is an open-source software framework for storage and large-scale processing of data sets on clusters of commodity hardware. Hadoop is an Apache top-level project being built and used by a global community of contributors and users.[2] It is licensed under the Apache License 2.0. The Apache Hadoop framework is composed of the following modules:

Hadoop Common – contains libraries and utilities needed by other Hadoop modules.
Hadoop Distributed File System (HDFS) – a distributed file system that stores data on commodity machines, providing very high aggregate bandwidth across the cluster.
Hadoop YARN – a resource-management platform responsible for managing compute resources in clusters and using them to schedule users' applications.
Hadoop MapReduce – a programming model for large-scale data processing (see the sketch below).

Apache Hadoop is a registered trademark of the Apache Software Foundation. Hadoop was created by Doug Cutting and Mike Cafarella[5] in 2005.
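To ground the MapReduce model, here is the classic word count as it might be written for Hadoop Streaming, which pipes input through a mapper and a reducer over stdin/stdout; the script name and command-line flag are illustrative assumptions:

```python
# wordcount.py - a sketch of the MapReduce programming model in the
# Hadoop Streaming style (mappers and reducers read stdin, write stdout).
# The script name and the "map"/"reduce" argument are assumptions.
import sys

def mapper():
    # Map: emit ("word", 1) for every word of input.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Reduce: the framework delivers input sorted by key, so counts
    # for one word arrive adjacent and a running total suffices.
    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = word, 0
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    if sys.argv[1:] == ["map"]:
        mapper()
    else:
        reducer()
```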
