Using Amazon DynamoDB Object Mapping (OM) with the AWS SDK for Android

Note: This article and sample apply to Version 1 of the AWS Mobile SDK. If you are building new apps, we recommend you use Version 2; for details, please visit the AWS Mobile SDK page.

Amazon DynamoDB is a fast, highly scalable, highly available, cost-effective, non-relational database service. The AWS SDK for Android supports Amazon DynamoDB, and this article discusses an add-on library for the SDK that enables you to map your client-side classes to Amazon DynamoDB tables. The complete sample code and project files are included in the AWS SDK for Android.

In Amazon DynamoDB, a database is a collection of tables. The sample app demonstrates how to add, modify, and remove users, and how to retrieve their preference data, using Amazon DynamoDB OM. To make low-level service requests to Amazon DynamoDB, you first instantiate an Amazon DynamoDB client and wrap it in an object mapper; you then define a mapping class for your table and use the mapper to create, load, and delete items.
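A minimal sketch of the client-and-mapper setup the article describes, using the Version 1 SDK's standard object-mapping classes. The "Users" table name, attribute names, and inline credentials are illustrative assumptions, not the article's exact sample:

    import com.amazonaws.auth.BasicAWSCredentials;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
    import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAttribute;
    import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
    import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;
    import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;

    // Mapping class: each instance corresponds to one item in a hypothetical "Users" table.
    @DynamoDBTable(tableName = "Users")
    public class User {
        private String userId;      // hash key
        private String colorTheme;  // a sample preference attribute

        @DynamoDBHashKey(attributeName = "userId")
        public String getUserId() { return userId; }
        public void setUserId(String userId) { this.userId = userId; }

        @DynamoDBAttribute(attributeName = "colorTheme")
        public String getColorTheme() { return colorTheme; }
        public void setColorTheme(String colorTheme) { this.colorTheme = colorTheme; }
    }

With the mapping class in place, item creation, retrieval, and removal each become a single mapper call:

    // Low-level client for service requests, wrapped by the object mapper.
    // Hardcoded credentials are for illustration only; a real app should not embed them.
    AmazonDynamoDBClient client =
            new AmazonDynamoDBClient(new BasicAWSCredentials(accessKey, secretKey));
    DynamoDBMapper mapper = new DynamoDBMapper(client);

    User user = new User();
    user.setUserId("alice");
    user.setColorTheme("dark");
    mapper.save(user);                               // item creation
    User loaded = mapper.load(User.class, "alice");  // retrieval by hash key
    mapper.delete(loaded);                           // removal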
Srinath's Blog: My views of the World: List of Known Scalable Architecture Templates

For most architects, "scale" is the most elusive aspect of software architecture. Not surprisingly, it is also one of the most sought-after goals of today's software design. However, computer scientists do not yet know of a single architecture that can scale for all scenarios. We learn art by studying masterpieces, and scale should be no different!

LB (Load Balancer) + Shared-Nothing Units - This model consists of a set of units that share nothing with each other, fronted by a load balancer that routes each incoming message to a unit based on some criterion (round-robin, current load, etc.), as the sketch below illustrates. However, combining these templates into a scalable architecture is not a trivial undertaking.
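A minimal sketch of the routing half of this template, assuming a simple round-robin policy; the Unit interface and class names are hypothetical, for illustration only:

    import java.util.List;
    import java.util.concurrent.atomic.AtomicInteger;

    // Hypothetical shared-nothing unit: handles a request using only its own state.
    interface Unit {
        String handle(String request);
    }

    // Round-robin load balancer: because no unit shares state with any other,
    // any unit can serve any request, and adding units adds capacity.
    class RoundRobinBalancer {
        private final List<Unit> units;
        private final AtomicInteger next = new AtomicInteger();

        RoundRobinBalancer(List<Unit> units) { this.units = units; }

        String route(String request) {
            // Pick the next unit in rotation; floorMod keeps the index
            // valid even after the counter wraps around.
            int i = Math.floorMod(next.getAndIncrement(), units.size());
            return units.get(i).handle(request);
        }
    }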
My Blog: AWS Diagrams Adobe Illustrator Object Collection: First Release

Due to popular demand, I've decided to release the collection of vector graphics objects I use to draw Amazon Web Services architecture diagrams. This is the first release, and more are on the way. It is an Adobe Illustrator CS5 (.AI) file; I obtained the artwork from the original AWS architecture PDF files published at the AWS Architecture Center. You can open the file in Adobe Illustrator to create your own diagrams, or export the objects to SVG format and work with them in GNU software. The file has been saved in "PDF Compatibility Mode", so plenty of utilities can import it without the need for Adobe Illustrator (with Inkscape, for instance).

Disclaimer: I provide this content as is.

Download link:

And that's it.
The System of Record Approach to Multi-Master Database Applications

Multi-master database systems that span sites are an increasingly common requirement in business applications. Yet the way such applications work in practice is not quite what you would think from accounts of NoSQL systems. In this article I would like to introduce a versatile design pattern for multi-master SQL applications in which each individual schema is updated in a single location only, but may have many copies elsewhere, both locally and on other sites. This pattern is known as a system of record architecture: each schema has one definitive and singular source of operational data. You can build it with off-the-shelf MySQL and master/slave replication.

Let's start by picking a representative software-as-a-service (SaaS) application: call center automation. The ideal solution for most SaaS vendors would be to have call center data and applications for all customers live on multiple sites at all times. This solution has only one problem: can it be done with a SQL DBMS? Fortunately, we are not really stuck.
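A minimal sketch of the write-routing idea behind this pattern, assuming each tenant schema has exactly one home master and read-only replicas elsewhere; the DataSource wiring and tenant map are hypothetical illustration, not the article's code:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.util.Map;
    import javax.sql.DataSource;

    // Each tenant schema has exactly one system-of-record master;
    // copies on other sites are read-only replicas fed by replication.
    class SystemOfRecordRouter {
        private final Map<String, DataSource> masterByTenant; // tenant -> home master
        private final DataSource localReplica;                // read-only local copy

        SystemOfRecordRouter(Map<String, DataSource> masterByTenant,
                             DataSource localReplica) {
            this.masterByTenant = masterByTenant;
            this.localReplica = localReplica;
        }

        // All writes for a tenant go to that tenant's single home master,
        // so there is never more than one writer per schema.
        Connection writeConnection(String tenant) throws SQLException {
            DataSource master = masterByTenant.get(tenant);
            if (master == null) {
                throw new SQLException("No system of record for tenant " + tenant);
            }
            return master.getConnection();
        }

        // Reads may be served from any replica, accepting some replication lag.
        Connection readConnection() throws SQLException {
            return localReplica.getConnection();
        }
    }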
Using Amazon DynamoDB and Amazon Elastic MapReduce

The integration of Amazon Elastic MapReduce (Amazon EMR) with Amazon DynamoDB enables several scenarios. For example, using a Hive cluster launched within Amazon EMR, you can export DynamoDB data to Amazon Simple Storage Service (Amazon S3) or upload it to a native Hive table. This walkthrough is presented first in a video and then in step-by-step instructions. You'll learn how to set up a Hive cluster, export DynamoDB data to Amazon S3, upload data to a native Hive table, and execute complex queries for business intelligence reporting or data mining. Because the analytical queries run against the exported copy, you can query the data without using a lot of DynamoDB capacity units or interfering with your running application. When you have completed this walkthrough, you will have an Amazon DynamoDB table with sample data, an Amazon S3 bucket with exported data, an EMR job flow, two Apache Hive external tables, and one native Hive table.
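A rough sketch of the Hive statements at the heart of this kind of export, submitted here over Hive's JDBC interface from Java; the table names, column mapping, bucket path, and connection URL are placeholders, and the storage-handler and property names follow the form documented for the EMR DynamoDB connector. It assumes the Hive JDBC driver jar is on the classpath:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class HiveExportSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder HiveServer2 endpoint on the EMR master node.
            Connection conn = DriverManager.getConnection(
                    "jdbc:hive2://emr-master-node:10000/default", "hadoop", "");
            try (Statement stmt = conn.createStatement()) {
                // External table backed directly by the live DynamoDB table.
                stmt.execute(
                    "CREATE EXTERNAL TABLE dynamo_users (user_id string, score bigint) " +
                    "STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler' " +
                    "TBLPROPERTIES (" +
                    "  'dynamodb.table.name' = 'Users'," +
                    "  'dynamodb.column.mapping' = 'user_id:userId,score:score')");

                // External table over an S3 location, used as the export target.
                stmt.execute(
                    "CREATE EXTERNAL TABLE s3_users (user_id string, score bigint) " +
                    "LOCATION 's3://my-example-bucket/users-export/'");

                // Copy from DynamoDB to S3; later analytical queries read the
                // S3 copy instead of consuming DynamoDB capacity units.
                stmt.execute(
                    "INSERT OVERWRITE TABLE s3_users SELECT * FROM dynamo_users");
            } finally {
                conn.close();
            }
        }
    }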
Scalability patterns and an interesting story...

Some SSL/TLS basics are available here. SSL provides authentication, confidentiality, and integrity. Authentication normally covers the server; less commonly, the server can also request authentication of the client. Confidentiality and integrity rest on pretty solid theoretical grounds in hard-core cryptography, so not much else needs to be said there. Authentication is what really distinguishes a public key infrastructure such as SSL/TLS from conventional cryptography, where security basically depends on two parties sharing a common secret. So the title of the blog post sounds like it's contrary to security best practice, right? There are a few problems with this, though.

Rule 1 - Use private SSL certificates for enterprise server-side applications. The certificate or the private CA will then have to be installed into the certificate trust store manually.
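A minimal sketch of that manual trust-store step in Java, assuming the private CA's certificate is available as a DER or PEM file on disk; the file path and alias are placeholders:

    import java.io.FileInputStream;
    import java.security.KeyStore;
    import java.security.cert.CertificateFactory;
    import java.security.cert.X509Certificate;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.TrustManagerFactory;

    public class PrivateCaTrust {
        public static SSLContext contextTrusting(String caCertPath) throws Exception {
            // Load the private CA's certificate from disk.
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            X509Certificate ca;
            try (FileInputStream in = new FileInputStream(caCertPath)) {
                ca = (X509Certificate) cf.generateCertificate(in);
            }

            // Build an in-memory trust store containing only that CA,
            // mirroring the manual "install into the trust store" step.
            KeyStore trustStore = KeyStore.getInstance(KeyStore.getDefaultType());
            trustStore.load(null, null); // start from an empty store
            trustStore.setCertificateEntry("private-ca", ca);

            TrustManagerFactory tmf = TrustManagerFactory.getInstance(
                    TrustManagerFactory.getDefaultAlgorithm());
            tmf.init(trustStore);

            SSLContext ctx = SSLContext.getInstance("TLS");
            ctx.init(null, tmf.getTrustManagers(), null);
            // Use this context for client connections to servers whose
            // certificates are signed by the private CA.
            return ctx;
        }
    }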
AWS Redshift: How Amazon Changed The Game – AK Tech Blog

Edit: Thank you to Curt Monash, who points out that Netezza is available for as little as $20k/TB/year with hardware (and 2.25x compression), and that there is an inconsistency between my early price estimates and the fraction I quote in my conclusion. I've incorporated his observations into my corrections below. I've also changed a sentence in the conclusion to make the point that the $5k/TB/year TCO number is the effective TCO, given that a Redshift cluster that can perform these queries at the desired speed has far more storage than is needed just to hold the tables for the workloads I tested.

Author's Note: I'll preface this post with a warning: some of the content will be inflammatory if you go into it with the mindset that I'm trying to sell you on an alternative to Hadoop.

Fast-forward to (nearly) the present day: the business has grown, and we have ourselves a medium-to-large (1-200TB) data query problem and some cash to solve it. Present day: we hear about AWS's Redshift offering.
Scalability Best Practices: Lessons from eBay

At eBay, one of the primary architectural forces we contend with every day is scalability. It colors and drives every architectural and design decision we make. With hundreds of millions of users worldwide, over two billion page views a day, and petabytes of data in our systems, this is not a choice - it is a necessity. In a scalable architecture, resource usage should increase linearly (or better) with load, where load may be measured in user traffic, data volume, etc. Where performance is about the resource usage associated with a single unit of work, scalability is about how resource usage changes as units of work grow in number or size. There are many facets to scalability - transactional, operational, and development effort.

Best Practice #1: Partition by Function. Whether you call it SOA, functional decomposition, or simply good engineering, related pieces of functionality belong together, while unrelated pieces of functionality belong apart.

Best Practice #2: Split Horizontally. Where functional partitioning divides the system by what it does, a horizontal split spreads each function's load and data across many identically configured units - for example, by hashing a key to pick a shard, as in the sketch below.
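A minimal sketch of the key-hash routing behind a horizontal split; the shard count and names are illustrative assumptions, not eBay's implementation:

    // Routes each item to one of N identical shards by hashing its key.
    // Every shard runs the same code; only the data it holds differs.
    class ShardRouter {
        private final int shardCount;

        ShardRouter(int shardCount) { this.shardCount = shardCount; }

        int shardFor(String key) {
            // floorMod guards against negative hash codes.
            return Math.floorMod(key.hashCode(), shardCount);
        }
    }

    // Usage: all data for a given user lands on the same shard,
    // so user-scoped operations touch only one unit.
    // int shard = new ShardRouter(16).shardFor("user-12345");

One design note: plain modulo routing reassigns most keys whenever the shard count changes, so systems that expect to reshard often use consistent hashing or a directory lookup instead.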