Ushanka

Monday, April 14, 2008

Amazon EC2: Persistent Storage

Amazon has just started a private beta program for a new persistent storage API in EC2. According to the documentation, the API lets you create and manage volumes between 1GB and 1TB in size that behave like unformatted disks. Each volume is persistent and independent of any EC2 instance, and a single instance can mount multiple volumes. The volumes are supposed to offer low latency and high throughput, with API calls to snapshot them onto S3.

The lack of persistent storage has been the biggest challenge for developers on EC2, which (in my experience) has rather high instance failure rates: when an instance dies, everything on its local disk dies with it. With this persistent storage API (scheduled for public release later this year), Amazon has just made EC2 a dead-easy buy-in.
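
For the curious, the workflow described would look something along these lines in Python. This is only a sketch against boto's EC2 interface as it later shipped publicly; the private-beta API may differ, and the instance ID below is a placeholder:

    # Sketch of the create/attach/snapshot workflow the documentation
    # describes, written against boto's EC2 API (the private-beta
    # interface may differ). The instance ID is a placeholder.
    from boto.ec2.connection import EC2Connection

    conn = EC2Connection()                      # AWS keys read from the environment
    vol = conn.create_volume(50, "us-east-1a")  # 50 GB volume in one zone
    conn.attach_volume(vol.id, "i-12345678", "/dev/sdh")  # shows up as a raw disk
    snap = conn.create_snapshot(vol.id)         # durable snapshot stored in S3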


Saturday, March 1, 2008

Amazon EC2: The Potential

Amazon's Elastic Compute Cloud (EC2) is easily their most powerful web service offering. With EC2, you get flexible, on-demand computing resources: launching an instance gives you full access to a brand-new machine and its resources. Each instance-hour costs $0.10, which works out to less than $80 per month (744 hours × $0.10 = $74.40 for a 31-day month) even running 24/7! What's more, you can launch as many instances as you'd like, so you can run your own network of machines hosted by Amazon. The clincher? All data transferred between EC2 instances and S3 is free!

The default configuration has the following specs:

  • CPU: 32-bit, roughly a 1.0-1.2 GHz Opteron/Xeon equivalent
  • RAM: 1.7 GB
  • Disk: 160 GB
  • Network: 100 Mbit/s

They have additional configurations if you need more resources on a single machine; see the EC2 site for details. Under the hood, every instance you launch is a virtual machine running on the Xen hypervisor rather than a dedicated physical machine.

Amazon designed EC2 primarily for running lots of computationally expensive operations - things like batch video encoding or image recognition. Instead of making a large hardware investment to perform these (potentially one-shot) tasks, you run them in parallel on a few (hundred?) EC2 instances. Once the tasks are complete, you just shut the instances down and the billing stops there. Amazon's vision for EC2 is pretty sweet, but the reality is that there's far more potential here.

EC2 is the next-generation data center.

Instead of doing capacity planning as with a traditional data center, with EC2 I could monitor the load on my server and programmatically launch parallel instances when it crosses a utilization threshold. When utilization drops again, I can terminate the extra instances and return to a fairly quiescent state. With free traffic between EC2 and S3, I can churn through collected data as many times as I need, just as in Amazon's vision. Amazon could even issue hardware upgrades (e.g. more RAM) to running instances without a reboot! And with the inexpensive per-hour prices, any small business can afford to keep an active standby. The flexibility of programmatically managing machines in a virtualized data center is tremendous. Coupled with Amazon's pricing model, this sort of service is poised to take serious market share away from traditional, physical data centers.
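
As a rough sketch of what that control loop could look like, here's the idea in Python using the boto library. current_utilization() is a hypothetical monitoring hook you'd supply yourself, and the AMI ID is made up:

    # Sketch of a load-driven scaling loop using boto's EC2 API.
    # current_utilization() is a hypothetical monitoring hook you'd
    # implement yourself; the AMI ID below is made up.
    import time
    from boto.ec2.connection import EC2Connection

    conn = EC2Connection()  # AWS keys read from the environment
    extras = []             # instances launched beyond the baseline

    while True:
        load = current_utilization()
        if load > 0.8:                   # above threshold: add a worker
            reservation = conn.run_instances("ami-12345678")
            extras.extend(i.id for i in reservation.instances)
        elif load < 0.2 and extras:      # quiescent again: shed a worker
            conn.terminate_instances([extras.pop()])
        time.sleep(60)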

While EC2 has the potential to be all of this and probably much more, it's not currently ready to displace traditional data centers. In a subsequent post, I'll discuss some of the issues preventing EC2 from realizing this dream.


Sunday, February 10, 2008

A Closer Look: Amazon S3

One of the most mature Amazon web services, Amazon Simple Storage Service (S3) provides virtually unlimited data storage. That's right: you can upload as much data as you'd like, and it will be held on Amazon's machines with all the network capacity you could ever want and with redundancy built in. Hard drive failures are among the most common causes of server downtime, and Amazon has taken on the burden of managing all those devices and the failures that go along with them. As the name implies, the service is designed for simple access, so you can't do funky things like mount the storage as a filesystem directly.
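
To give a feel for just how simple that access model is, here's a minimal example using the boto Python library (the bucket and key names are made up):

    # Minimal sketch of storing and fetching an object with the boto
    # Python library. The bucket and key names here are made up.
    from boto.s3.connection import S3Connection

    conn = S3Connection()                     # AWS keys read from the environment
    bucket = conn.create_bucket("my-backup-bucket")
    key = bucket.new_key("notes/todo.txt")
    key.set_contents_from_string("ship the upload tool")
    print(key.get_contents_as_string())       # round-trips through S3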

I've been using S3 for over a year and I haven't had any reliability issues with it. Others have seen brief outages, but those mostly date from when the service was first introduced. I'm quite happy with S3, but there's one missing feature that keeps it from being the ultimate simple storage service: range-PUT.

Suppose I've got a file on S3 and I want to update a small part of it. Today, I have to transfer the entire file again with an HTTP PUT to store it on the remote host. With range-PUT, I could use the Content-Range header to specify just the range of bytes that changed within the file and transfer only that portion. This would save a lot of bandwidth (and, consequently, money) whenever files are partially modified.
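
To make that concrete, here's roughly what such a request might look like. S3 does not actually accept Content-Range on a PUT today, so this is purely hypothetical (the usual S3 auth headers are omitted):

    # Purely hypothetical: S3 does NOT accept Content-Range on a PUT.
    # This sketches replacing bytes 1024-2047 of a 5 MB object.
    import http.client

    changed = b"x" * 1024                     # the 1 KB region that changed
    conn = http.client.HTTPConnection("mybucket.s3.amazonaws.com")
    conn.request("PUT", "/song.mp3", body=changed, headers={
        "Content-Range": "bytes 1024-2047/5242880",   # start-end/total size
        # ... plus the usual Date and Authorization headers S3 requires
    })
    print(conn.getresponse().status)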

Of course, supporting Content-Range opens up a can of worms. What happens if the file doesn't exist and my range doesn't start at offset 0? What if the file does exist but the start offset is beyond the end of the file (i.e. not a simple append)? I can think of two reasonable solutions: return an error, or create the file if it doesn't exist and zero-pad the holes. The former would be easier to implement, while the latter would behave like Linux sparse files.
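
The zero-padding behaviour is easy to demonstrate locally, since it's exactly what happens when you write past end-of-file on Linux:

    # Local demonstration of the zero-pad semantics: writing past
    # end-of-file leaves a hole that reads back as zeros (and stays
    # sparse on filesystems that support it).
    with open("sparse.bin", "wb") as f:
        f.seek(1_000_000)          # seek well past end-of-file
        f.write(b"tail")           # bytes 0..999999 now read as zeros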

There are two major classes of application that range-PUT would suit. The first is applications that always append to the end of the file. Log files fit into this category but, more importantly, so does resuming broken transfers: when uploading large files (S3 supports objects of up to 5GB), I've found that my connections often get dropped, so if I could just append to an existing file, I could write an upload tool that auto-resumes. The second class is applications that update only part of a file, most often to change some metadata. For example, if I modify the metadata of an MP3 file, I'd rather upload the few changed bytes than the whole MP3 again; the music is the same, only the metadata has changed. The problem is even worse with video files.
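
Here's the shape an auto-resuming uploader could take if appends were allowed. remote_size() and put_range() are hypothetical stand-ins for a HEAD request and the range-PUT described above:

    # Sketch of an auto-resuming uploader, assuming S3 honoured
    # append-style range-PUTs (it does not). remote_size() and
    # put_range() are hypothetical stand-ins for a HEAD request
    # and the range-PUT described above.
    import os

    def resume_upload(path, url, chunk_size=1 << 20):
        offset = remote_size(url)             # bytes already uploaded
        total = os.path.getsize(path)
        with open(path, "rb") as f:
            f.seek(offset)
            while offset < total:
                chunk = f.read(chunk_size)
                put_range(url, chunk, offset, total)   # append this chunk
                offset += len(chunk)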

S3 is a fantastic storage service. It's reliable, it's cheap, and it takes away the hassle of managing your own hardware or creating a highly-available, redundant persistent store. If S3 supported range-PUT, it would save a huge amount of bandwidth resulting in an even lower cost of operation.


A Closer Look: Amazon Web Services

Amazon has been doing some pretty nifty stuff lately. They've exposed their computing infrastructure to the rest of the world via RESTful web services. I think it's a brilliant move by Jeff Bezos and it realizes some of the technology promises of the last decade or so.

Amazon's web services have been getting a lot of great reviews from many bloggers, and deservedly so. With their pay-as-you-use model, it's amazingly easy to scale up or down based entirely on workload; there's virtually no need for capacity planning and no need to manage physical systems at all! I've been using their web services for over a year now for a variety of tasks, and the experience has been, for the most part, quite pleasant. They've saved me a lot of time and money, but their services do have their faults. I'll be discussing some of the issues I've encountered with the design or implementation of their services over the next few posts.
