Ushanka

Monday, April 14, 2008

Amazon EC2: Persistent Storage

Amazon has just started a private beta program for a new persistent storage API in EC2. According to their documentation, they provide an API to create and manage volumes between 1GB and 1TB in size that behave like unformatted disks. Each volume is persistent and independent of EC2 instances, and a single EC2 instance can mount multiple volumes. The volumes are supposed to offer low latency and high throughput, and the API includes calls to snapshot volumes onto S3.
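Since the API is still in private beta, here's only a speculative sketch of what managing a volume might look like from Python, written in the style of the boto library. Every call, ID, and parameter below is an assumption based on Amazon's description, not the actual interface:

```python
# Speculative sketch: creating, attaching, and snapshotting a volume.
# The API is in private beta, so treat these boto-style calls as
# assumptions; the instance ID and device name are placeholders.
import boto

conn = boto.connect_ec2()  # credentials come from the environment

# Create a 50GB volume in a chosen availability zone.
volume = conn.create_volume(50, "us-east-1a")

# Attach it to a running instance as an unformatted block device;
# the instance can then mkfs and mount it like any local disk.
conn.attach_volume(volume.id, "i-12345678", "/dev/sdh")

# Back the volume up to S3 with a snapshot.
snapshot = conn.create_snapshot(volume.id)
```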

A lack of persistent storage has been the biggest challenge for developers on EC2, which (in my experience) has rather high instance failure rates. With this persistent storage API (scheduled for public release later this year), Amazon has just made EC2 a dead-easy buy-in.


Sunday, February 10, 2008

A Closer Look: Amazon S3

One of the most mature Amazon web services, Amazon Simple Storage Service (S3) provides virtually unlimited data storage. That's right: you can upload as much data as you'd like, and it will be held on their machines with all the network capacity you could ever want and with redundancy built in. Hard drive failures are easily the primary cause of server downtime, and Amazon has taken on the burden of managing all those devices and the failures that go along with them. As the name implies, the service is designed to provide simple access, so you can't do funky things like mount the virtual filesystem directly.

I've been using S3 for over a year and I haven't had any reliability issues with it. Others have had brief outages but they were mostly when the service was first introduced. I'm quite happy with S3 but there's one missing feature that keeps it from being the ultimate simple storage service: range-PUT.

Suppose I've got a file on S3 and I want to update a small part of it. Without range-PUT, I would normally have to transfer the entire file again using the HTTP PUT method to store it on the remote host. With the Content-Range header, I could specify just the range of bytes that have changed within the file and transfer only that portion. This feature would save a lot of bandwidth (and, consequently, money) when files are often modified only partially.
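To make the idea concrete, here's a hypothetical request showing what such a partial update could look like. S3 does not actually support this; the bucket, key, offsets, and auth header are all placeholders:

```python
# Hypothetical: a PUT that replaces only bytes 4096-4116 of an
# existing 5,000,000-byte object, if S3 honoured Content-Range.
import http.client

changed = b"TAGv2: new album name"  # the modified bytes
start = 4096                        # offset of the first changed byte
end = start + len(changed) - 1
total = 5_000_000                   # full size of the object

conn = http.client.HTTPConnection("mybucket.s3.amazonaws.com")
conn.request(
    "PUT",
    "/song.mp3",
    body=changed,
    headers={
        "Content-Range": f"bytes {start}-{end}/{total}",
        "Content-Length": str(len(changed)),
        "Authorization": "AWS <access-key>:<signature>",  # placeholder
    },
)
response = conn.getresponse()
print(response.status, response.reason)
```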

Of course, supporting Content-Range opens up a can of worms. What happens if the file doesn't exist and the start of my range isn't offset 0? What if the file does exist but the start offset is beyond the end of the file (i.e. not a simple append)? I can think of two solutions that seem reasonable: return an error, or create the file if it doesn't exist and zero-pad the holes. The former would be easier to implement, while the latter would behave like Linux sparse files.
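Here's a toy model of the zero-padding option applied to an in-memory byte string; a real store would of course do this on its backend, but the semantics are the same:

```python
# Sketch of zero-padding semantics for a range-PUT on a byte string.
def apply_range_put(existing: bytes, offset: int, data: bytes) -> bytes:
    if offset > len(existing):
        # Fill the hole with zeros, like a Linux sparse file.
        existing = existing + b"\x00" * (offset - len(existing))
    return existing[:offset] + data + existing[offset + len(data):]

# A PUT past end-of-file creates a zero-filled gap:
assert apply_range_put(b"abc", 5, b"XY") == b"abc\x00\x00XY"
# An in-place update overwrites just the changed range:
assert apply_range_put(b"abcdef", 2, b"XY") == b"abXYef"
```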

There are two major classes of application that range-PUT would suit. The first is applications that only ever append to the end of the file. Log files fit this category but, more importantly, so does resuming broken transfers: when uploading large files (S3 supports file sizes of up to 5GB), I've found that my connections often get dropped, so if I could just append to an existing file, I could write an upload tool that auto-resumes (sketched below).

The second class is applications that update only part of a file, which in most cases would mean changing some file metadata. For example, if I modify the metadata for my MP3 file, I'd rather upload the few changed bytes than the whole MP3 again. The music is the same; it's just the metadata that has changed. The problem is even worse when dealing with video files.
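Here's a minimal sketch of that auto-resuming uploader, assuming appends were allowed. The put_range(offset, data) helper is hypothetical; it would issue a Content-Range PUT like the one sketched earlier, and remote_size would come from a HEAD request on the object:

```python
# Sketch of an auto-resuming uploader, assuming S3 allowed appends
# via Content-Range. `put_range` is a hypothetical helper.
CHUNK = 1 << 20  # upload 1 MiB per request

def resume_upload(path, remote_size, put_range):
    """Send only the bytes the server doesn't already have."""
    with open(path, "rb") as f:
        f.seek(remote_size)  # skip the part that already made it
        offset = remote_size
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            put_range(offset, chunk)  # append to the remote file
            offset += len(chunk)
```

If a connection drops, you just call resume_upload again: it picks up from whatever the server reports it has, instead of starting the whole 5GB over.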

S3 is a fantastic storage service. It's reliable, it's cheap, and it takes away the hassle of managing your own hardware or creating a highly-available, redundant persistent store. If S3 supported range-PUT, it would save a huge amount of bandwidth, resulting in an even lower cost of operation.
