S3 Rocks!
Every day, a script on Syndic8 generates a backup file. This file grows in proportion to the number of feeds in the system and is currently about 1.3 GB in length. At some point I will need to generate multiple files, but for now I can handle files of this size.
As an experiment, I uploaded a recent backup file into Amazon’s new S3 storage system using the Perl / Curl sample built by the S3 team. This sample is a simple (and very elegant) Perl script which computes the proper S3 authentication parameters and then invokes the command-line version of the Curl utility to access S3. Since I had already created an S3 bucket, I used the following simple command to upload the file:
s3curl.pl --id=MY_ID --key=MY_KEY --put=MY_FILE.tar.bz -- http://s3.amazonaws.com/jeffbarr/backup/MY_FILE.tar.bz
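Under the covers, the script builds a short "string to sign" from the request and computes a Base64-encoded HMAC-SHA1 of it with the secret key; that value goes into the Authorization header. Here is a rough, untested sketch of the same PUT done by hand with openssl and curl (MY_ID and MY_KEY stand in for real credentials):

resource="/jeffbarr/backup/MY_FILE.tar.bz"
content_type="application/octet-stream"
# S3 wants an HTTP-style date, both in the Date header and in the string to sign
date=$(date -u '+%a, %d %b %Y %H:%M:%S GMT')
# String to sign: verb, Content-MD5 (empty here), Content-Type, date, resource
signature=$(printf 'PUT\n\n%s\n%s\n%s' "$content_type" "$date" "$resource" | openssl sha1 -hmac "MY_KEY" -binary | base64)
curl -T MY_FILE.tar.bz \
  -H "Date: ${date}" \
  -H "Content-Type: ${content_type}" \
  -H "Authorization: AWS MY_ID:${signature}" \
  http://s3.amazonaws.com/jeffbarr/backup/MY_FILE.tar.bz

The nice thing about the s3curl wrapper is that it takes care of all of this, so the command line stays as simple as the one shown above.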
The entire file was uploaded in just 23 minutes, for a net speed of roughly 1 megabyte per second. I verified that the file was present like this:
s3curl.pl --id=MY_ID --key=MY_KEY -- http://s3.amazonaws.com/jeffbarr
Note that the s3curl arguments each start with a pair of dashes, and that a standalone pair of dashes separates them from the final URL.
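As far as I can tell, a GET is the default when no --put is given, so pulling the backup back down for a round-trip check should look something like this (again untested; restored_MY_FILE.tar.bz is just an illustrative output name):

s3curl.pl --id=MY_ID --key=MY_KEY -- http://s3.amazonaws.com/jeffbarr/backup/MY_FILE.tar.bz > restored_MY_FILE.tar.bz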
This returned an XML structure, so I copied the output of the command to my public_html directory and opened it in Firefox.
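Another option, assuming the libxml2 tools are installed, is to skip the browser and pipe the listing straight through xmllint to pretty-print it:

s3curl.pl --id=MY_ID --key=MY_KEY -- http://s3.amazonaws.com/jeffbarr | xmllint --format -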
It would be very cool if S3 returned the MD5 checksum of the file. That way I could run md5sum locally, and then compare it to what S3 returned. I will talk to the development team about this in the very near future.
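Until then, an end-to-end check means pulling the file back down and comparing it against a digest recorded before the upload, along these lines:

# Before uploading: record the local digest
md5sum MY_FILE.tar.bz > MY_FILE.tar.bz.md5
# After re-downloading the file: verify the copy against the recorded digest
md5sum --check MY_FILE.tar.bz.md5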