
Thursday, October 15, 2009

Petabyte Explosion: How Caltech Manages to Manage Billions of Files

Managing billions of small files effectively requires a clear understanding of data flows and a system based on common Lego-like building blocks that provide services to application owners.

This was the message at the September 29th, 2009 Peer Incite Research Meeting, where an industry practitioner, Eugene Hacopians, Senior Systems Engineer at the California Institute of Technology (Caltech), addressed the Wikibon community.

Caltech is the academic home of NASA's Jet Propulsion Laboratory. As such, it runs the downlink for the Spitzer Space Telescope, NASA's orbital space telescope, as well as 13 other missions; processes the raw data into images; and supports the needs of scientists visiting from locations worldwide. The focus of this discussion was the activities of the Infrared Processing and Analysis Center (IPAC), which has evolved to become the national archive for infrared analysis from telescopic space missions.


To be sure, Caltech's needs are on the edge. The organization is the steward for more than 2.3 petabytes of data created from its 14 currently active missions. Caltech captures data from these missions and performs intense analysis in what it calls its 'Sandbox', a server and storage infrastructure that supports scientific applications that analyze the data. Once 'crunched,' the data is moved to an archive, using homegrown data movement software.
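The article doesn't describe Caltech's homegrown data-movement software, but the sandbox-to-archive step it outlines can be sketched in a few lines. Everything here is an illustrative assumption — the directory layout, the SHA-256 verification, and the `migrate` helper are not Caltech's actual code:

```python
import hashlib
import os
import shutil

def checksum(path, chunk=1 << 20):
    """SHA-256 of a file, read in chunks so large image files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def migrate(sandbox_dir, archive_dir):
    """Copy each 'crunched' file into the archive, verify it, then delete the original."""
    for root, _dirs, files in os.walk(sandbox_dir):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, sandbox_dir)
            dst = os.path.join(archive_dir, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(src, dst)
            if checksum(src) != checksum(dst):
                raise IOError(f"verification failed for {rel}")
            os.remove(src)  # free Sandbox capacity only after a verified copy
```

The key design point the article implies — the Sandbox is transient and the archive is authoritative — is why the sketch deletes the source only after the archive copy verifies.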

The team at Caltech had to design a cost-effective means of providing reliable access to all this scientific data. Organizationally, moreover, the projects Caltech supports had to be completely walled off from one another from an accounting standpoint. Rather than implement a shared SAN infrastructure with onerous chargeback mechanisms, Caltech decided to use a common set of technologies to support each project. The technological building blocks are:

- A Sun Solaris server running the ZFS file system
- A QLogic 5602 FC switch
- One to three Nexsan SATA Beast arrays

Caltech uses Nexsan's AutoMAID spin-down capability in its archive to reduce energy costs, employing Level 1 (slowing the disks' rotational speed) and Level 2 (parking the heads after sufficient inactivity). It does not put the drives into sleep mode (Level 3) and has never had reliability problems associated with spinning down drives.

Caltech uses SAIT tape for long-term archiving and last-resort off-site disaster recovery. However, its own tests indicate that, because of the huge number of small files involved, recovery from tape would take weeks or longer.
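A back-of-envelope calculation shows why per-file overhead, not raw bandwidth, dominates a small-file restore from tape. Every number below is an illustrative assumption — the article gives no figures beyond "billions of files":

```python
# Rough model: restore time dominated by per-file positioning/metadata cost.
# All values are illustrative assumptions, not Caltech's measurements.
n_files = 2_000_000_000      # order of magnitude: "billions of files"
per_file_overhead_s = 0.01   # assumed per-file cost (catalog lookup, positioning)
streams = 8                  # assumed number of tape drives restoring in parallel

wall_clock_s = n_files * per_file_overhead_s / streams
wall_clock_days = wall_clock_s / 86_400
print(f"~{wall_clock_days:,.0f} days")  # → ~29 days
```

Even with generous parallelism and only a 10 ms per-file cost, the restore runs to weeks — consistent with Caltech's own test results.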

This building block approach has allowed Caltech to use common configurations across its infrastructure. Caltech derives four main benefits from this strategy:

1. The infrastructure is architected for fast, simple, safe recovery from failure or data loss.
2. The approach scales nicely in support of Caltech's data growth, which arrives in large chunks of hundreds of TBs and billions of files at a time.
3. It streamlines staff training.
4. The "Lego" building-block method allows Caltech to reuse infrastructure when it comes off maintenance, providing a large pool of spares and saving money.

Caltech uses a cascading refresh approach when new infrastructure is purchased, placing the newest equipment behind the most critical parts of the infrastructure and migrating older equipment to less mission-critical areas. The archive is the most critical tier, both because it houses the massive numbers of files that scientists access for their research and because it is regarded as a national archive whose contents must be kept indefinitely. The Sandbox infrastructure is the least critical because data is quickly migrated off it into the archive.
Click here for the entire story:
http://wikibon.org/vault/Petabyte_Explosion:_How_Caltech_Manages_to_Manage_Billions_of_Files