

In parallel to maintaining CASTOR in production, the CNAF storage team in 2006 started to search for a potentially more scalable, better-performing and robust solution. Several tests were performed on parallel file-systems such as the General Parallel File System (GPFS), a scalable, highly-available, high-performance file-system by IBM optimized for multi-petabyte storage management, as well as on Lustre (currently from Oracle). The CNAF storage team concluded this phase in early 2007 with a stress test comparing CASTOR and GPFS (Lustre had already been excluded in a direct comparison with GPFS) together with dCache and xrootd. As an outcome of those tests, GPFS qualified as the best solution, both in terms of ease of management and of outstanding throughput performance (roughly a factor of 2 better with respect to dCache and xrootd). At the same time, StoRM, an implementation of the Storage Resource Management (SRM) interface for POSIX file-systems (including GPFS and Lustre) conforming to the SRM 2.2 specifications, was being developed at INFN. After another round of tests on StoRM itself, aiming to demonstrate the scalability and robustness of the product, the CNAF storage team encouraged all experiments to progressively migrate from CASTOR to GPFS for the storage of data resident on disk (i.e. the so-called "D1T0" storage class: one copy on disk, no copy on tape). At the end of 2007, the StoRM/GPFS system was put into operation for ATLAS and LHCb. Both these experiments gained clear benefits from this migration, and the residual load on CASTOR (for the tape back-end storage) became smaller. On the other hand, CMS could not benefit from this change as much as the other experiments supported at CNAF, since in the CMS main workflows at Tier-1 sites the disk is configured as a buffer in front of the tape system. CMS contacts at CNAF constantly helped the CNAF storage team to keep the system in production for the CMS experiment (see e.g. ), and for all other experiments relying on it.
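For reference, the "DnTm" nomenclature used above follows the common WLCG convention of n disk copies and m tape copies; the small Python sketch below is purely illustrative of that mapping (the class and helper names are ours, not part of any CNAF or WLCG software).

from dataclasses import dataclass

@dataclass(frozen=True)
class StorageClass:
    # WLCG-style storage class: n copies on disk, m copies on tape.
    disk_copies: int
    tape_copies: int

    @property
    def label(self) -> str:
        return f"D{self.disk_copies}T{self.tape_copies}"

# Illustrative examples of the classes discussed in the text:
D1T0 = StorageClass(disk_copies=1, tape_copies=0)  # disk-resident data (the StoRM/GPFS case)
D0T1 = StorageClass(disk_copies=0, tape_copies=1)  # disk acting only as a buffer in front of tape (the CMS workflow at Tier-1 sites)

if __name__ == "__main__":
    for sc in (D1T0, D0T1):
        print(sc.label, sc.disk_copies, sc.tape_copies)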
