WLCG – Worldwide LHC Computing Grid
Data pours out of the LHC detectors at a blistering rate. Even after filtering out 99% of it, there are still around 30 petabytes of data per year to deal with. That's 30 million gigabytes, the equivalent of nearly 9 million high-definition (HD) movies. The scale and complexity of data from the LHC are unprecedented. This data needs to be stored, easily retrieved and analysed by physicists all over the world, which requires massive storage facilities, global networking, immense computing power and, of course, funding.

CERN does not have the computing or financial resources to crunch all of the data on site, so in 2002 it turned to grid computing to share the burden with computer centres around the world. The result, the Worldwide LHC Computing Grid (WLCG), is a distributed computing infrastructure arranged in tiers – giving a community of over 10,000 physicists near real-time access to LHC data. The WLCG builds on the ideas of grid technology initially proposed in 1999 by Ian Foster and Carl Kesselman.

With physicists across the four main experiments – ALICE, ATLAS, CMS and LHCb – actively accessing and analysing data in near real-time, the computing system designed to handle the data has to be very flexible. WLCG provides seamless access to computing resources, including data storage capacity, processing power, sensors, visualization tools and more.

Users make job requests from one of the many entry points into the system. A job entails the processing of a requested set of data, using software provided by the experiments. The computing Grid establishes the identity of the user, checks their credentials, and searches for available sites that can provide the requested resources. Users do not have to worry about where the computing resources are coming from – they can tap into the Grid's computing power and access storage on demand.
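The job lifecycle described above – establish the user's identity, check their credentials, find an available site that can provide the requested resources, then dispatch the job – can be sketched in miniature. This is purely an illustrative toy, not WLCG software: every name here (Site, authenticate, match_site, submit_job) and the credential fields are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Site:
    """A hypothetical grid site with its spare capacity."""
    name: str
    tier: int
    free_cores: int
    free_storage_tb: float

def authenticate(user: str, credentials: dict) -> bool:
    # Stand-in for real grid credential checking; the WLCG uses
    # certificate-based authentication tied to an experiment (VO).
    return (credentials.get("cert_valid", False)
            and credentials.get("vo") in {"alice", "atlas", "cms", "lhcb"})

def match_site(sites, cores_needed: int, storage_needed_tb: float):
    # Pick the first site (lowest tier first) that can satisfy the
    # request; a real broker also weighs data locality and queue load.
    for site in sorted(sites, key=lambda s: s.tier):
        if (site.free_cores >= cores_needed
                and site.free_storage_tb >= storage_needed_tb):
            return site
    return None

def submit_job(user, credentials, sites, cores: int, storage_tb: float) -> str:
    if not authenticate(user, credentials):
        raise PermissionError("credential check failed")
    site = match_site(sites, cores, storage_tb)
    if site is None:
        raise RuntimeError("no site can satisfy the request")
    return f"job dispatched to {site.name} (Tier-{site.tier})"

# Example: the Tier-0 site is too busy, so the broker falls back to a Tier-1.
sites = [Site("CERN", 0, 8, 2.0), Site("FZK", 1, 64, 10.0)]
creds = {"cert_valid": True, "vo": "atlas"}
print(submit_job("physicist", creds, sites, cores=16, storage_tb=1.0))
# → job dispatched to FZK (Tier-1)
```

The point of the sketch is the user's perspective: the request names only the resources needed, and the broker decides where the work runs.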
This will be added to the tools section of Research Resources Subject Tracer™ Information Blog.