About the Worldwide LHC Computing Grid

The Worldwide LHC Computing Grid (WLCG) is a distributed computing infrastructure arranged in tiers. It gives a community of over 12 200 scientists of 110 nationalities, from institutes in more than 70 countries, near real-time access to LHC data.

The scale and complexity of the data from the LHC are unprecedented. This data needs to be stored, easily retrieved and analysed by physicists all over the world. This requires massive storage facilities, global networking, immense computing power, and, of course, funding.

Using the Grid

The WLCG computing model has been designed to be very flexible, to allow for worldwide access and analysis in near real-time.

WLCG provides seamless access to computing resources, which include data storage capacity, processing power, sensors, visualization tools and more. Users make job requests from one of the many entry points into the system. A job will entail the processing of a requested set of data, using software provided by the experiments.

The computing Grid establishes the identity of the user, checks their credentials, and searches for available sites that can provide the resources requested. Users do not have to worry about where the computing resources are coming from – they can tap into the Grid's computing power and access storage on demand.
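In practice, each experiment wraps this machinery in its own submission framework, but the flow can be illustrated with a short script. The sketch below is a minimal, illustrative example assuming the DIRAC workload-management middleware (one of the systems used to submit jobs to WLCG resources); the executable and file names are placeholders, and a valid grid proxy is assumed to exist already.

# Minimal job-submission sketch using the DIRAC Python API (illustrative only).
# Assumes the user's identity has already been established via a grid proxy,
# e.g. created beforehand with dirac-proxy-init.
from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job

job = Job()
job.setName("run3_analysis_example")                        # placeholder job name
job.setExecutable("analyse.sh", arguments="datasets.txt")   # experiment-provided software (placeholder)
job.setInputSandbox(["analyse.sh", "datasets.txt"])         # files shipped with the job
job.setOutputSandbox(["std.out", "std.err"])                # files returned to the user

# Submission: the workload-management system matches the job to an available
# site that can provide the requested resources; the user does not pick the site.
dirac = Dirac()
result = dirac.submitJob(job)
print(result)

Whatever middleware an experiment uses, the pattern is the same: describe the job and the data it needs, submit it, and let the grid decide where it runs.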

The journey of LHC data, from accelerator to detector, to the CERN Data Centre and out to the world via WLCG. (Video: CERN)

LHC Run 3 - 2022-2025

LHC Run 3 is now operational, scheduled for 2022-2025. It looks to be even more challenging than the previous runs: we expect to collect as much data in Run 3 alone as in Run 1 and Run 2 combined.

This of course translates into far more data sent out across the Worldwide LHC Computing Grid, to be made available to physicists across the globe. The computing requirements of ALICE, ATLAS, CMS and LHCb are evolving and increasing in step with the experiments’ physics programmes and the improved precision of the detectors’ measurements.

LHC Run 4: HiLumi-LHC, ~2029-2032

The High-Luminosity Large Hadron Collider (HL-LHC) project aims to crank up the performance of the LHC in order to increase the potential for discoveries after 2029.

Requirements for data and computing will grow dramatically during this period. The needs for processing, network transfer and storage are expected to increase many times over, well beyond what technology evolution alone will provide.

As a consequence, partnerships - such as those with CERN openlab and other R&D programmes - are essential to investigate how the computing models could evolve to address these needs. They will focus on applying more intelligence to filtering and selecting data as early as possible; on investigating the distributed infrastructure itself (the grid) and how best to make use of available technologies and opportunistic resources (grid, cloud, HPC, volunteer computing, etc.); and, most notably, on improving software performance to optimise the overall system.

In 2017, the High Energy Physics community produced a roadmap Community White Paper (arXiv:1712.06982) that explores these software challenges and describes a number of potential mechanisms by which the community can address the problems of capacity and efficiency it will face during the next decade.