Structure

The four main component layers of the Worldwide LHC Computing Grid (WLCG) are networking, hardware, middleware and physics analysis software.

Networking

Perhaps the most impressive component of the WLCG is its networking and connectivity: without networking, there would be no WLCG. Thanks to the excellent connectivity and dedicated networking infrastructure set up at CERN and subsequently worldwide, the WLCG can distribute data to the hundreds of collaborating institutes across the globe.

CERN-Wigner Tier-0 connection

The two 100 Gb/s lines linking CERN and Wigner (Budapest, Hungary) create a combined 'Tier-0' with a network latency (the time it takes a packet of data to travel from source to destination) of a mere 25 milliseconds between the two sites. Tier-0 can even take advantage of CERN's own internet exchange point, the CIXP, to pass data straight onto the dedicated networks for global exchange.
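
As a rough back-of-the-envelope check on that figure (assuming a fibre route on the order of 1500 km between Geneva and Budapest, and light travelling at roughly two-thirds of its vacuum speed inside optical fibre; both numbers are illustrative assumptions), propagation alone accounts for several milliseconds, with the rest coming from the extra length of the real fibre path and the switching equipment along it:

```latex
t_{\text{propagation}} \approx \frac{d}{\tfrac{2}{3}\,c}
  \approx \frac{1.5 \times 10^{6}\ \text{m}}{2 \times 10^{8}\ \text{m/s}}
  \approx 7.5\ \text{ms}
```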

LHC Optical Private Network (LHCOPN)

CERN is connected to each of the Tier 1s around the world on a dedicated, private, high-bandwidth network called the LHC Optical Private Network (LHCOPN).

This consists of optical-fibre links working at 10 gigabits per second, spanning oceans and continents. 

You can see the current real-time activity of the LHCOPN on its live monitoring page: hover over the connection of interest and the latest statistics are displayed. You can also see real-time activity in Google Earth by downloading the LHCOPN .kml file.

LHC Open Network Environment (LHCONE)

The aim of LHCONE is to provide a collection of access locations that are effectively entry points into a network that is private to the Tier 1, Tier 2 and Tier 3 sites.

Data Exchange

Exchanging data between the WLCG centres is managed by the Grid File Transfer Service (FTS), initially developed together with the Enabling Grids for E-sciencE (EGEE) projects from 2002 onwards. It has been tailored to support the special needs of grid computing, including authentication and confidentiality features, reliability and fault tolerance, and third-party and partial file transfer.
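
To illustrate what "third-party transfer" and "fault tolerance" mean in practice, here is a minimal conceptual sketch, not the real FTS interface: a lightweight client asks the source storage endpoint to copy a file directly to the destination, verifies the reported checksum, and retries with back-off on failure. The endpoint URLs, the request_third_party_copy helper and the checksum value are all hypothetical.

```python
# Conceptual sketch of a reliable third-party file transfer, in the spirit of
# the WLCG File Transfer Service. Endpoints and helpers are hypothetical,
# not the real FTS API.
import time

MAX_ATTEMPTS = 3

def request_third_party_copy(source_url: str, dest_url: str) -> str:
    """Simulated stand-in for a real storage-protocol call: in reality the
    source endpoint streams the file straight to the destination (the data
    never passes through this client) and the destination reports a checksum."""
    return "adler32:1a2b3c4d"   # pretend the copy succeeded

def transfer_with_retries(source_url: str, dest_url: str, expected_checksum: str) -> bool:
    """Retry a third-party copy until the destination checksum matches,
    giving up after MAX_ATTEMPTS (fault tolerance in miniature)."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            checksum = request_third_party_copy(source_url, dest_url)
            if checksum == expected_checksum:
                return True          # transfer verified
        except (ConnectionError, TimeoutError):
            pass                     # transient network error: try again
        time.sleep(2 ** attempt)     # exponential back-off between attempts
    return False

if __name__ == "__main__":
    ok = transfer_with_retries(
        "root://tier0.example.ch//data/run1234/file.root",   # hypothetical source
        "root://tier1.example.org//data/run1234/file.root",  # hypothetical destination
        expected_checksum="adler32:1a2b3c4d",
    )
    print("transfer succeeded" if ok else "transfer failed")
```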

A live view of the network throughput over the last 24 hours, along with more detailed statistics, is available at the WLCG Transfers Dashboard.

Hardware

Each Grid centre manages a large collection of computers and storage systems. Installing and regularly upgrading the necessary software manually is labour-intensive, so large-scale management systems (some, such as Quattor, developed at CERN) automate these services. They ensure that the correct software is installed from the operating system all the way to the experiment-specific physics libraries, and make this information available to the overall Grid scheduling system, which decides which centres are available to run a particular job.
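
Purely as an illustration of that idea (not Quattor or any real WLCG tool), the sketch below compares a node's installed software against a desired configuration and publishes what the node provides so a scheduler could judge its eligibility; the package names, versions and data structures are all made up.

```python
# Illustrative sketch of desired-state software management on a grid node.
# Not Quattor or any real WLCG tool: package names and structures are invented.

DESIRED_CONFIG = {
    "operating-system": "9.4",
    "experiment-physics-libs": "2.1.0",   # hypothetical experiment library bundle
    "grid-middleware": "5.0.2",
}

def reconcile(installed: dict) -> list:
    """Return the (package, wanted_version) pairs that must be installed or
    upgraded to bring this node to the desired state."""
    return [(pkg, wanted) for pkg, wanted in DESIRED_CONFIG.items()
            if installed.get(pkg) != wanted]

def advertise_to_scheduler(node: str, installed: dict) -> dict:
    """Publish what the node actually provides, so the Grid scheduling system
    can decide whether it may run a particular job."""
    return {"node": node,
            "software": dict(installed),
            "up_to_date": not reconcile(installed)}

if __name__ == "__main__":
    installed_now = {"operating-system": "9.4", "experiment-physics-libs": "2.0.3"}
    print("pending actions:", reconcile(installed_now))
    print("scheduler view:", advertise_to_scheduler("worker-node-001", installed_now))
```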

Each of the 13 Tier 1 centres also maintains disk and tape servers, which need to be upgraded regularly. These centres use specialized storage tools – such as the dCache system developed at the Deutsches Elektronen-Synchrotron (DESY) laboratory in Germany, the ENSTORE system at Fermilab in the US or the CERN advanced storage system (CASTOR) developed at CERN – to allow access to data for simulation and analysis independent of the medium (tape or disk) that the information is stored on.
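
A minimal sketch of the idea behind such storage systems, with an entirely hypothetical interface (none of the names below come from dCache, ENSTORE or CASTOR): the user asks for a file by its logical name, and the system transparently stages it from tape to disk if it is not already on disk.

```python
# Conceptual sketch of medium-independent data access: callers ask for a file
# by logical name and never care whether it lives on disk or tape.
# The catalogue contents and staging logic are hypothetical.

DISK_POOL = {"/lhc/run1234/event-data-001.root"}      # files already on disk
TAPE_ARCHIVE = {"/lhc/run1234/event-data-002.root"}   # files held only on tape

def stage_from_tape(logical_name: str) -> None:
    """Simulate recalling a file from the tape library onto the disk pool."""
    print(f"staging {logical_name} from tape...")
    DISK_POOL.add(logical_name)

def open_for_analysis(logical_name: str) -> str:
    """Return a disk path for the requested file, staging it first if needed."""
    if logical_name not in DISK_POOL:
        if logical_name not in TAPE_ARCHIVE:
            raise FileNotFoundError(logical_name)
        stage_from_tape(logical_name)
    return f"/diskpool{logical_name}"   # hypothetical physical location on disk

if __name__ == "__main__":
    # The first file is served straight from disk; the second triggers a tape recall.
    print(open_for_analysis("/lhc/run1234/event-data-001.root"))
    print(open_for_analysis("/lhc/run1234/event-data-002.root"))
```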

Middleware

Middleware is the software infrastructure that allows access to an enormous amount of distributed computing resources and archives, and is able to support powerful, complicated and time-consuming data analysis. This software is called "middleware" because it sits between the operating systems of the computers and the physics application software that solves a scientist's particular problem.

The most important middleware stacks used in the WLCG are from the European Middleware Initiative, which combines several middleware providers (ARC, gLite, UNICORE and dCache); the Globus Toolkit developed by the Globus Alliance; and the Virtual Data Toolkit.
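
Purely as an illustration of the brokering role that sits in this layer (this is not the interface of gLite, ARC, UNICORE or the Globus Toolkit), the sketch below matches a job's software and data requirements against what each centre advertises and picks a site; all site names, fields and the scoring rule are hypothetical.

```python
# Illustrative sketch of grid middleware brokering: match a job's requirements
# against what each computing centre advertises, then choose a site.
# Site names, fields and the selection rule are hypothetical.

SITES = [
    {"name": "tier1-a.example.org", "software": {"experiment-physics-libs": "2.1.0"},
     "free_cpu_slots": 120, "has_dataset": True},
    {"name": "tier2-b.example.edu", "software": {"experiment-physics-libs": "2.0.3"},
     "free_cpu_slots": 800, "has_dataset": False},
]

JOB = {"needs_software": {"experiment-physics-libs": "2.1.0"}, "needs_dataset": True}

def eligible(site: dict, job: dict) -> bool:
    """A site qualifies if it has the required software versions, holds the
    input dataset (when required), and has at least one free CPU slot."""
    software_ok = all(site["software"].get(pkg) == ver
                      for pkg, ver in job["needs_software"].items())
    data_ok = site["has_dataset"] or not job["needs_dataset"]
    return software_ok and data_ok and site["free_cpu_slots"] > 0

def broker(job: dict) -> str:
    """Choose the eligible site with the most free CPU slots."""
    candidates = [s for s in SITES if eligible(s, job)]
    if not candidates:
        raise RuntimeError("no site can run this job")
    return max(candidates, key=lambda s: s["free_cpu_slots"])["name"]

if __name__ == "__main__":
    print("job dispatched to:", broker(JOB))
```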

Physics analysis software

To analyse the big data that the LHC produces, physicists need software tools that go beyond what is commercially available. The immense and changing demands of the high-energy physics environment require dedicated software to analyse vast amounts of data as efficiently as possible.

The main physics analysis software is ROOT, a set of object-oriented core libraries used by all the LHC experiments. It is a versatile open-source tool, developed at CERN and Fermilab (USA), and used for big data processing, statistical analysis, visualisation and storage.

Example of a plot created with ROOT (Image: ROOT)
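
As a minimal example of the kind of task ROOT is used for, the PyROOT snippet below fills a histogram with randomly generated values, fits it with a Gaussian and saves the plot; the histogram name, binning and output file name are arbitrary choices for illustration, and a local ROOT installation with Python bindings is assumed.

```python
# Minimal PyROOT example: fill a histogram with random values,
# fit a Gaussian and save the plot to a file.
import ROOT

ROOT.gROOT.SetBatch(True)                      # no GUI window; just write the file

# 100-bin histogram of a quantity x in the range [-4, 4]
hist = ROOT.TH1F("hist", "Gaussian example;x;Entries", 100, -4.0, 4.0)
hist.FillRandom("gaus", 10000)                 # 10 000 values from a standard Gaussian

hist.Fit("gaus")                               # fit a Gaussian to the filled histogram

canvas = ROOT.TCanvas("canvas", "ROOT example", 800, 600)
hist.Draw()
canvas.SaveAs("gaussian_example.png")          # arbitrary output file name
```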