Structure

The four main component layers of the Worldwide LHC Computing Grid (WLCG) are networking, hardware, middleware and physics analysis software.

Networking

One of the most impressive components of the WLCG is its networking and connectivity; without networking, there would be no WLCG. Thanks to the excellent connectivity and dedicated networking infrastructure set up at CERN and extended around the world, the WLCG can distribute data to the hundreds of collaborating institutes worldwide.


Significant elements of WLCG Networks


CERN Internet Exchange Point (CIXP)

CERN has its own Internet Exchange Point (IXP). It was set up in 1989 to connect directly to major national and international networks. This helps to reduce costs, delays and the number of different networks (the number of 'hops') the data needs to pass through in order to reach its destination.

CERN's Tier 0 can take advantage of the CIXP to pass data straight onto the dedicated networks for global exchange.


LHC Optical Private Network (LHCOPN)

CERN is connected to each of the Tier 1s around the world on a dedicated, private, high-bandwidth network called the LHC Optical Private Network (LHCOPN).

This consists of optical-fibre links operating at between 10 and 100 gigabits per second, spanning oceans and continents.
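To put those figures in perspective, here is a short back-of-envelope calculation in Python (purely illustrative; real transfers lose some capacity to protocol overhead and to sharing the link with other traffic):

    # Back-of-envelope estimate of daily throughput on a 100 Gbit/s link.
    link_gbps = 100                            # link speed in gigabits per second
    bytes_per_second = link_gbps * 1e9 / 8     # gigabits/s -> bytes/s
    seconds_per_day = 24 * 60 * 60

    petabytes_per_day = bytes_per_second * seconds_per_day / 1e15
    print(f"~{petabytes_per_day:.1f} PB per day")   # ~1.1 PB per day

In other words, a single such link can in principle move on the order of a petabyte of data per day.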

You can see the current real-time activity of the LHCOPN here.


Data Exchange - FTS

Exchanging data between the WLCG centres is managed by the Grid File Transfer Service (FTS), initially developed together with the Enabling Grids for E-sciencE (EGEE) projects from 2002 onwards. It has been tailored to support the special needs of grid computing, including authentication and confidentiality features, reliability and fault tolerance, and third-party and partial file transfer.
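As a rough sketch of how a transfer is requested, the FTS3 service exposes Python bindings. The endpoint and file URLs below are placeholders, and the details of authentication (normally a grid proxy certificate) are omitted:

    # A minimal sketch using the FTS3 Python "easy" bindings (fts-rest client).
    # The endpoint and URLs are placeholders, not real locations.
    import fts3.rest.client.easy as fts3

    context = fts3.Context('https://fts3.example.org:8446')  # hypothetical endpoint

    # Describe one source -> destination copy
    transfer = fts3.new_transfer(
        'gsiftp://source.example.org/path/to/file',
        'gsiftp://dest.example.org/path/to/file')

    # Bundle transfers into a job; FTS takes care of retries and fault tolerance
    job = fts3.new_job([transfer], retry=3)
    job_id = fts3.submit(context, job)
    print('Submitted FTS job', job_id)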

Hardware

Each Grid centre manages a large collection of computers and storage systems. Installing and regularly upgrading the necessary software manually is labour intensive, so large-scale management systems (some, such as Quattor, developed at CERN) automate these services. They ensure that the correct software is installed from the operating system all the way to the experiment-specific physics libraries, and make this information available to the overall Grid scheduling system, which decides which centres are available to run a particular job.
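The matching of jobs to centres can be pictured with a toy sketch (illustrative Python only, not actual WLCG scheduler code): each centre publishes what software it has installed, and the scheduler selects only the centres that satisfy a job's requirements. The site names and library tags below are invented for the example.

    # Toy illustration of scheduling against published software information.
    site_software = {
        "Tier1-A": {"physics-libs-2.1", "geant4"},
        "Tier1-B": {"physics-libs-2.0"},
    }

    def eligible_sites(required):
        """Return the centres whose published software satisfies the job."""
        return [site for site, libs in site_software.items() if required <= libs]

    print(eligible_sites({"physics-libs-2.1"}))   # -> ['Tier1-A']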

Robots in the tape storage sector at CERN (Video: CERN)

Each of the Tier 1 centres also maintains disk and tape servers. These centres use specialised storage tools – such as the dCache system developed at the Deutsches Elektronen-Synchrotron (DESY) laboratory in Germany, the ENSTORE system at Fermilab in the US, or the CERN Advanced STORage manager (CASTOR) and EOS developed at CERN – to allow access to data for simulation and analysis independently of the medium (tape or disk) on which the information is stored.
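In practice, this medium independence means an analysis program simply opens a file by URL and lets the storage system worry about where the bytes live. Here is a hedged PyROOT example (the file URL is a placeholder) using the XRootD protocol, which EOS and other WLCG storage systems speak:

    # Opening a remote file over the XRootD protocol with PyROOT.
    # The URL is a placeholder; whether the data sits on disk or is staged
    # from tape is handled by the storage system, not the analysis code.
    import ROOT

    f = ROOT.TFile.Open("root://eospublic.cern.ch//eos/opendata/example.root")
    if f and not f.IsZombie():
        f.ls()        # list the objects stored in the file
        f.Close()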

Middleware

Middleware is the software infrastructure that allows access to an enormous amount of distributed computing resources and archives, and is able to support powerful, complicated and time-consuming data analysis. This software is called "middleware" because it sits between the operating systems of the computers and the physics application software that solves a scientist's particular problem.

The most important middleware stacks used in the WLCG are from the European Middleware Initiative, which combines several middleware providers (ARC, gLite, UNICORE and dCache); the Globus Toolkit developed by the Globus Alliance; and the Virtual Data Toolkit.

Physics analysis software

To analyse the vast quantities of data that the LHC produces, physicists need software tools that go beyond what is commercially available. The immense and changing demands of the high-energy physics environment require dedicated software to analyse this data as efficiently as possible.

CMS and LHCb combined data

The main physics analysis software is ROOT, a set of object-oriented core libraries used by all the LHC experiments. It is a versatile open-source tool, developed at CERN and Fermilab (USA), and used for big data processing, statistical analysis, visualisation and storage.
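A minimal PyROOT sketch gives a flavour of this: fill a histogram with toy data and fit a model to it. This is purely illustrative; a real analysis would read experiment data rather than random numbers.

    # Fill a histogram with Gaussian toy data and fit it, using PyROOT.
    import ROOT

    hist = ROOT.TH1F("mass", "Example distribution;mass [GeV];events",
                     100, 0.0, 10.0)
    rng = ROOT.TRandom3(42)
    for _ in range(10000):
        hist.Fill(rng.Gaus(5.0, 1.0))    # toy data centred around 5 GeV

    hist.Fit("gaus")                     # fit ROOT's built-in Gaussian model
    canvas = ROOT.TCanvas()
    hist.Draw()
    canvas.SaveAs("mass_fit.png")        # save the plot as an image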

For examples of ROOT-based analysis, see the ROOT gallery.