HPF Overview

Compute Nodes    Compute Threads    Total RAM (TB)    Total Disk (TB)
268              10,272             34.3              201

Compute Nodes

CPU Architecture                            Number of Nodes    Number of Cores per Node    RAM per Node (GB)
Dual CPU Sandy Bridge E5-2670 @ 2.6 GHz     72                 32                          128
Dual CPU Ivy Bridge E5-2670 v2 @ 2.5 GHz    196                40                          128
Dual CPU Sandy Bridge E5-2670 @ 2.6 GHz     2                  64                          512
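
A quick way to cross-check these figures against the overview table is to multiply them out. The short Python sketch below uses only the node counts and per-node counts from the table above; it treats the per-node core counts as schedulable thread counts, which is consistent with the overview's total of 10,272 compute threads.

    # Per-node figures taken from the compute-node table above:
    # (number of nodes, per-node count)
    node_groups = [
        (72, 32),    # dual Sandy Bridge E5-2670 nodes
        (196, 40),   # dual Ivy Bridge E5-2670 v2 nodes
        (2, 64),     # large-memory 512 GB nodes
    ]

    total_threads = sum(nodes * threads for nodes, threads in node_groups)
    print(total_threads)  # 10272, matching the overview table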

Network

Standard compute nodes are distributed across 4 racks and networked with 12 x Mellanox SX6036 InfiniBand (IB) switches. This FDR InfiniBand network operates at 56 Gb/s and connects the compute nodes to the storage through an IB-to-Ethernet gateway. The Ethernet network is formed by 4 x Mellanox SX1024 switches operating at 40 or 10 Gb/s, using jumbo frames and a fully redundant network topology. Testing has shown that the cluster network can sustain a steady 80 Gb/s of throughput from the compute nodes to the storage.
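
To put the 80 Gb/s figure in context, the sketch below converts it to bytes per second and estimates how long a bulk transfer from the compute nodes to the storage would take at that rate. The 5 TB dataset size is an arbitrary example, not a measured workload.

    # Rough transfer-time estimate at the 80 Gb/s sustained rate quoted above.
    SUSTAINED_GBPS = 80                          # gigabits per second
    bytes_per_second = SUSTAINED_GBPS * 1e9 / 8  # = 10 GB/s

    dataset_bytes = 5e12                         # 5 TB example dataset (arbitrary)
    seconds = dataset_bytes / bytes_per_second
    print(f"~{seconds / 60:.1f} minutes")        # ~8.3 minutes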

Storage

Cluster data is stored on a 2.4 PB EMC Isilon cluster (24 Isilon X400 nodes). Each node has dual CPUs, 48 GB of DDR3 RAM, 100 TB of raw storage distributed across 36 x 3 TB SAS hard drives, 2 x 10 Gb/s network adapters for front-end data transfers, and a QDR InfiniBand card for back-end cluster services. The storage runs the OneFS operating system with a redundant N+2:2 protection configuration.
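
The per-node raw capacity multiplies out to the stated cluster total; a minimal cross-check in Python, using only the figures quoted above:

    # 24 Isilon X400 nodes x 100 TB raw capacity per node.
    nodes = 24
    raw_tb_per_node = 100

    total_pb = nodes * raw_tb_per_node / 1000  # TB -> PB (decimal units)
    print(total_pb)  # 2.4, matching the stated 2.4 PB cluster capacity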

The storage is attached to a 4.5 PB SpectraLogic tape system using LTO-6 tape technology, managed by Research IT. The tape library serves as an archive system using SGI DMF software and as a disaster-recovery site using EMC NetWorker. It is located in the 2nd-floor data centre of the SickKids PGCRL tower.