Available Resources

The ICEI project is currently able to provide researchers with access to the resources listed below. Each entry gives the resource name, the hosting site, the total ICEI share (100%), and the minimum allocation, followed by its technical details.

Scalable Computing Services

Piz Daint Multicore | CSCS (CH) | Total ICEI (100%): 250 nodes | Minimum allocation: 1 node

- Memory per node: 64 GB, 128 GB
- Compute nodes/processors: 1813 Cray XC40 nodes, each with two Intel® Xeon® E5-2695 v4 @ 2.10 GHz CPUs (2x 18 cores)
- Interconnect configuration: Cray Aries


JUSUF | JSC (DE) | Total ICEI (100%): 187 nodes | Minimum allocation: 1 node

- CPU: 2x AMD EPYC "Rome" 7742 (2x 64 cores, 2.2 GHz base clock)
- CPU memory: 256 GB DDR4-3200
- GPU (some nodes): 1x NVIDIA Volta V100 with 16 GB memory
- Local storage: 960 GB NVMe drive
- Interconnect: HDR100
Galileo100 | CINECA (IT) | Total ICEI (100%): 340 nodes | Minimum allocation: 1 node

- Thin nodes
Interactive Computing Services
Piz Daint Hybrid | CSCS (CH) | Total ICEI (100%): 400 nodes | Minimum allocation: 1 node

- Memory per node: 64 GB
- GPU memory: 16 GB CoWoS HBM2
- Compute nodes/processors: 5704 Cray XC50 nodes with Intel® Xeon® E5-2690 v3 @ 2.60 GHz (12 cores) CPUs and NVIDIA® Tesla® P100 GPUs
- Interconnect configuration: Cray Aries


JUSUF | JSC (DE) | Total ICEI (100%): 2 nodes | Minimum allocation: 1 node

- See above under "Scalable Computing Services"
- Interconnect: HDR100
Interactive Computing Cluster | CEA (FR) | Total ICEI (100%): 32 nodes | Minimum allocation: 1 node

- 30 nodes with 2 CPUs (18 cores @ 2.6 GHz each), 1 GPU (NVIDIA V100, 32 GB), and 384 GB of RAM
- 2 nodes with extra-large memory (3072 GB), 4 CPUs, and 1 GPU (NVIDIA V100, 32 GB)
- InfiniBand HDR100 interconnect

Interactive Computing Cluster | BSC (ES) | Total ICEI (100%): 3 nodes | Minimum allocation: 1 node

- Nodes for interactive access with dense memory
Galileo100 | CINECA (IT) | Total ICEI (100%): 214 nodes | Minimum allocation: 1 node

- Fat nodes + GPU nodes
Virtual Machine (VM) Services
OpenStack Cluster | CSCS (CH) | Total ICEI (100%): 35 servers | Minimum allocation: 1 VM

- 2 types of compute node:
     - Type 1: CPU: 2x Intel E5-2660 v4 (14 cores); RAM: 512 GB
     - Type 2: CPU: 2x Intel® Xeon® E5-2667 v3 @ 3.20 GHz (8 cores); RAM: 768 GB
- VMs are available in various flavours with up to 16 cores


JUSUF | JSC (DE) | Total ICEI (100%): 16 nodes | Minimum allocation: 1 VM

- See above under “Scalable Computing Services”
- Interconnect: 40GE


OpenStack Compute Cluster | CEA (FR) | Total ICEI (100%): 20 servers | Minimum allocation: 1 VM

- 20 nodes with 2 CPUs (18 cores @ 2.6 GHz each) and 192 GB of RAM
Nord3 | BSC (ES) | Total ICEI (100%): 84 nodes | Minimum allocation: 1 node

- Intel Sandy Bridge cluster that can be used as a VM host or as a scalable cluster; 84 dx360 M4 nodes
- Each node has the following configuration: 2x Intel Sandy Bridge-EP E5-2670/1600 (8 cores @ 2.6 GHz, 20 MB cache) and 32 GB RAM

Galileo100 OpenStack cluster | CINECA (IT) | Total ICEI (100%): 77 nodes | Minimum allocation: 1 VM

- 2x Intel Cascade Lake 8260 CPUs (24 cores, 2.4 GHz)
- 768 GB DDR4-2933 RAM
- 2 TB SSD
Archival Data Repositories
Store (POSIX and object), including backup on tape library (2x) | CSCS (CH) | Total ICEI (100%): 4000 TB | Minimum allocation: 1 TB

Store filesystem | CEA (FR) | Total ICEI (100%): 7500 TB+ | Minimum allocation: 1 TB

- HSM system (Lustre with an HPSS backend)

Swift/OpenIO ARD | CEA (FR) | Total ICEI (100%): 7000 TB | Minimum allocation: 1 TB

- OpenIO object store with a Swift interface

Agora | BSC (ES) | Total ICEI (100%): 6000 TB | Minimum allocation: 1 TB

- HSM system with an object storage interface based on Spectrum Scale and Spectrum Archive technology

Archival Data Repository | CINECA (IT) | Total ICEI (100%): 10000 TB (TBD) | Minimum allocation: 1 TB

- Object storage with an S3 interface
Active Data Repositories
Low-latency storage tier (DataWarp) | CSCS (CH) | Total ICEI (100%): 80 TB | Minimum allocation: 1 TB

- Non-volatile memory

HPST @ JUELICH | JSC (DE) | Total ICEI (100%): 2000 TB | Minimum allocation: 10 TB

- Flash-based data cache based on DDN's IME technology

Work filesystem | CEA (FR) | Total ICEI (100%): 3500 TB | Minimum allocation: 1 TB

- Lustre filesystem

Flash filesystem | CEA (FR) | Total ICEI (100%): 970 TB | Minimum allocation: 1 TB

- All-flash Lustre filesystem

HPC Storage @ BSC | BSC (ES) | Total ICEI (100%): 70 TB | Minimum allocation: 1 TB

- GPFS storage accessed from the HPC clusters

HPC Storage @ CINECA | CINECA (IT) | Total ICEI (100%): 10500 TB | Minimum allocation: 1 TB

- Storage for hot data (with DDN IME) accessed from the SCC, IAC and VM services


Fenix Virtual Machine Services Models


For detailed information on the Fenix Virtual Machine (VM) Services Models, download the document in PDF. It describes the Fenix VM Models and may be useful to potential applicants.



Copyright 2023 © All Rights Reserved - Legal Notice
Fenix has received funding from the European Union's Horizon 2020 research and innovation programme through the ICEI project under the grant agreement No. 800858.