Planned Resources

The ICEI project expects to provide researchers with access to the resources listed below by the end of 2020:


Each entry below gives the component and site (country), the total ICEI resources (100%), the minimum request, technical details, and the expected* availability.
Scalable Computing Services

JUSUF, JSC (DE)

- Total ICEI (100%): 186 nodes; minimum request: 1 node
- CPU: 2x AMD EPYC "Rome" 7742 (2x 64 cores, 2.2 GHz base clock)
- CPU memory: 256 GB DDR4-3200
- GPU (some nodes): 1x NVIDIA Volta V100 with 16 GB of memory
- Local storage: 960 GB NVMe drive
- Interconnect: InfiniBand HDR100

CINECA-ICEI, CINECA (IT)

- Total ICEI (100%): > 400 nodes; minimum request: 1 node
- Thin nodes
- Fat nodes + GPU
- Low-latency interconnect
- Expected availability: September 2020 (delayed due to the COVID-19 suspension of the tendering procedure)
Interactive Computing Services

JUSUF, JSC (DE)

- Total ICEI (100%): 5 nodes; minimum request: 1 node
- Hardware: see JUSUF under "Scalable Computing Services"
- Interconnect: HDR100

Interactive Computing Cluster, CEA (FR)

- Total ICEI (100%): 32 nodes; minimum request: 1 node
- 30 nodes, each with 2 CPUs (18 cores @ 2.6 GHz each), 1 GPU (V100, 32 GB), and 384 GB of RAM
- 2 nodes with extra-large memory (3072 GB), 4 CPUs, and 1 GPU
- Interconnect: InfiniBand HDR100
- Expected availability: July 2020
Interactive Computing Cluster, CINECA (IT)

- Total ICEI (100%): size TBD, drawn from the scalable partition (see above); minimum request: 1 node
- Hardware: see above under "Scalable Computing Services"
- Software: subject to tender
- Expected availability: December 2020

Interactive Computing Cluster, BSC (ES)

- Total ICEI (100%): 6 nodes; minimum request: 1 node
- Nodes for interactive access with dense memory
- Expected availability: October 2020
Virtual Machine (VM) Services

JUSUF, JSC (DE)

- Total ICEI (100%): 4 nodes; minimum request: 1 VM
- Hardware: see JUSUF under "Scalable Computing Services"
- Interconnect: 40GE

OpenStack Compute Node, CEA (FR)

- Total ICEI (100%): 20 servers (up to 600 VMs); minimum request: 1 VM
- 20 nodes, each with 2 CPUs (18 cores @ 2.6 GHz each) and 192 GB of RAM
- Expected availability: September 2020
Nord3, BSC (ES)

- Total ICEI (100%): 84 nodes; minimum request: 1 node
- Intel Sandy Bridge cluster that can be used as a VM host or as a scalable cluster; 84 dx360 M4 nodes
- Each node: 2x Intel Sandy Bridge-EP E5-2670/1600 20M, 8 cores at 2.6 GHz, and 32 GB of RAM
- Expected availability: October 2020

CINECA-ICEI OpenStack Cluster, CINECA (IT)

- Total ICEI (100%): > 50 nodes; minimum request: 1 VM
- Interconnect: > 25GE
- Expected availability: September 2020 (delayed due to the COVID-19 suspension of the tendering procedure)
Archival Data Repositories

Store filesystem, CEA (FR)

- Total ICEI (100%): 7500+ TB; minimum request: 1 TB
- HSM system (Lustre with an HPSS backend)
- Availability: requestable

Swift/OpenIO, CEA (FR)

- Total ICEI (100%): 7000 TB; minimum request: 1 TB
- OpenIO object store with a Swift interface
- Expected availability: July/August 2020

Archival Data Repository, CINECA (IT)

- Total ICEI (100%): > 10000 TB; minimum request: 1 TB
- Technical details: TBD
- Expected availability: September 2020 (delayed due to the COVID-19 suspension of the tendering procedure)

Active Archive 2, BSC (ES)

- Total ICEI (100%): 6000 TB; minimum request: 1 TB
- HSM system with an object storage interface, based on Spectrum Scale and Spectrum Archive technology
- Expected availability: October 2020

Active Data Repositories

- Total ICEI (100%): 1 PB; minimum request: 10 TB

Work filesystem

- Total ICEI (100%): 3500 TB; minimum request: 1 TB
- Lustre filesystem
Flash filesystem, CEA (FR)

- Total ICEI (100%): 970 TB; minimum request: 1 TB
- Full-flash Lustre filesystem
- Expected availability: July 2020

HPC Storage @ CINECA

- Total ICEI (100%): > 1 PB; minimum request: 1 TB
- Expected availability: September 2020 (delayed due to the COVID-19 suspension of the tendering procedure)

HPC Storage @ BSC

- Total ICEI (100%): 70 TB; minimum request: 1 TB
- GPFS storage accessed from the HPC clusters
- Expected availability: October 2020

*The time frames given above are subject to change.


Fenix Virtual Machine Services Models


For detailed information on the Fenix Virtual Machine (VM) service models, download the document as a PDF. It describes the Fenix VM models and may be useful to potential applicants.


Fenix has received funding from the European Union's Horizon 2020 research and innovation programme through the ICEI project under the grant agreement No. 800858.