Infrastructure

The ICEI project plans to deliver a set of e-infrastructure services that will be federated to form the Fenix Infrastructure. The distinguishing characteristic of this e-infrastructure is that data repositories and scalable supercomputing systems will be in close proximity and well integrated.

This section summarises the main information about the infrastructure.

Architectural Concepts

  • Service-oriented provisioning of resources
  • Focus on infrastructure services meeting the requirements of various science communities
  • Support for community specific platforms on top of these services
  • Encouragement and facilitation of community efforts
  • Federation of infrastructure services to:
    • Enhance availability of infrastructure services
    • Optimise for data locality
    • Broaden variety of available services

Fenix Communities

Service-oriented provisioning of computing and storage resources within Fenix aims at supporting science communities that develop, deploy and operate domain-specific platform services. These services will run on top of the Fenix infrastructure services. The Human Brain Project will be the prime and lead customer of ICEI. Other science and engineering communities will be provided access through PRACE.

Resource Allocation Model

  • Fenix Resource Providers provide resources to Fenix Communities
  • Fenix Communities distribute resources to their users via a peer-review process

Planned Resources

The ICEI project expects to be able to provide researchers with access to the resources listed in the table below by the end of 2019:

Component                        Site (Country)  Total ICEI (100%)  Minimum request

Scalable computing services
  Piz Daint Multicore            CSCS (CH)       250 nodes          1 node

Interactive computing services
  ICCP@JUELICH                   JSC (DE)        175 nodes          1 node
  Interactive Computing Cluster  CEA (FR)        60 nodes           1 node
  Piz Daint Hybrid               CSCS (CH)       400 nodes          1 node
  T.B.D.                         CINECA (IT)     350 nodes          1 node
  T.B.D.                         BSC (ES)        6 nodes            1 node

VM services
  ICCP@JUELICH                   JSC (DE)        25 nodes           1 VM
  Openstack compute node         CEA (FR)        600 VM (20 nodes)  1 VM
  Pollux Openstack compute node  CSCS (CH)       35 nodes           1 VM
  Nord3                          BSC (ES)        84 nodes           1 node

Archival data repositories
  Archival                       CEA (FR)        7000 TB            0
  Archival Data Repository       CSCS (CH)       4000 TB            1 TB
  T.B.D.                         CINECA (IT)     5000 TB            1 TB
  T.B.D.                         BSC (ES)        6000 TB            1 TB

Active data repositories
  HPST@JUELICH                   JSC (DE)        1 PB               10 TB
  Lustre Flash                   CEA (FR)        800 TB             1 TB
  Data Warp                      CSCS (CH)       80 TB              1 TB
  T.B.D.                         CINECA (IT)     350 TB             1 TB
  T.B.D.                         BSC (ES)        70 TB              1 TB
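For readers who want to work with the figures above programmatically, the node-based entries of the table can be encoded as plain records and aggregated per site. The sketch below is illustrative only: the record layout and function name are our own, not part of any official Fenix API.

```python
# Illustrative encoding of (a subset of) the ICEI planned-resource table.
# The field layout here is our own convention, not an official Fenix format.
from collections import defaultdict

PLANNED_RESOURCES = [
    # (service category, component, site, amount, unit)
    ("Scalable computing",    "Piz Daint Multicore",           "CSCS (CH)",   250, "nodes"),
    ("Interactive computing", "ICCP@JUELICH",                  "JSC (DE)",    175, "nodes"),
    ("Interactive computing", "Interactive Computing Cluster", "CEA (FR)",     60, "nodes"),
    ("Interactive computing", "Piz Daint Hybrid",              "CSCS (CH)",   400, "nodes"),
    ("Interactive computing", "T.B.D.",                        "CINECA (IT)", 350, "nodes"),
    ("Interactive computing", "T.B.D.",                        "BSC (ES)",      6, "nodes"),
]

def nodes_per_site(resources):
    """Sum the node counts per site for the node-based entries above."""
    totals = defaultdict(int)
    for _category, _component, site, amount, unit in resources:
        if unit == "nodes":
            totals[site] += amount
    return dict(totals)

print(nodes_per_site(PLANNED_RESOURCES))
# e.g. CSCS (CH) contributes 250 + 400 = 650 nodes in this subset
```

The same record layout extends naturally to the VM and storage rows by using "VM" or "TB" as the unit.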


Should you be interested in applying to an upcoming call, please visit the PRACE website, where you will find more information on how to apply.