The ROMEO HPC Center in Champagne-Ardenne (France) is pleased to announce the official commissioning of its new supercomputer at the end of October 2013.
A call for proposals for project access (méso-challenges) on the new hybrid cluster is now open to the scientific community in Europe and around the world. This call is sponsored by Equip@meso (a research project managed by GENCI) in partnership with BULL (hardware provider and integrator of the supercomputer) and NVIDIA (provider of the Tesla accelerators).
Five million compute hours will be allocated to projects that demonstrate a need for a significant portion, or the totality, of this unique scientific platform. Projects will be selected after peer review by the ROMEO Scientific Committee.
Selection criteria for the peer-review process:
- computationally intensive projects that would not be possible or productive without access to such resources;
- large-scale hybrid computation requirements suited to the NVIDIA Tesla K20X accelerators;
- demonstrated scientific excellence and commitment to fulfilling the challenge;
- the project's potential for access to national and European HPC centers.
The ROMEO technical team, together with BULL and NVIDIA engineers, will provide user support to selected projects for the entire duration of the challenge. Full access to the HPC center's software licenses will be granted. Access is devoted solely to open R&D purposes, and the ROMEO terms of use govern all access to and use of the resources.

All papers or publications that include results obtained through access to the ROMEO platform must include an acknowledgment statement, as specified in the terms of use. Selected projects may be cited in dissemination material (press releases, workshop presentations, …). Selected applicants agree to present their results at at least one event organized by the ROMEO HPC Center in 2014.

Scheduling of the selected projects is subject to technical constraints, and the provisional quota of 5 million compute hours may be updated and reached in several stages.
Description of the ROMEO cluster available for the méso-challenges:
- 130 Bullx R421 E3 servers, each with:
  - 2 NVIDIA Tesla K20X accelerators
  - 2 Intel Ivy Bridge 8-core 2.6 GHz CPUs (Xeon E5-2650 v2)
  - 32 GB of DDR3 memory
- Interconnect: non-blocking InfiniBand QDR with GPUDirect
- Scratch file system: Lustre, 87 TB
- Home file system: NFS, 58 TB
- A comprehensive software environment for visualization, compilation, debugging and profiling
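To put the 5-million-hour allocation in perspective against the hardware above, here is a rough back-of-the-envelope estimate. This is only a sketch: it assumes compute hours are accounted as CPU core-hours, which may not match the center's actual accounting rules.

```python
# Rough scale of the 5 million compute-hour allocation.
# Assumption (not stated in the call): hours are counted as CPU core-hours.
nodes = 130
cores_per_node = 16  # 2 x 8-core Ivy Bridge CPUs per node

total_cores = nodes * cores_per_node        # 2080 cores in total
allocation_hours = 5_000_000

# Time to consume the allocation if the whole machine ran one project
full_machine_hours = allocation_hours / total_cores
full_machine_days = full_machine_hours / 24

print(f"{total_cores} cores; allocation = "
      f"{full_machine_days:.0f} days of full-machine use")
```

In other words, the allocation corresponds to roughly 100 days of exclusive use of all CPU cores, which is consistent with the call's emphasis on projects needing a significant portion of the platform over several stages.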
For any questions, contact us: romeo _at_ univ-reims.fr