Resources
Resource Summary
HPC4Research (BIH Cluster in the Research Network)
- General purpose nodes
  - approx. 200 nodes (different generations) with 16 cores/32 threads and 128-192 GB RAM each
  - InfiniBand interconnect for 32 of these
  - additional 32 nodes contributed by power users
- 4 high-memory nodes
  - 2 nodes with 0.5 TB RAM
  - 2 nodes with 1.0 TB RAM
  - 80 cores each
- 5 GPU nodes
  - 4 nodes with 4 V100 cards each
  - 1 older node with K20m cards
- 2 PB fast GPFS/Spectrum Scale storage
  - DDN hardware with native client access; 16x 10 Gb/s Ethernet
- Scheduling
  - Slurm (see the job script sketch below)
  - some SGE (to be retired soon)
- Authentication with Charite or MDC account
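
Slurm is the primary scheduler on HPC4Research. The following is a minimal sketch of a batch job script; the partition name "medium", the resource values, and the file name job.sh are illustrative assumptions, not confirmed cluster settings.

    #!/bin/bash
    #SBATCH --job-name=example        # name shown in the queue
    #SBATCH --partition=medium        # ASSUMPTION: partition name, list real ones with sinfo
    #SBATCH --ntasks=1                # one task (process)
    #SBATCH --cpus-per-task=4         # 4 of the 16 cores/32 threads on a general purpose node
    #SBATCH --mem=16G                 # well below the 128-192 GB available per node
    #SBATCH --time=08:00:00           # wall-clock limit
    #SBATCH --output=slurm-%j.log     # stdout/stderr, %j expands to the job ID

    srun hostname                     # replace with the actual workload

Submit with "sbatch job.sh" and monitor with "squeue -u $USER".
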
HPC4Clinic (within the Charite network)
- 24 compute nodes with 48 cores and 384 GB RAM each
- 3 GPU nodes with 4 V100 cards each (see the GPU job sketch below)
- 1.2 PB scale-out storage (Isilon)
- Slurm scheduler
- Authentication with Charite account
- Currently in test operation
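
On both clusters, Slurm typically exposes GPUs through its generic resource (GRES) mechanism. A minimal sketch of requesting a single V100 follows; the partition name "gpu" and the GRES name are assumptions and may differ on the actual systems.

    #!/bin/bash
    #SBATCH --job-name=gpu-example
    #SBATCH --partition=gpu           # ASSUMPTION: GPU partition name
    #SBATCH --gres=gpu:1              # request 1 of the 4 V100 cards in a node
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=32G
    #SBATCH --time=04:00:00

    # Slurm sets CUDA_VISIBLE_DEVICES to the allocated card(s)
    srun nvidia-smi                   # replace with the actual GPU workload
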
Other Resources
- For incoming omics data: 0.5 PB file server (Dell hardware with ZFS)
- For secure file exchange with external partners: DMZ server

DDN Storage - 2 PB