About ABCI

COMPUTING RESOURCES

ABCI consists of 1,088 computing nodes, 10 multi-platform nodes, a 22 PB large-capacity storage system, a high-speed InfiniBand network connecting the nodes and the storage system, administrative servers, and network systems.

□ ABCI System Outline

[Figure: ABCI system outline]

□ Block Diagram of the Computing Node


FUJITSU PRIMERGY Server (2 servers in 2U)

CPU: Intel Xeon Gold 6148 (27.5 MB cache, 2.40 GHz, 20 cores) ×2
GPU: NVIDIA Tesla V100 (SXM2) ×4
Memory: 384 GiB
Local Storage: 1.6 TB NVMe SSD (Intel SSD DC P4600, U.2) ×1
Interconnect: InfiniBand EDR ×2


[Figure: block diagram of the computing node]

Features

□ Computing Node

Each computing node accommodates two Intel Xeon Gold 6148 CPUs, four NVIDIA Tesla V100 GPUs (SXM2 form factor), a 1.6 TB NVMe-attached SSD, and 384 GiB of main memory.
The theoretical peak performance of each computing node is 506 AI-TFLOPS at half precision (as used in AI and machine-learning workloads) and 34.2 TFLOPS at double precision (as used in engineering and scientific computation).
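These node totals can be roughly reproduced from the per-device peaks. The Python sketch below is only a back-of-the-envelope check: the per-device numbers (Tesla V100 Tensor Core and FP64 peaks, AVX-512 throughput of the Xeon Gold 6148) are vendor figures assumed here rather than stated on this page, and the AI figure is assumed to count the V100 Tensor Core peak.

```python
# Back-of-the-envelope check of the per-node peak figures quoted above.
# Per-device numbers are vendor peaks (assumptions, not taken from this page):
#   Tesla V100 (SXM2): ~125 TFLOPS Tensor Core (mixed precision), ~7.8 TFLOPS FP64
#   Xeon Gold 6148:    20 cores x 2.4 GHz x 32 FP64 ops/cycle (AVX-512, 2 FMA units)

GPUS_PER_NODE = 4
CPUS_PER_NODE = 2

V100_TENSOR_TFLOPS = 125.0
V100_FP64_TFLOPS = 7.8

CPU_CORES = 20
CPU_GHZ = 2.4
FP64_OPS_PER_CYCLE = 32

cpu_fp64_tflops = CPU_CORES * CPU_GHZ * FP64_OPS_PER_CYCLE / 1000.0  # ~1.54 TFLOPS per CPU
cpu_fp32_tflops = 2 * cpu_fp64_tflops                                # ~3.07 TFLOPS per CPU

ai_tflops = GPUS_PER_NODE * V100_TENSOR_TFLOPS + CPUS_PER_NODE * cpu_fp32_tflops
fp64_tflops = GPUS_PER_NODE * V100_FP64_TFLOPS + CPUS_PER_NODE * cpu_fp64_tflops

print(f"Half precision (AI): {ai_tflops:.0f} TFLOPS")    # ~506
print(f"Double precision   : {fp64_tflops:.1f} TFLOPS")  # ~34.3, quoted above as 34.2
```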

□ High-Speed Interconnect

InfiniBand EDR is used to interconnect the computing nodes, the multi-platform nodes, the interactive nodes, and the large-capacity storage system. 34 computing nodes are mounted in each rack, and there are 32 racks, for a total of 1,088 nodes. The interconnect within a rack provides full bisection bandwidth, while the interconnect between the 32 racks is oversubscribed (1/3 of full bandwidth).
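To give a rough sense of the bandwidth implied by this layout, the sketch below multiplies out the node and rack counts. It assumes the standard 100 Gbps rate per InfiniBand EDR port and applies the stated 1/3 ratio directly to the per-rack aggregate; it is an illustration, not a description of the actual switch wiring.

```python
# Rough aggregate-bandwidth figures implied by the topology described above.
# Assumes 100 Gbps per InfiniBand EDR port (standard EDR rate) and applies the
# stated 1/3 inter-rack oversubscription; not an exact model of the fabric.

EDR_GBPS = 100          # per port, one direction
PORTS_PER_NODE = 2      # two EDR rails per computing node
NODES_PER_RACK = 34
RACKS = 32

injection_per_node = PORTS_PER_NODE * EDR_GBPS              # 200 Gbps
injection_per_rack = NODES_PER_RACK * injection_per_node    # 6.8 Tbps aggregate
inter_rack_uplink = injection_per_rack / 3                  # ~2.27 Tbps toward other racks

print(f"Total computing nodes        : {NODES_PER_RACK * RACKS}")  # 1088
print(f"Injection per node           : {injection_per_node} Gbps")
print(f"Aggregate injection per rack : {injection_per_rack / 1000:.1f} Tbps")
print(f"Approx. inter-rack (1/3)     : {inter_rack_uplink / 1000:.2f} Tbps per rack")
```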

□ Large-Capacity Storage

The 22 PB large-capacity storage system for handling AI and Big Data workloads consists of three sets of DDN SFA14KX. The GPFS parallel file system is provided through DDN GRIDScaler.

□ Interconnection Network

Since ABCI is connected to SINET-5 (100 Gbps), users can access ABCI over the Internet. The connection is protected by firewalls (FortiGate 1500D), and two-stage authentication is adopted.