The G series provides various GPU memory configurations, and the P series is suitable for scientific computing.
Provides a comprehensive ecosystem that supports a wide range of GPU applications and deep learning frameworks.
Allows you to obtain graphics workstations, supercomputing applications, deep learning frameworks, and computing clusters with a few clicks.
Featuring on-demand rental and elastic scaling, GACS supports the industry's latest GPU technologies and can seamlessly switch to the latest GPU hardware.
Each GPU contains thousands of computing units, providing outstanding parallel computing capabilities.
Supports large-volume data transmission for neural network training.
Uses GPUDirect over RDMA to provide 100 Gbit/s of bandwidth at a latency of 2 µs.
Allows instances to be provisioned within minutes with only a few clicks.
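The bandwidth and latency figures above allow a rough transfer-time estimate. The sketch below is a minimal illustration using a simple latency-plus-bandwidth cost model, with the 100 Gbit/s and 2 µs figures quoted above; real RDMA performance also depends on message size, NICs, and topology.

```python
# Rough transfer-time model: time = latency + size / bandwidth.
# Uses the 100 Gbit/s bandwidth and 2 us latency quoted above.

BANDWIDTH_BITS_PER_S = 100e9   # 100 Gbit/s
LATENCY_S = 2e-6               # 2 us

def transfer_time_s(size_bytes: float) -> float:
    """Estimated time to move size_bytes over the RDMA link."""
    return LATENCY_S + (size_bytes * 8) / BANDWIDTH_BITS_PER_S

# A 4 KiB message is latency-dominated ...
small = transfer_time_s(4 * 1024)
# ... while a 1 GiB gradient buffer is bandwidth-dominated.
large = transfer_time_s(1024 ** 3)
print(f"4 KiB: {small * 1e6:.2f} us, 1 GiB: {large * 1e3:.1f} ms")
```

The split between latency-dominated small messages and bandwidth-dominated large transfers is why deep learning frameworks batch gradients before exchanging them.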
Scientific computing has strict requirements for double-precision computing, storage bandwidth, and latency.
Offers up to 680,000 IOPS for optimal storage performance.
Delivers double-precision computing performance up to 100x that of CPUs.
Supports various scientific computing software.
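The IOPS figure above can be converted into approximate throughput. The sketch below is an illustrative calculation only; the 4 KiB I/O size is an assumption of this example, not a documented spec, and actual throughput depends on block size and queue depth.

```python
# Converts the 680,000 IOPS figure quoted above into approximate
# sequential throughput, assuming (illustratively) 4 KiB per operation.

IOPS = 680_000
IO_SIZE_BYTES = 4 * 1024  # assumed 4 KiB per I/O operation

throughput_bytes_per_s = IOPS * IO_SIZE_BYTES
print(f"~{throughput_bytes_per_s / 1e9:.2f} GB/s at 4 KiB I/O")
```

Larger block sizes at the same IOPS would raise the throughput proportionally.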
Superb single- and double-precision computing power
Transmission capabilities for large volumes of GPU cluster data
Professional video or graphics rendering
Developed using NVIDIA Tesla P100 GPUs.
A single GPU offers 9.3 TeraFLOPS of single-precision computing and 4.7 TeraFLOPS of double-precision computing.
Uses 16 GB of HBM2 GPU memory with 732 GB/s of memory bandwidth, increasing memory bus width by 8x.
Developed using NVIDIA Tesla V100 GPUs.
A single GPU offers 15.6 TeraFLOPS of single-precision computing, 7.8 TeraFLOPS of double-precision computing, and 112 TeraFLOPS of deep learning (Tensor) performance.
Uses 16 GB of HBM2 GPU memory with 900 GB/s of memory bandwidth, improving deep learning performance by 3x and HPC performance by 1.5x.
Developed using NVIDIA Tesla P4 GPUs.
A single GPU offers 5.5 TeraFLOPS of single-precision computing and 22 TOPS of INT8 performance.
Uses 8 GB of GDDR5 GPU memory with 192 GB/s of bandwidth, reducing latency in stream computing by 1.5x and supporting 35-channel HD video decoding with real-time inference.
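The per-GPU figures above can be compared directly. The sketch below simply tabulates the TeraFLOPS values quoted in this document; they are vendor-stated peaks, not independently measured benchmarks.

```python
# Per-GPU (single-precision, double-precision) TeraFLOPS figures as
# quoted in this document.

SPECS = {
    "Tesla P100": (9.3, 4.7),
    "Tesla V100": (15.6, 7.8),
    "Tesla P4": (5.5, None),  # P4 targets INT8 inference; no FP64 figure given
}

for gpu, (fp32, fp64) in SPECS.items():
    note = f"{fp32 / fp64:.1f}x FP32/FP64" if fp64 else "inference-oriented"
    print(f"{gpu}: {fp32} TFLOPS FP32 ({note})")
```

The roughly 2:1 FP32-to-FP64 ratio on the P100 and V100 is what makes them suitable for the double-precision scientific workloads described above, while the P4's INT8 throughput targets inference.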
Compared with common SSDs, the local NVMe SSDs used by all P-series and certain Pi-series instances improve IOPS and bandwidth several times over. When processing large volumes of data, the ultra-low access latency and ultra-high storage bandwidth of local NVMe SSDs further improve overall storage performance.
P1 and P2v instances provide up to 10 Gbit/s of network bandwidth. In addition, a single BMS instance uses a 100 Gbit/s InfiniBand network, maximizing data transmission for computing clusters.
GPUDirect allows direct data exchange between GPUs. Combined with NVLink, GPUDirect increases data transmission efficiency between GPUs by 5x.
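The two network speeds quoted above make a concrete difference for cluster workloads. The sketch below estimates dataset transfer times at each speed; the 1 TB dataset size is an illustrative assumption of this example.

```python
# Time to move a training dataset at the two network speeds quoted above
# (10 Gbit/s for P1/P2v VPC networking, 100 Gbit/s for BMS InfiniBand).

DATASET_BYTES = 1e12  # assumed 1 TB dataset (illustrative)

def transfer_seconds(bandwidth_gbit_s: float) -> float:
    """Idealized transfer time, ignoring protocol overhead."""
    return DATASET_BYTES * 8 / (bandwidth_gbit_s * 1e9)

vpc = transfer_seconds(10)          # P1/P2v network
infiniband = transfer_seconds(100)  # BMS InfiniBand network
print(f"10 Gbit/s: {vpc:.0f} s, 100 Gbit/s: {infiniband:.0f} s")
```

The tenfold bandwidth difference translates directly into a tenfold difference in idealized transfer time, which is why the InfiniBand-backed BMS instances are positioned for computing clusters.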
- G3 instances, developed using NVIDIA Tesla M60 GPUs and GPU passthrough, support 8/16 GB of GDDR5 GPU memory for heavy-load graphics design and video processing.
HPC and deep learning training
G5 (V100 vGPU): desktop cloud, 3D rendering, and graphics-intensive remote workstations