FPGA-accelerated Cloud Server

FPGA-accelerated Cloud Server (FACS) provides the tools and environment you need to develop and use FPGAs. With FACSs, you can easily develop FPGA accelerators, deploy FPGA-based services, and deliver simple-to-use, cost-effective, agile, and secure FPGA cloud services.

  • FPGA high-performance instances have been launched for a free open beta test.
  • After you submit an application, you will be contacted to arrange the test.

Product Advantages
  • High-Performance Hardware

    Provides 100 Gbit/s PCIe interconnection channels, eight Xilinx VU9P FPGAs on each node, and 300 Gbit/s mesh optical interconnection channels between FPGAs, freeing applications from hardware restrictions.

  • Friendly Development Platform

    Provides optimized standard IP design libraries, supports HDL, OpenCL, and C/C++ development languages, and offers a comprehensive simulation kit and verification environment to help you quickly build acceleration IP (a minimal C sketch follows this list).

  • Economic Development Mode

    Pay-per-use FACSs relieve you of the need for a dedicated FPGA hardware platform. Verification components and reference designs reduce project costs and shorten R&D periods.

  • Various Acceleration IP

    Provides both Huawei standard and high-performance acceleration IP, allows third parties to develop and trade acceleration IP, and allows you to select acceleration IP from the Marketplace, reducing R&D and maintenance costs.
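
As an illustration of the C/C++ flow mentioned under "Friendly Development Platform", the following is a minimal C function of the kind a high-level synthesis flow can turn into acceleration IP. The function and argument names are illustrative and are not taken from the FACS sample designs.

    /* Element-wise addition of two buffers; each call processes n elements.
     * A C/C++ high-level synthesis flow can map a loop like this onto
     * parallel FPGA logic. Names here are illustrative only. */
    void vector_add(const float *a, const float *b, float *result, int n)
    {
        for (int i = 0; i < n; i++)
            result[i] = a[i] + b[i];
    }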

Application Scenarios
  • Video Processing

  • Deep Learning

  • Genomics Research

  • Financial Analysis

Video Processing

Video applications, such as image recognition, image search, video transcoding, real-time rendering, Internet-based live streaming, and AR/VR, require real-time computing performance that common ECSs cannot provide. FACSs offer cost-effective solutions that are ideal for these video scenarios.

Advantages

High Performance

Flexible combination of the FPGAs' highly parallel computing and RAM resources for video and graphics processing

Low Latency

Quick access to external memory in UHD video and Internet-based live streaming scenarios

Related Services

ECS, VPC, OBS, EVS

Deep Learning

Multi-layer neural networks in machine learning require a large amount of computing resources: training involves massive volumes of data, inference requires ultra-low latency, and the algorithms themselves are continuously being optimized. FACSs meet these requirements with their highly parallel computing, programmable hardware, low power consumption, and low latency, dynamically providing the optimal hardware circuit design for each machine learning algorithm.

Advantages

Flexible

Flexible architecture adjustment based on computing models

Cost-effective

High-performance, low-power solution at low cost

Related Services

ECS, VPC, OBS, EVS

Genomics Research

Precision medicine relies on gene sequencing and analysis and on the rapid analysis of massive volumes of biological and medical data. Many other fields, such as pharmaceutical development and molecular breeding, also require the processing of massive data. These fields need hardware acceleration to resolve performance bottlenecks in biological computation, and FACSs meet such requirements with their outstanding programmable-hardware computing performance.

Advantages

High Throughput

Improved massive data processing performance

Low Latency

Custom hardware circuits that accelerate gene analysis algorithms and reduce latency

Related Services

ECS, VPC, OBS, EVS

Financial Analysis

The financial industry has strict requirements for computing capability and real-time performance: services such as pricing-tree-based financial computing, high-frequency trading, fund and securities trading algorithms, financial risk analysis and decision-making, and transaction security assurance all demand ultra-low latency and high throughput. Using programmable hardware acceleration, FACSs offer an optimal hardware acceleration solution for these scenarios; in certain cases, FACS performance is thousands of times higher than that of standalone software.

Advantages

High Performance

Improved computing performance and analysis accuracy

Low Latency

Custom hardware circuits for ultra-low latency

Related Services

ECS, VPC, OBS, EVS

Functions

  • FPGA Development Kit

    Simple-to-use hardware development kit (HDK)

  • Software Development Kit

    Cost-effective software development kit (SDK)

  • FPGA Hardware Configuration

    Eight Xilinx VU9P FPGAs on each node, 300 Gbit/s mesh optical interconnection channels between FPGAs

  • Hardware Acceleration Resource Pool

    Pay-per-use resource allocation for hardware acceleration

FPGA Development Kit

  • Hardware Development Kit
    The HDK includes accelerator samples, a coding environment, simulation platforms, compilation and encryption tools, and debugging code. You can quickly develop and verify your FPGA hardware accelerators based on the samples and the operation guide.

Software Development Kit

  • Software Development Kit
    The SDK includes application samples, hardware abstraction interfaces, accelerator abstraction interfaces, an accelerator driver, a runtime, and a version management tool. With the accelerator abstraction interfaces, you can invoke FPGA accelerators as simply as calling a software library, making it easy to develop high-performance applications. A minimal sketch of this calling pattern follows below.
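
The following is a minimal C sketch of the calling pattern the accelerator abstraction interfaces aim for: the application makes an ordinary function call, and the acceleration details stay behind it. accel_vector_scale() is a hypothetical stand-in implemented in software here, not an actual FACS SDK interface; with the real SDK, an equivalent call would dispatch the work to an FPGA accelerator.

    #include <stdio.h>
    #include <stddef.h>

    /* Hypothetical accelerator entry point; the name and signature are
     * illustrative only and are not part of the FACS SDK. */
    static int accel_vector_scale(const float *in, float *out, size_t n, float k)
    {
        /* Software stand-in: a real accelerator call would hand these
         * buffers to the FPGA instead of looping on the CPU. */
        for (size_t i = 0; i < n; i++)
            out[i] = in[i] * k;
        return 0;                        /* 0 indicates success */
    }

    int main(void)
    {
        float in[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        float out[8];

        /* The application sees a plain library-style function call. */
        if (accel_vector_scale(in, out, 8, 2.0f) == 0)
            printf("out[3] = %.1f\n", out[3]);   /* prints 8.0 */
        return 0;
    }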

FPGA Hardware Configuration

  • Superior Performance
    Each FACS provides up to eight FPGAs, and each FPGA contains about 2.5 million logic units. An FACS provides 16 PCIe 3.0 interfaces with a throughput of up to 100 Gbit/s, and the mesh optical network between FPGAs reaches up to 300 Gbit/s. Each FPGA provides 64 GB of DDR4 memory with a maximum interface frequency of 2,133 MHz.
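
    Aggregating these figures, a fully configured node totals roughly 8 × 2.5 million = 20 million logic units and 8 × 64 GB = 512 GB of DDR4 memory across its FPGAs.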

Hardware Acceleration Resource Pool

  • Pay-per-use Resource Allocation
    FACS hardware acceleration resources are pooled and allocated among nodes. FPGA virtualization, isolation, and resource distribution allow resource sharing within one node without affecting your services.

Recommended Configurations

FP1 (FPGA High Performance)

Basic hardware acceleration instance

Configuration

  • vCPUs: 8
  • Memory: 116 GB
  • FPGAs: 1
  • NVMe: 800 GB
  • InterLink: N/A

Scenarios

  • Video processing
  • Deep learning
  • Genomics research
  • Financial analysis

FP1 (FPGA High Performance)

Enhanced hardware acceleration instance (coming soon)

Configuration

  • vCPUs: 32
  • Memory: 464 GB
  • FPGAs: 4
  • NVMe: 4 x 800 GB
  • InterLink: 300 Gbit/s mesh

Scenarios

  • Video processing
  • Deep learning
  • Genomics research
  • Financial analysis

FP1 (FPGA High Performance)

Superior hardware acceleration instance (coming soon)

Configuration

  • vCPUs: 64
  • Memory: 928 GB
  • FPGAs: 8
  • NVMe: 8 x 800 GB
  • InterLink: 300 Gbit/s mesh

Scenarios

  • Video processing
  • Deep learning
  • Genomics research
  • Financial analysis
