Ease of Use
Creation of container clusters in just a few clicks; one-stop deployment and O&M of containerized applications; out-of-the-box support for Kubernetes and Docker; services can easily be connected, secured, controlled, and observed through Istio.
High Availability and Security
High availability (HA) through a three-master setup on the cluster control plane and cross-AZ deployment of nodes and applications in a cluster; high-security, private clusters with role-based access control (RBAC).
High Performance
Support for high-performance cloud servers, including VMs, bare-metal servers, and GPU-accelerated servers; a three- to five-fold increase in AI computing performance thanks to the bare-metal NUMA architecture and high-speed InfiniBand NICs.
Open and Compatible
Huawei is a founder and premium member of CNCF, holds one of the nine CNCF TOC seats, was among the first CNCF-certified Kubernetes service providers, and is a top contributor to the Kubernetes and Docker communities. CCE is fully compatible with native Kubernetes and Docker versions.
Auto Cluster Scaling
Computing resources can be adjusted based on service requirements and preset scaling policies. The number of cloud servers or containers increases or decreases as service traffic changes, ensuring service stability.
Offers multiple scaling policies and scales containers in seconds when specified conditions are met.
Automatically detects the statuses of instances in auto-scaling groups and replaces unhealthy instances with new ones.
Charges you only for the cloud servers that you use.
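Because CCE is compatible with native Kubernetes APIs, container-level auto scaling can be expressed with a standard HorizontalPodAutoscaler in addition to CCE's own scaling policies. A minimal sketch using the autoscaling/v2 API available in recent Kubernetes versions (the Deployment name "web" and the 70% CPU target are illustrative):

```yaml
# Illustrative HPA: keeps 2-10 replicas of the "web" Deployment,
# scaling out when average CPU utilization exceeds 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web          # hypothetical workload name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

On older cluster versions, the equivalent autoscaling/v2beta2 API can be used instead.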
Traffic Management Through Istio
Istio's out-of-the-box traffic management feature allows you to complete staged rollouts, observe your traffic, and control the flow of traffic without needing to change code.
Istio can be installed in just a few clicks and works seamlessly with CCE.
HTTP/TCP connection policies and security policies can be enforced without requiring you to rewrite code.
Graphical representations of application topology offer immediate insight into traffic health and service performance.
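As a sketch of the staged rollouts mentioned above, a standard Istio VirtualService can split traffic by weight between two versions of a service without any code changes. The service name "reviews" and the subsets "v1"/"v2" are illustrative and assume a matching DestinationRule that defines those subsets:

```yaml
# Illustrative canary rollout: 90% of traffic to subset v1, 10% to v2.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews           # hypothetical in-mesh service
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```

Shifting the weights gradually toward v2 completes the staged rollout once traffic health looks good.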
One-Stop Container Delivery
CCE can work with existing continuous integration and continuous delivery (CI/CD) pipelines to automatically complete code compilation, image building, dark launching, and containerization based on source code.
Reduces scripting workload by more than 80% through streamlined process interaction.
Provides various APIs to integrate with existing CI/CD systems, greatly facilitating customization.
Schedules tasks flexibly with a fully containerized architecture.
Hybrid Cloud
Applications and data can be seamlessly migrated between your on-premises network and the cloud, facilitating resource scheduling and disaster recovery (DR). This is made possible by environment-independent containers, network connectivity between private and public clouds, and the ability to manage containers on CCE and your private cloud in a unified way.
The HUAWEI CLOUD resource pool supports rapid capacity expansion during peak hours, for only a fraction of the cost involved in building private clouds from scratch.
The service system is deployed both on premises and on the cloud. The on-premises system provides services while the cloud ensures disaster recovery.
The hybrid cloud combines the technological advantages of the on-premises system and the cloud. The on-premises system can seamlessly work with other HUAWEI CLOUD services.
AI Containers
Running containers on high-performance GPU-accelerated cloud servers significantly improves AI computing performance, and GPU sharing among containers greatly reduces AI computing costs.
The bare-metal NUMA architecture and high-speed InfiniBand NICs drive a three- to five-fold improvement in AI computing performance.
GPUs are shared and scheduled among multiple containers, greatly reducing computing costs.
AI containers are compatible with all mainstream GPU models and have been used at scale in HUAWEI CLOUD's Enterprise Intelligence (EI) products.
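On a Kubernetes cluster with a GPU device plugin installed, a container requests GPUs through the standard extended resource nvidia.com/gpu. A minimal illustrative Pod spec (the Pod and image names are hypothetical):

```yaml
# Illustrative Pod requesting one GPU via the NVIDIA device plugin resource.
apiVersion: v1
kind: Pod
metadata:
  name: train-job
spec:
  containers:
    - name: trainer
      image: example/train:latest   # hypothetical training image
      resources:
        limits:
          nvidia.com/gpu: 1         # whole-GPU request; sharing granularity
                                    # depends on the platform's GPU-sharing support
```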
Open beta release
Hitless rolling update
Container health checking
High application availability
Support for bare-metal containers
Support for SFS
High cluster availability
Auto cluster scaling
Containers on high-performance GPU-accelerated cloud servers
Support for P100 GPU-accelerated cloud servers
Istio service mesh in beta testing
Auto scaling based on application load policies
Kubernetes RBAC authorization
Support for SFS Turbo in containerized applications on Kubernetes v1.11
Support for Kubernetes v1.13
Creation of Kubernetes clusters in just a few clicks; auto deployment, auto O&M, and lifecycle management for containerized applications.
Three-master HA setup on the cluster control plane; cross-AZ deployment of nodes and applications in a cluster.
A variety of scheduling policies (affinity and anti-affinity between workloads, between workloads and AZs, and between workloads and nodes) to balance performance with reliability.
Elastic scaling of clusters and workloads; combined use of scaling policies.
Compatibility with Kubernetes/Docker-native APIs and commands; updates from Kubernetes and Docker communities incorporated every few months.
Delivery of containerized applications in just a few clicks without the need to compile Dockerfiles; process templates for improved delivery efficiency.
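As an example of the scheduling policies listed above, anti-affinity between workloads and AZs maps to standard Kubernetes pod anti-affinity with a zone topology key. A sketch with illustrative names:

```yaml
# Illustrative Deployment: replicas of "web" are forced into different AZs
# via pod anti-affinity on the zone topology key.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: web
              # On older clusters the key failure-domain.beta.kubernetes.io/zone
              # may apply instead.
              topologyKey: topology.kubernetes.io/zone
      containers:
        - name: web
          image: example/web:latest   # hypothetical image
```

Preferred (soft) anti-affinity can be used instead of the required (hard) rule when reliability should not block scheduling.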