Ease of Use
One-click cluster creation; one-stop deployment and O&M of containerized applications; easy to connect, secure, control, and observe services through Istio.
High availability (HA) on the cluster control plane and cross-AZ deployment of nodes and applications in a cluster; private clusters with role-based access control.
At least a three- to five-fold increase in AI computing performance thanks to bare-metal NUMA and high-speed InfiniBand NICs; support for bare-metal and GPU servers.
A founder and premium member of CNCF, a K8S TOC representative, one of the first CNCF-certified K8S service providers, and a top contributor to K8S/Docker.
Auto Cluster Scaling
Computing resources can be adjusted based on service requirements and preset policies. The number of cloud servers or containers increases or decreases as service traffic changes, ensuring service stability.
Multiple scaling policies are supported, and containers are scaled out within seconds once the specified conditions are met.
The status of each pod in an auto-scaling group is detected automatically, and unhealthy pods are replaced with new ones.
You are charged only for the cloud servers you actually use.
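Pod-level scaling like this maps onto the standard Kubernetes HorizontalPodAutoscaler model. As a minimal sketch (the workload name, replica bounds, and CPU threshold below are hypothetical, not CCE defaults), a policy that scales a Deployment on CPU load might look like:

```yaml
# Hypothetical example: scale the "web" Deployment between 2 and 10
# replicas, targeting 70% average CPU utilization across its pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Node-level scaling (adding or removing cloud servers) is handled separately by the cluster's auto-scaling policies.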
Traffic Management Through Istio
Istio's out-of-the-box traffic management feature allows you to complete staged rollouts, observe your traffic, and control the flow of traffic without needing to change code.
Istio can be installed in just a few clicks and works seamlessly with CCE.
HTTP/TCP connection policies and security policies can be enforced without requiring you to rewrite code.
Graphical representations of application topology offer immediate insight into traffic health and service performance.
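A staged rollout in Istio is expressed declaratively rather than in application code. As an illustration (service name, subsets, and weights are hypothetical), a VirtualService that shifts 10% of traffic to a new version could look like:

```yaml
# Hypothetical staged rollout: route 90% of traffic to v1 and 10% to v2
# of the "reviews" service, with no application code changes.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```

The v1 and v2 subsets would be defined in a companion DestinationRule; adjusting the weights completes or rolls back the release.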
One-Stop Container Delivery
CCE works with existing continuous integration and continuous delivery (CI/CD) pipelines to automatically compile code, build images, perform dark launches, and containerize applications directly from source code.
Reduce scripting workload by more than 80% through streamlined process interaction.
Provide various APIs to integrate with existing CI/CD systems, greatly facilitating customization.
Schedule tasks flexibly with a fully containerized architecture.
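The zero-downtime release step of such a pipeline relies on Kubernetes rolling updates. A sketch of the relevant Deployment settings (the names, image, and replica count are hypothetical):

```yaml
# Hypothetical Deployment: roll out a new image one pod at a time,
# never dropping below the desired replica count (zero downtime).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra pod during the update
      maxUnavailable: 0  # keep all serving pods up throughout
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: example/app:v2
```

A CI/CD system can then trigger the update simply by changing the image tag in this manifest.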
Hybrid Cloud
Applications and data can be migrated seamlessly between your on-premises environment and the cloud, simplifying resource scheduling and disaster recovery (DR). This is made possible by environment-independent containers, network connectivity between private and public clouds, and unified management of containers on CCE and your private cloud.
The HUAWEI CLOUD resource pool supports rapid capacity expansion during peak hours, for only a fraction of the cost involved in building private clouds from scratch.
The service system is deployed both on premises and on the cloud. The on-premises system provides services while the cloud ensures disaster recovery.
The hybrid cloud combines the technological advantages of the on-premises system and the cloud. The on-premises system can seamlessly work with other HUAWEI CLOUD services.
AI Containers
Running containers on high-performance GPU-accelerated cloud servers significantly improves AI computing performance, and GPU sharing among containers greatly reduces AI computing costs.
The bare-metal NUMA architecture and high-speed InfiniBand NICs drive a three- to five-fold improvement in AI computing performance.
GPUs are shared and scheduled among multiple containers, greatly reducing computing costs.
AI containers are compatible with all mainstream GPU models and have been used at scale in HUAWEI CLOUD's Enterprise Intelligence (EI) products.
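In Kubernetes, GPUs are scheduled as extended resources. A minimal sketch of a pod requesting one GPU (the pod name and image are hypothetical; whole-GPU requests use the standard `nvidia.com/gpu` resource exposed by the NVIDIA device plugin, while finer-grained sharing granularity depends on the platform):

```yaml
# Hypothetical pod requesting a single GPU. Requires the NVIDIA device
# plugin to be running on the cluster's GPU nodes.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-train
spec:
  containers:
    - name: trainer
      image: example/train:latest
      resources:
        limits:
          nvidia.com/gpu: 1  # schedules the pod onto a node with a free GPU
```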
Release History
Open beta release
Hitless rolling update
Container health checking
High application availability
Support for bare metal containers
Support for SFS
High cluster availability
Auto cluster scaling
Containers on high-performance GPU-accelerated cloud servers
Support for P100 GPU-accelerated cloud servers
Istio service mesh in beta testing
Auto scaling based on application load policies
Kubernetes RBAC authorization
Support for SFS Turbo in Kubernetes v1.11 containerized applications
Add-on for running kubectl commands on a web interface
A hybrid cluster of both VM and PM nodes
Support for Kubernetes v1.13
Commercial release of Application Service Mesh
A cluster of ARM-based cloud servers
Key Features
Creation of Kubernetes clusters in just a few clicks; auto deployment, auto O&M, and lifecycle management for containerized applications.
Three-master HA setup on the cluster control plane; cross-AZ deployment of nodes and applications in a cluster.
A variety of scheduling policies (affinity and anti-affinity between workloads, between workloads and AZs, and between workloads and nodes) to balance performance with reliability.
Elastic scaling of clusters and workloads; combined use of scaling policies.
Compatibility with Kubernetes/Docker-native APIs and commands; updates from Kubernetes and Docker communities incorporated every few months.
Delivery of containerized applications in just a few clicks without the need to compile Dockerfiles; process templates for improved delivery efficiency.
Integration with IaaS resources, such as computing (ECS, BMS), network (VPC, EIP), and storage (EVS, OBS, SFS) resources.
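The affinity and anti-affinity scheduling policies listed above correspond to standard Kubernetes pod affinity rules. As an illustration (the `web` and `cache` labels are hypothetical), a pod-template fragment that spreads replicas across nodes while preferring co-location with a cache service might look like:

```yaml
# Hypothetical pod-spec fragment: spread "web" replicas across nodes
# (anti-affinity) and prefer nodes already running "cache" pods (affinity).
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: web
        topologyKey: kubernetes.io/hostname
  podAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: cache
          topologyKey: kubernetes.io/hostname
```

Using a zone-level `topologyKey` instead of the hostname key gives the cross-AZ spreading described earlier.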