Cloud Native 2.0: Supercharging Digital Transformation for Enterprises

The concept of cloud native originated from the development and deployment practices of companies such as Netflix on public clouds, starting around 2009. In 2015, the Cloud Native Computing Foundation (CNCF) was founded, marking the shift of cloud native from a technical concept to an open source implementation. CNCF's definition of cloud native has been widely accepted: cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.


Years of development have now ushered in Cloud Native 2.0, which brings the following changes:


1. The cloud native technology stack has become more comprehensive. As the most active open source community for cloud native technology innovation, CNCF has attracted more than 150,000 contributors from 187 countries, and the cloud native developer population has grown to 6.8 million. The number of CNCF projects has increased from 1 in 2015 to 123 today, including Kubernetes, Envoy, Prometheus, Helm, Knative, KubeEdge, Operator, and Volcano.

2. An increasing number of industries are adopting cloud native to accelerate their digital transformation. Gartner predicts that 95% of digital services will run on cloud native platforms by 2025. Cloud native technologies are already widely used in industries such as the Internet, finance, automotive, energy, healthcare, and education.

Deployment at scale is already happening, and scale breeds innovation: real-world applications and developer engagement continue to polish cloud native technologies.


Two projects showcase how cloud native meets the increasingly complex demands of digital transformation.


The first project is Volcano, a cloud native batch computing platform for managing high-performance container workloads. Kubernetes was designed as a general-purpose orchestration framework, with only a basic Job abstraction for batch workloads. As applications grow in complexity, users want to run high-performance workloads on Kubernetes, such as Spark and TensorFlow jobs, which require advanced features such as fair scheduling, queue and job management, and data management. Volcano, a Kubernetes-based batch system, supports multiple domain frameworks such as TensorFlow, Spark, and MindSpore, helping users build a unified container platform on Kubernetes. It provides job scheduling, job lifecycle management, multi-cluster scheduling, a command-line interface, data management, job views, and hardware acceleration.
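
Below is a minimal sketch of how such a batch job might be submitted, using the official Kubernetes Python client. It assumes a cluster with Volcano already installed, a "default" queue, and a hypothetical training image; the names and sizes are illustrative, not a definitive Volcano configuration.

```python
# Sketch: submit a Volcano Job (a custom resource) with the Kubernetes Python client.
# Assumes Volcano is installed in the cluster and a "default" queue exists.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig

volcano_job = {
    "apiVersion": "batch.volcano.sh/v1alpha1",
    "kind": "Job",
    "metadata": {"name": "tf-training-demo"},
    "spec": {
        "schedulerName": "volcano",   # hand the job to the Volcano scheduler
        "queue": "default",           # queue used for fair scheduling
        "minAvailable": 3,            # gang scheduling: start only when 3 pods can run
        "tasks": [
            {
                "name": "worker",
                "replicas": 3,
                "template": {
                    "spec": {
                        "restartPolicy": "OnFailure",
                        "containers": [
                            {
                                "name": "worker",
                                "image": "example.com/tf-train:latest",  # hypothetical image
                            }
                        ],
                    }
                },
            }
        ],
    },
}

# Volcano Jobs are custom resources, so they are created through CustomObjectsApi.
client.CustomObjectsApi().create_namespaced_custom_object(
    group="batch.volcano.sh",
    version="v1alpha1",
    namespace="default",
    plural="jobs",
    body=volcano_job,
)
```

The key difference from a plain Kubernetes Job is the queue and the minAvailable field, which lets Volcano hold the whole task group back until all of its pods can be scheduled together.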


The second project is KubeEdge, for intelligent edge computing. To reduce response latency, cloud processing pressure, bandwidth costs, and data risks, some enterprise services need to be deployed at the edge or processed with edge-cloud collaboration, which requires a more efficient, lightweight, and collaborative edge platform. KubeEdge decouples and simplifies Kubernetes modules: it can run with as little as 70 MB of memory, and it extends native containerized application orchestration and management to edge devices through capabilities such as cloud-edge communication and offline autonomy at the edge. KubeEdge provides core infrastructure support for networking and applications, and keeps the metadata of applications deployed on the cloud and at the edge synchronized.
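
Because KubeEdge registers edge devices as ordinary Kubernetes nodes, deploying a workload to the edge can look like a standard Deployment restricted to edge nodes. The following sketch uses the Kubernetes Python client; the edge-node label and image name are assumptions for illustration, not fixed KubeEdge requirements.

```python
# Sketch: deploy a containerized app onto KubeEdge-managed edge nodes by
# targeting them with a node selector. Label and image are illustrative.
from kubernetes import client, config

config.load_kube_config()

container = client.V1Container(
    name="edge-app",
    image="example.com/edge-app:latest",  # hypothetical image
)

pod_template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "edge-app"}),
    spec=client.V1PodSpec(
        containers=[container],
        # Schedule only onto nodes labeled as edge nodes (label is an assumption).
        node_selector={"node-role.kubernetes.io/edge": ""},
    ),
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="edge-app"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "edge-app"}),
        template=pod_template,
    ),
)

# In a KubeEdge cluster, the cloud side syncs this application metadata to the
# edge nodes, which keep the pods running even if the cloud connection drops.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```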


In addition to open source projects, cloud vendors keep innovating their cloud native products to deliver higher performance. For example, Huawei Cloud accelerates computing, networking, and scheduling in Cloud Container Engine (CCE). To accelerate computing, container components are offloaded to hardware through software-hardware synergy. To accelerate networking, its container passthrough networking solution flattens two network layers into one, halving the end-to-end connection time. To accelerate scheduling, the system is made aware of the characteristics of AI, big data, and web services, as well as application models and network topologies, enabling hybrid service deployment, intelligent scheduling, and automatic optimization of scheduling policies.


Two real-world cases illustrate how cloud native is put into practice.


The first is the adoption of cloud native in satellites. Initiated by Spacety and Shenzhen Institute of BUPT, the Tiansuan Constellation project aims to build an intelligent, open source platform for in-orbit space computing and provide technical support for technologies such as 6G networks and satellite Internet. The first phase consists of six satellites (2 main satellites, 2 auxiliary satellites, and 2 edge satellites). Conventional satellite communications systems are often limited by computing capacity and bandwidth. To run more services, satellite IT systems must improve their resource utilization. Also, to save transmission bandwidth, satellites need to cleanse collected data before sending it to the ground. For example, a flood monitoring satellite will discard a photo if more than 50% of the scene is obscured by clouds. This requires onboard AI processing in addition to better resource utilization, and cloud native can satisfy both demands. Test data shows that collaborative inference between satellites and ground stations improves calculation precision by more than 50% and reduces the amount of data sent back to the ground by 90%.
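
As a toy illustration of the edge-side data cleansing described above, the sketch below keeps a frame for downlink only if no more than half of it is obscured by clouds. The brightness-based cloud-cover estimator is a stand-in assumption; a real payload would run an onboard AI model served by the edge platform.

```python
# Toy sketch: filter satellite frames before downlink based on estimated cloud cover.
import numpy as np

CLOUD_COVER_THRESHOLD = 0.5  # discard frames with more than 50% cloud cover


def estimate_cloud_cover(image: np.ndarray) -> float:
    """Placeholder estimator: treat very bright pixels as cloud."""
    brightness = image.mean(axis=-1) / 255.0   # per-pixel brightness in [0, 1]
    return float((brightness > 0.8).mean())    # fraction of "cloudy" pixels


def select_for_downlink(frames: list[np.ndarray]) -> list[np.ndarray]:
    """Return only the frames worth sending back to the ground station."""
    return [f for f in frames if estimate_cloud_cover(f) <= CLOUD_COVER_THRESHOLD]
```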


The other case is the Jiangsu Provincial Department of Finance (Jiangsu Finance for short). China's financial systems are going digital. The economic aggregate of Jiangsu province exceeds CNY 10 trillion, ranking second in China, and its GDP per capita has ranked top nationwide for 12 consecutive years. Jiangsu Finance serves the province's 80 million residents, providing services that cover economic and social development, healthcare, rural revitalization, environmental protection, education, science, culture, and other undertakings. Jiangsu Finance fully upgraded its IT systems with cloud native technologies. The first step was to containerize the cloud infrastructure and improve resource utilization. Jiangsu Finance sorted out 185 service processes, covering 14 service domains, 63 service application groups, and 591 application functions. Then, all applications were moved to a microservice architecture, fully upgrading the application architecture, and the R&D process was reconstructed with DevOps. A unified technical platform with consistent technical standards and development processes was built for multiple ISVs, improving development and collaboration efficiency as well as delivery quality.


Riding the wave of cloud native, every enterprise can achieve efficient digital transformation. At the end of 2020, Huawei Cloud proposed the Cloud Native 2.0 concept. Cloud Native 2.0 features a new technical architecture: distributed cloud, application-driven infrastructure, hybrid deployment, unified scheduling, decoupled compute and storage, automated data governance, trusted DevOps, serverless, heterogeneous integration based on a soft bus, multi-modal iterative industry AI, and all-round security. This new architecture and these technologies bring seven benefits to enterprise digital transformation: efficient resources, agile applications, Internet of Things, ultimate experience, service intelligence, security and trustworthiness, and industry enablement. Following its Everything as a Service strategy, Huawei Cloud now offers enterprises Cloud Native 2.0 products to supercharge their cloud native transformation.