From May 2nd to the 5th, KubeCon + CloudNativeCon Europe, one of the most influential technical summits in the container field, was held in Copenhagen, Denmark. Experts from IT giants such as Huawei, AWS, Microsoft, Google, IBM, Red Hat, and VMware shared their technologies in sessions covering Kubernetes, microservices, containers, container storage, DevOps, serverless computing, and GPU acceleration. Beyond the technical discussions, the conference also allotted plenty of time to customer case studies: CERN, Spotify, Wikipedia, Booking.com, YouTube, NVIDIA, Adidas, the Financial Times, eBay, and the Norwegian Tax Administration all shared their experience running containers and related technologies in production environments.
Commercial Use of Kubernetes Maturing, Multi-Cloud Becomes a Compelling Trend
Kubernetes, a core project of the Cloud Native Computing Foundation (CNCF), is the foundation's first project to see wide commercial use. Production practices around the open-source platform for managing containerized workloads became a focal point of the conference.
At the keynote on the first day, CERN's production experience with Kubernetes amazed attendees. As the largest particle physics research center in the world, CERN has enormous computing requirements: the collider generates data on the order of petabytes every second. Even after two levels of filtering, in hardware and in software, a few gigabytes per second still remain to be processed. The higher-value data is then further processed and analyzed.
To better process and analyze this data, CERN uses 210 Kubernetes clusters to schedule and manage an infrastructure comprising 320,000 cores and more than 10,000 hypervisors. These clusters are deployed at varying scales, with anywhere from dozens to thousands of nodes in each cluster.
To manage workloads across these clusters uniformly, CERN uses the Kubernetes Federation (cluster federation) project as a single entry point to the platform. CERN augments its on-premises resources by creating Kubernetes clusters on public cloud platforms such as Open Telekom Cloud (a Huawei partner), Google Cloud, Azure, and AWS to meet its soaring compute and storage requirements.
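As a rough illustration of this pattern (the federation, context, and cluster names below are hypothetical placeholders, and exact flags varied by Federation v1 release), the `kubefed` CLI let an operator register clusters from several clouds under one federated control plane:

```shell
# Stand up the federation control plane on an on-premises host cluster.
# All names and contexts here are illustrative placeholders.
kubefed init myfed \
    --host-cluster-context=onprem-context \
    --dns-provider=coredns \
    --dns-zone-name=example.com.

# Join clusters running on public clouds so workloads can be
# scheduled and managed through the single federated API.
kubefed join otc-cluster --host-cluster-context=onprem-context
kubefed join gke-cluster --host-cluster-context=onprem-context

# Resources created against the federation API are then propagated
# to every member cluster.
kubectl --context=myfed create -f my-deployment.yaml
```

Once clusters are joined, a deployment submitted to the federated API surface is replicated to members, which is what makes the "single platform entry point" model above possible.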
Creating Kubernetes clusters and deploying workloads on two or more cloud platforms has become common practice for many Kubernetes adopters. Users can deploy services easily while enjoying the advantages that each cloud platform has to offer.
Container services running Kubernetes have become a standard offering among cloud vendors. At the time of writing, 55 providers, including Google, Azure, and AWS, and China's HUAWEI CLOUD, Alibaba Cloud, and Tencent Cloud, have released certified Kubernetes services. An unmistakable trend among these organizations is the use of multiple clouds to handle workloads. A Kubernetes-based cloud native platform allows users to build and operate vendor-agnostic applications, enabling them to easily migrate workloads across clouds and clusters.
Easing Anxieties over Security in Cloud Native
The CNCF has grown rapidly over the last couple of years. The community builds on Kubernetes and containers at the core of its platform, supplementing them with capabilities in observability, maintainability, and microservice discovery to build highly agile, massively scalable applications.
As containers and microservices are applied more extensively in production environments, a growing share of users are concerned about the security of these emerging technologies. This is now the biggest concern for the CNCF moving forward.
Google's answer is gVisor, a new sandboxed container runtime that is lightweight while providing isolation capabilities similar to those of VMs. Its positioning is similar to that of the Kata Containers project announced last year at KubeCon in Austin.
Kata Containers builds on Intel Clear Containers and Hyper's runV, and is a lightweight virtualization-based container technology. It isolates containers from one another by adding a layer of tailored, optimized virtualized kernel outside each container.
Slightly different from Kata Containers, gVisor provides strong isolation boundaries by intercepting application system calls in user space and acting as the guest kernel. Another difference is that gVisor does not need fixed resources and can adapt to changing resource conditions at any time, much like an ordinary Linux process. gVisor can be thought of as an extremely paravirtualized operating system: it offers better flexibility in resource utilization at a lower cost than running a full VM for every environment. The tradeoff for this flexibility, however, is higher system call overhead and slightly reduced application compatibility.
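For a concrete picture, gVisor ships as an OCI runtime binary called `runsc` that can be registered with Docker. The fragment below follows the pattern in the gVisor documentation; treat the install path as an assumption that depends on where `runsc` was installed:

```json
{
  "runtimes": {
    "runsc": {
      "path": "/usr/local/bin/runsc"
    }
  }
}
```

After adding this to `/etc/docker/daemon.json` and restarting the Docker daemon, a container can be sandboxed with `docker run --runtime=runsc hello-world`. To the application it looks like an ordinary container, but its system calls are served by gVisor's user-space kernel rather than the host kernel.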
The gVisor project provides a new approach to container security and enriches the secure-container ecosystem. Although commercial use is still a long way off, the project will undoubtedly gain momentum in the mainstream market.
Release of Kubeflow v0.1 Significantly Lowers the Barrier to Deploying ML Platforms on Kubernetes
Machine learning has developed rapidly over the last few years. A key question is how to leverage the advantages of Kubernetes as a deployment platform to provide a convenient, scalable machine learning framework. While many are scrambling to find the answer, the Kubeflow project aims to provide a simple open source solution.
Since the project was announced at KubeCon + CloudNativeCon North America last year, Kubeflow has attracted more than 70 contributors from more than 20 organizations, including Google, Microsoft, Red Hat, Huawei, and Alibaba Cloud. In just over five months, the Kubeflow project has accumulated over 700 commits and over 3100 GitHub stars, placing it among the top 2% of GitHub projects. The release of version 0.1 provides a set of simplified software packages for users to develop, train, and deploy their machine learning frameworks.
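At v0.1, Kubeflow was deployed through the ksonnet CLI. The following is a minimal sketch of that flow; the registry path and component names follow the v0.1 getting-started instructions, but exact versions and flags may differ by release:

```shell
# Create a ksonnet application and point it at the Kubeflow registry
ks init my-kubeflow && cd my-kubeflow
ks registry add kubeflow github.com/kubeflow/kubeflow/tree/v0.1.2/kubeflow

# Install and generate the core components (JupyterHub, TF job operator, etc.)
ks pkg install kubeflow/core@v0.1.2
ks generate kubeflow-core kubeflow-core

# Deploy the components to the cluster's default environment
ks apply default -c kubeflow-core
```

The point of the packaging is visible here: a few commands stand up notebooks, training, and serving on any conformant Kubernetes cluster.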
Kubeflow v0.2 is scheduled for release within the next couple of months. The release will feature the following:
● Simplified container configuration settings
● Optimized GPU integration
● Support for more machine learning frameworks, such as Spark ML, XGBoost, and scikit-learn
● Support for automatic scaling of TF Serving
● Support for data transformations such as tf.transform
After Kubeflow v1.0 is released at the end of this year, the project will seek a formal governance home, hosted in the CNCF or another community.
CloudEvents v0.1 Released in the Serverless Domain
As cloud technologies have developed, applications have become more distributed, deepening the need for better integration. Events are published more frequently, event-driven design patterns are used more widely, and events are passed between environments in ever greater numbers; this is the need that serverless computing arose to meet. Each cloud platform began providing function services (event-driven computing services), and the number of supported event types kept growing. However, different platforms described events differently, forcing developers to learn platform-specific terms and semantics. Passing events between platforms was hindered because the logic and infrastructure lacked consistent information on which to base intelligent processing and forwarding decisions.
To solve this interoperability problem, after completing a white paper on the topic at the end of last year, the CNCF Serverless Working Group started work on a standard specification for serverless events, named CloudEvents. Open source players, including Huawei, Google, Microsoft, IBM, and Red Hat, actively contribute to the project.
The scope of the CloudEvents 0.1 release is simple: provide a set of consistent metadata that can be included in event data, so that events are easier for publishers, middleware, subscribers, and applications to process. In short, it is a standard event envelope.
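A minimal sketch of such an envelope in Python may make this concrete. The attribute names (`cloudEventsVersion`, `eventType`, `source`, `eventID`, `contentType`, `data`) follow the 0.1 specification; the event type, source URI, and payload below are made-up examples:

```python
import json

def make_cloud_event(event_type, source, event_id, data=None):
    """Wrap arbitrary event data in a CloudEvents v0.1 metadata envelope."""
    event = {
        "cloudEventsVersion": "0.1",  # spec version this envelope follows
        "eventType": event_type,      # typically a reverse-DNS type name
        "source": source,             # URI identifying the event producer
        "eventID": event_id,          # unique within the scope of the source
    }
    if data is not None:
        event["contentType"] = "application/json"
        event["data"] = data          # the domain-specific payload
    return event

# Hypothetical example: a storage service announcing a new object
event = make_cloud_event(
    "com.example.object.created",
    "/example/storage",
    "A234-1234-1234",
    {"objectName": "report.pdf"},
)
print(json.dumps(event, indent=2))
```

Because the required metadata is uniform, any router or middleware can inspect `eventType` and `source` to make forwarding decisions without understanding the payload, which is exactly the interoperability the specification targets.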
CloudEvents' common metadata makes events easier to route, fan out, track, and replay, and keeps the data more portable, transferable, and easier to transmit across environments. The project is also developing specifications for mapping CloudEvents metadata onto existing protocols. At present, network bandwidth, cost, and latency remain the main challenges, but the simple metadata definition in CloudEvents will yield excellent data portability.
Kubernetes Acceleration and the Beginnings of a Cloud Native Programming Framework
Kubernetes was intentionally designed with a loosely coupled architecture. In the iterations following the initial release, multiple plug-in frameworks were added and extensibility improved. The introduction of the "operator" concept standardized a large portion of the customization and extension requirements in the platform.
However, the barrier to operator development, testing, and O&M was still high. Developers would often take an existing operator, strip out its original code, and fill in management logic specific to their applications. To do this, a developer needs a deep understanding of the Kubernetes APIs along with considerable experience and technical knowledge, and testing and maintenance require additional work.
The operator development framework aims to capture best practices in a set of standards that reduce the development, testing, and O&M burden of applications on Kubernetes. The framework consists of three parts: the SDK, lifecycle management, and monitoring.
The operator SDK provides tools for developers to build, test, and package operators. The lifecycle management component oversees the installation, updating, and ongoing maintenance of all operators (and their associated services) running across Kubernetes clusters. The monitoring component, slated for release in the next few months, will track basic metrics (such as CPU and memory usage) and allow custom metrics to be added. The operator framework lowers the development and O&M barrier, which is welcome news for enterprises with Kubernetes customization requirements that previously found it difficult to operate and maintain their applications after refactoring them for the cloud.
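As a sketch of the SDK workflow described above (the project, group, and image names are hypothetical, and flags may differ between early SDK versions), scaffolding, building, and deploying an operator looks roughly like this:

```shell
# Scaffold a new operator project with a custom resource definition
operator-sdk new app-operator \
    --api-version=app.example.com/v1alpha1 \
    --kind=App
cd app-operator

# After filling in the reconcile/handler logic, build and push the image
operator-sdk build quay.io/example/app-operator:v0.0.1
docker push quay.io/example/app-operator:v0.0.1

# Deploy the CRD and the operator itself to the cluster
kubectl create -f deploy/crd.yaml
kubectl create -f deploy/operator.yaml
```

The scaffolding is what removes the "strip out an existing operator" step: the generated project already contains the API boilerplate, leaving only the application-specific management logic to write.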
The Big Stage Has Been Set, Actors Are Welcome to Join the Performance
In the past, distributed application development relied on fault-tolerant programming frameworks such as Erlang/OTP, each increasingly bound to its own programming community and software stack.
Kubernetes changed the game with its many compelling features and an ecosystem that gathered snowball momentum. Many distributed systems are now adopting Kubernetes as a standard because it serves developers and enterprises alike, and it will eventually extend across the full scope of industries as a truly horizontal technology.
Kubernetes offers an excellent development, monitoring, and O&M experience while accelerating innovation. In existing microservice governance and machine learning applications, Kubernetes has demonstrated its usefulness as a standard base and a powerful booster of portability.
In the next few years, a slew of application definition tools and services are expected to emerge, greatly simplifying the deployment and O&M of applications in the cloud. There is little doubt that if a piece of software cannot run on or plug into Kubernetes, few will be willing to pay for it in the future.
Kubernetes will change from a white box into a black box. Developers won't need to master a lot of Kubernetes-specific knowledge; they will only need a set of standard rules to do distributed-system-style programming. They won't have to compile code, build images, test production configurations, or even run a command as simple as "kubernetes, just run my code". An ordinary code commit can trigger the entire delivery pipeline, from compilation, building, and testing through to production and O&M. Rolling back is also simple, because each commit is atomic.
Many think that Kubernetes is becoming ever more stable, albeit boring. But basic infrastructure should be boring. The applications and customizations built on top of it are what add flavor to the cloud native ecosystem. The big stage has been built, and the show is just beginning. Come and be a part of it.