This solution may affect password changes on the ECS console, so verify it before rectifying the fault. Obtain the cluster's network model and container CIDR.
According to the official NVIDIA announcement, if your CCE cluster has a GPU-enabled node (ECS) and uses the recommended NVIDIA GPU driver (Tesla 396.37), your NVIDIA driver is not affected by this vulnerability.
Figure 1 SFS file system mount address
The file system must belong to the same VPC as the ECS (VM) where the workload is to be deployed.
CCE is a one-stop platform integrating compute (ECS and BMS), networking (VPC, EIP, and ELB), storage (EVS, SFS, and OBS), and many other services. It supports heterogeneous compute architectures such as GPUs, NPUs, and Arm.
An ECS with an EIP bound has been created in the same VPC as the cluster, and the ECS can access the cluster using kubectl. For details about how to access a cluster from an ECS, see Connecting to a Cluster Using kubectl.
If an ECS node has a raw data disk attached (not managed by LVM), detach the disk before resetting the node, because resetting clears the original attachment information. Re-attach the disk to the ECS node after the reset to retain the data.
For example, if a pay-per-use ECS (billed hourly) is deleted at 08:30, you still incur expenditures for the 08:00 to 09:00 hour, but the bill for that hour is not generated until about 10:00.
Large-scale networking: Cloud Native 2.0 networks support a maximum of 2,000 ECS nodes and 100,000 pods.
Install kubectl on an existing ECS and access a cluster using kubectl. For details, see Accessing a Cluster Using kubectl. Run the following command to create a YAML file for the NodePort Service.
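A NodePort Service manifest for this step could look like the following sketch. The Service name (nginx-nodeport), selector label (app: nginx), and port values are illustrative assumptions, not values from the original document:

```yaml
# Hypothetical NodePort Service manifest (name, labels, and ports are examples).
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport        # assumed Service name
spec:
  type: NodePort
  selector:
    app: nginx                # assumed label of the target workload's pods
  ports:
    - port: 80                # port exposed inside the cluster
      targetPort: 80          # container port the traffic is forwarded to
      nodePort: 30080         # optional; must fall in the default 30000-32767 range
```

Saving this as a file (for example, nodeport.yaml) and running kubectl apply -f on it exposes the workload on the chosen port of every node.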
Added support for tainting a spot ECS before it is released so that the node can evict its pods. Synchronized the time zone used by the add-on with that of the node.
Large-scale networking: Cloud Native Network 2.0 supports a maximum of 2,000 ECS nodes and 100,000 pods.
When you perform operations on the underlying resources of an ECS (for example, changing its specifications), the configured NAT gateway rules become invalid. Delete the rules and reconfigure them.
Table 3 O&M reliability
Category: Project
Check Item: The quotas of ECS, VPC, subnet, EIP, and EVS resources must meet customer requirements.
Type: Deployment
Impact: If the quota is insufficient, resources will fail to be created.
For example, you can add an ECS as a backend server of a load balancer.
Parameter: Node Type
Example: Elastic Cloud Server (VM)
Description: Select a node type based on service requirements. The available node flavors will then be displayed in the Specifications area for you to choose from.
IAM authorization manages access to cloud services, including CCE clusters and associated resources like VPC, ELB, and ECS resources. RBAC-based namespace authorization manages access to cluster resources, such as creating workloads in a cluster.
Install kubectl on an existing ECS and access the cluster using kubectl. For details, see Accessing a Cluster Using kubectl. Deploy the vLLM service using gpu-deployment.yaml.
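The gpu-deployment.yaml file itself is not shown in this excerpt; a sketch of what such a Deployment might contain follows. The image, Deployment name, and port are assumptions; the nvidia.com/gpu resource request assumes the cluster's GPU add-on exposes GPUs under that standard resource name:

```yaml
# Hypothetical Deployment sketch for a vLLM service on a GPU node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vllm                             # assumed Deployment name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vllm
  template:
    metadata:
      labels:
        app: vllm
    spec:
      containers:
        - name: vllm
          image: vllm/vllm-openai:latest # assumed container image
          resources:
            limits:
              nvidia.com/gpu: 1          # schedule onto a node with one free GPU
          ports:
            - containerPort: 8000        # assumed API port of the vLLM server
```

The GPU resource limit is what binds the pod to the GPU-enabled ECS node; without it, the scheduler may place the pod on a node with no GPU.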
Log in to the ECS where kubectl has been installed. Create a description file named wordpress-deployment.yaml (the file name is only an example).
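A minimal sketch of what wordpress-deployment.yaml might contain is shown below. The image tag, labels, and the database host environment variable are illustrative assumptions, since the excerpt does not include the file's contents:

```yaml
# Hypothetical wordpress-deployment.yaml (all values are illustrative).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:latest      # assumed image tag
          ports:
            - containerPort: 80        # WordPress serves HTTP on port 80
          env:
            - name: WORDPRESS_DB_HOST  # assumed name of a MySQL Service in the cluster
              value: mysql
```

After saving the file, the workload is created with kubectl apply -f wordpress-deployment.yaml.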