Querying an APIGroup (/apis/networking.cci.io)
Function: This API is used to query an APIGroup (/apis/networking.cci.io).
Calling Method: For details, see Calling APIs.
Querying an APIGroup (/apis/rbac.authorization.k8s.io)
Function: This API is used to query an APIGroup (/apis/rbac.authorization.k8s.io).
Calling Method: For details, see Calling APIs.
Querying an APIGroup (/apis/batch)
Function: This API is used to query an APIGroup (/apis/batch).
Calling Method: For details, see Calling APIs.
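Each of the APIGroup queries above is a plain HTTP GET against the cluster endpoint. As a minimal sketch (the endpoint host and token below are placeholders, not values from this document), the request for /apis/batch can be assembled with Java's standard java.net.http client:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ApiGroupRequest {
    // Hypothetical endpoint; replace with your cluster's API server address.
    static final String ENDPOINT = "https://example-apiserver.invalid";

    // Build (but do not send) a GET request for an APIGroup path
    // such as /apis/batch or /apis/networking.cci.io.
    static HttpRequest buildGroupRequest(String groupPath, String token) {
        return HttpRequest.newBuilder()
                .uri(URI.create(ENDPOINT + groupPath))
                .header("Authorization", "Bearer " + token)
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildGroupRequest("/apis/batch", "dummy-token");
        System.out.println(req.method() + " " + req.uri());
    }
}
```

Sending the request (HttpClient.send) and authentication details are omitted; see Calling APIs for the documented procedure.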
It allows you to switch to an SAP HANA node from the NAT server using Secure Shell (SSH).
SFS: Scalable File Service (SFS) provides the file sharing service. Create a file system to provide the backup volumes and the shared path for the SAP HANA nodes.
300 × 1,024
profile-controller: (C + 1,000)/6,000 × 1,000; (C + 400)/1,200 × 1,000; (C + 1,000)/6,000 × 1,024; (C + 400)/1,200 × 1,024
proxy: (P + 2,000)/12,000 × 1,000; (P + 800)/2,400 × 1,000; (P + 2,000)/12,000 × 1,024; (P + 800)/2,400 × 1,024
resource-syncer/bursting-resource-syncer: (
Model Name | Minimum Flavor | GPU
DeepSeek-R1 / DeepSeek-V3 | p2s.16xlarge.8 | V100 (32 GiB) × 8 GPUs × 8 nodes
DeepSeek-R1 / DeepSeek-V3 | p2v.16xlarge.8 | V100 (16 GiB) × 8 GPUs × 16 nodes
DeepSeek-R1 / DeepSeek-V3 | pi2.4xlarge.4 | T4 (16 GiB) × 8 GPUs × 16 nodes
Manually Deploying a DeepSeek-R1 or DeepSeek-V3 Model Using SGLang and Docker on Multi-GPU
CREATE TABLE myschema.mytable (firstcol int);
Insert data into the table.
INSERT INTO myschema.mytable VALUES (100);
View data in the table.
SELECT * FROM myschema.mytable;
 firstcol
----------
      100
(1 row)
Update data in the table.
Figure 1 Initializing a custom model
In the Initialize Custom Model dialog box, set the following parameters:
VPC: Select vpc-fg (192.168.x.x/16).
Subnet: Select subnet-fg (192.168.x.x/24).
File System Type: Select SFS Turbo.
File System: Select sfs-turbo-fg.
The server looks at the X-Forwarded-For header, then the X-Real-Ip header, and then request.RemoteAddr (in that order) to determine the client IP.
versions | Array of strings | The API versions that are available.
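The header fallback described above can be sketched as follows. This is a minimal illustration over a plain Map of headers; the helper name clientIp is hypothetical, not part of any Huawei Cloud SDK:

```java
import java.util.Map;

public class ClientIp {
    // Resolve the client IP: X-Forwarded-For first, then X-Real-Ip,
    // then fall back to the connection's remote address.
    static String clientIp(Map<String, String> headers, String remoteAddr) {
        String xff = headers.get("X-Forwarded-For");
        if (xff != null && !xff.isEmpty()) {
            // X-Forwarded-For may list several hops; the first entry
            // is the original client.
            return xff.split(",")[0].trim();
        }
        String realIp = headers.get("X-Real-Ip");
        if (realIp != null && !realIp.isEmpty()) {
            return realIp;
        }
        return remoteAddr;
    }

    public static void main(String[] args) {
        // Proxy chain present: the first X-Forwarded-For entry wins.
        System.out.println(clientIp(Map.of("X-Forwarded-For", "203.0.113.7, 10.0.0.2"), "10.0.0.1"));
        // No forwarding headers at all: fall back to the remote address.
        System.out.println(clientIp(Map.of(), "10.0.0.1"));
    }
}
```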
Public services, such as Elastic Cloud Server (ECS), Elastic Volume Service (EVS), Object Storage Service (OBS), Virtual Private Cloud (VPC), Elastic IP (EIP), and Image Management Service (IMS), are shared within the same region.
Cluster Creation
Table 5 Different cluster creation modes
Cloud Container Engine (CCE): Configure basic information (name, region, networking, and compute) > Create a worker node > Configure the cluster > Create a workload.
Cloud Container Instance (CCI):
Changing Node Specifications (Deprecated)
Function: This API is used to modify the node specifications of a cluster. It can change only the specifications of ess nodes (data nodes).
Model Name | Minimum Flavor | GPU | Nodes
DeepSeek-R1 / DeepSeek-V3 | p2s.16xlarge.8 | V100 (32 GiB) × 8 | 8
DeepSeek-R1 / DeepSeek-V3 | p2v.16xlarge.8 | V100 (16 GiB) × 8 | 16
DeepSeek-R1 / DeepSeek-V3 | pi2.4xlarge.4 | T4 (16 GiB) × 8 | 16
Contact Huawei Cloud technical support to select GPU ECSs suitable for your deployment.
Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)).
podAntiAffinity | io.k8s.api.core.v1.PodAntiAffinity object | Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)).
For details about how to obtain the value, see How to Obtain Parameters in the API URI.
nodepool_id | Yes | String | Node pool ID.
Request Parameters
Table 2 Request header parameters
Parameter | Mandatory | Type | Description
Content-Type | Yes | String | Message body type (format).
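Since Content-Type is a mandatory request header here, a caller must set it explicitly when building the request body. As a hedged sketch (the endpoint, path segments, and node pool ID below are placeholders for illustration, not the documented URI), with Java's standard HTTP client:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class NodePoolRequest {
    // Build a request carrying a JSON body. All URI segments here are
    // hypothetical placeholders; consult the API URI section for the
    // real path and required parameters.
    static HttpRequest build(String endpoint, String nodepoolId, String json) {
        return HttpRequest.newBuilder()
                .uri(URI.create(endpoint + "/nodepools/" + nodepoolId))
                // Mandatory header: message body type (format).
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(json))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = build("https://example.invalid/v3", "np-123", "{}");
        System.out.println(req.method());
        System.out.println(req.headers().firstValue("Content-Type").orElse(""));
    }
}
```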
            System.out.println(e.getHttpStatusCode());
            System.out.println(e.getRequestId());
            System.out.println(e.getErrorCode());
            System.out.println(e.getErrorMsg());
        }
    }
}
This example changes the name of the source server with ID dcdbe339-b02d
Status Codes
Status Code | Description
200 | The information about the source server with a specified ID was deleted.
403 | Authentication failed.
Error Codes
For details, see Error Codes.
Parent Topic: Source Server Management
Status Codes
Status Code | Description
200 | Batch deleting source server records succeeded.
403 | Authentication failed.
Error Codes
For details, see Error Codes.
Parent Topic: Source Server Management
Support for Third-Party JAR Packages on x86 and TaiShan Platforms
Question: How do I enable Spark2x to support third-party JAR packages (for example, custom UDF packages) that have two versions, one for x86 and one for TaiShan?
Answer: Use the hybrid solution.
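The excerpt stops before the concrete steps of the hybrid solution. Purely as an illustration of the underlying idea (keeping one JAR build per architecture and selecting the right one at runtime), the dispatch could be sketched as follows; the directory names and the pickJarDir helper are hypothetical, not taken from the MRS documentation:

```java
public class JarDirSelector {
    // Map the JVM's reported architecture to a hypothetical JAR directory.
    // TaiShan (Kunpeng/ARM) servers report "aarch64"; x86_64 reports "amd64".
    static String pickJarDir(String osArch) {
        if (osArch.equals("aarch64")) {
            return "/opt/udf/taishan";   // hypothetical TaiShan JAR path
        }
        return "/opt/udf/x86";           // hypothetical x86 JAR path
    }

    public static void main(String[] args) {
        // On the current host, choose the directory matching os.arch.
        System.out.println(pickJarDir(System.getProperty("os.arch")));
    }
}
```

For the documented procedure, see the full FAQ entry in the Spark2x documentation.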