As shown in the following figure, select an Ascend AI accelerator card based on the chip model when registering an edge node. For a node with the Ascend AI accelerator card enabled, you can view the accelerator card information and the list of healthy chips on the node details page.
What Do I Do If a Containerized Application Cannot Access External IP Addresses?
What Do I Do If the Ascend AI Accelerator Card (NPU) Is Abnormal?
NPU (optional): Ascend AI processors. NOTE: Currently, edge nodes integrated with Ascend processors are supported, such as Atlas 300 inference cards and Atlas 800 inference servers.
AI Accelerator Card: To register an edge node of the AI accelerator card type, ensure that the edge node supports NPUs and has the NPU driver installed.
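The driver check above can be sketched as a quick pre-registration test run on the node itself. `npu-smi` is the diagnostic tool shipped with the Ascend NPU driver; the exact remediation message below is illustrative, not IEF's own output:

```shell
#!/bin/sh
# Sketch: verify the Ascend NPU driver is present before registering the
# node as an AI accelerator card type. npu-smi ships with the driver, so
# its absence is a strong sign the driver is not installed.
if command -v npu-smi >/dev/null 2>&1; then
    # Prints per-chip health, temperature, and utilization.
    npu-smi info
else
    echo "npu-smi not found: install the Ascend NPU driver before registering the node"
fi
```

Running this before registration avoids a node that registers but then fails the NPU health check on the node details page.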
AI Accelerator Card: Ascend AI accelerator card, supported on edge nodes with Ascend processors. To use an Ascend 310 or 310B chip, first select the AI accelerator card and then select the NPU type. Table 1 lists the NPU types supported by Ascend AI accelerator cards.
AI Accelerator Card: Ascend AI accelerator card, supported on edge nodes with Ascend processors; the supported NPU specifications are Ascend 310P and Ascend 310P virtualization partition.
AI Accelerator Card: Select Not installed. Retain the default values for the other parameters. Select I have read and agree to the Huawei Cloud Service Level Agreement, and click Register in the lower right corner.
IEF uses Huawei general-purpose servers and AI hardware and is deeply integrated with Huawei Ascend chips to provide high-performance, low-cost edge AI inference computing power. IEF also supports TaiShan servers that use Huawei Kunpeng processors.
System Architecture As shown in Figure 1, IEF extends cloud capabilities such as AI applications to edge nodes, which are close to end devices. In this way, the edge nodes have the same capabilities as the cloud and can process device computing requirements in real time.
Container Specifications: Specify the CPU, memory, Ascend AI accelerator card, and GPU quotas. Ascend AI accelerator card: The AI accelerator card configuration of the containerized application must match that of the edge node where it is actually deployed.
Container Specifications: Specify the CPU, memory, and AI accelerator card quotas (Figure 2 Container configuration). Click Next. Select the edge node where the application is to be deployed, and leave the other parameters unspecified (Figure 3 Deployment configuration). Click Next.
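As a sketch of what the container specification above amounts to, IEF deployments follow a Kubernetes-style pod spec in which the accelerator quota appears as an extended resource request. The resource key `huawei.com/Ascend310`, the application name, and the image below follow the open-source Ascend device plugin convention and are assumptions for illustration, not IEF's exact schema:

```yaml
# Illustrative sketch only: a Kubernetes-style deployment requesting one
# Ascend 310 NPU. All concrete names and values are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-inference-app          # hypothetical application name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-inference-app
  template:
    metadata:
      labels:
        app: edge-inference-app
    spec:
      containers:
        - name: inference
          image: swr.example.com/inference:latest   # hypothetical image
          resources:
            requests:
              cpu: "0.25"
              memory: 256Mi
              huawei.com/Ascend310: 1   # must match the node's accelerator card type
            limits:
              cpu: "1"
              memory: 512Mi
              huawei.com/Ascend310: 1
```

The point mirrored from the text: the accelerator request in the container spec must match the accelerator card type configured on the edge node, or the application cannot be scheduled onto it.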