Figure 7 Connecting to a notebook instance
How Can I Drain a GPU Node After Upgrading or Rolling Back the CCE AI Suite (NVIDIA GPU) Add-on?
MaaS integrates DeepSeek models and supports AI development on multiple platforms. For details, see Using ModelArts Studio (MaaS) DeepSeek API to Build AI Applications.
When the storage read/write bandwidth no longer meets AI training needs, for example, when saving and loading checkpoints takes longer or loading datasets slows down training, you can expand the file system's performance to reduce data loading time.
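Before expanding file system performance, it can help to confirm that storage throughput is actually the bottleneck. The following is an illustrative sketch, not part of the SFS Turbo documentation: a hypothetical helper that measures sequential write bandwidth to a probe file (the probe path is an assumption; point it at your SFS Turbo mount to test that file system).

```python
import os
import time

def measure_write_bandwidth(path, size_mb=256, chunk_mb=8):
    """Write size_mb MiB to path and return throughput in MiB/s."""
    chunk = b"\0" * (chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # include the flush to storage in the timing
    elapsed = time.perf_counter() - start
    os.remove(path)           # clean up the probe file
    return size_mb / elapsed

# Hypothetical usage against an SFS Turbo mount point:
# print(measure_write_bandwidth("/mnt/sfs_turbo/bw_probe.bin"))
```

Comparing the measured figure against the file system's provisioned bandwidth indicates whether an expansion would help.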
Training
Uploading Data to OBS and Preloading the Data to SFS Turbo
Creating a Training Job
Creating Resources
This best practice uses a VPC, an SFS Turbo HPC file system, an OBS bucket, and a ModelArts resource pool. To achieve optimal acceleration performance, you are advised to select the same region and AZ for the SFS Turbo HPC file system and the ModelArts resource pool.
Basic Configurations
Configuring Network Passthrough Between ModelArts and SFS Turbo
Configuring SFS Turbo and OBS Interworking
Configuring Auto Data Export from SFS Turbo to OBS
Configuring the SFS Turbo Data Eviction Policy
What Can I Do If Certain Alarms Are Displayed in the GPU Node Events After the CCE AI Suite (NVIDIA GPU) Add-on Is Upgraded?
Type: 310 series card, driver version < 23.0.rc0
CCE AI Suite (Ascend NPU) add-on versions: 1.x.x; 2.0.0 to 2.1.6; 2.1.7 to the latest version
Requirement: You must manually mount the drivers and npu-smi to a service pod.
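A minimal sketch of what manually mounting host driver files and npu-smi into a service pod can look like, assuming typical Ascend host paths; the pod name, image, and host paths below are assumptions for illustration, not taken from the add-on documentation, so verify the actual locations on your nodes first.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: npu-service              # hypothetical pod name
spec:
  containers:
  - name: app
    image: my-ascend-app:latest  # hypothetical service image
    volumeMounts:
    - name: ascend-driver        # host driver libraries
      mountPath: /usr/local/Ascend/driver
      readOnly: true
    - name: npu-smi              # host npu-smi tool
      mountPath: /usr/local/sbin/npu-smi
      readOnly: true
  volumes:
  - name: ascend-driver
    hostPath:
      path: /usr/local/Ascend/driver
  - name: npu-smi
    hostPath:
      path: /usr/local/sbin/npu-smi
```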
Figure 7 Refreshing the bound domain name
Workflow Development
Configuring Workflow Parameters
Configuring the Input and Output Paths of a Workflow
Creating Workflow Phases
Creating a Multi-Branch Workflow
Creating a Workflow
Publishing a Workflow
Advanced Workflow Capabilities
Installing ma-cli Locally
Autocompletion for ma-cli Commands
ma-cli Authentication
ma-cli image Commands for Building Images
ma-cli ma-job Commands for Training Jobs
ma-cli dli-job Commands for Submitting DLI Spark Jobs
Using ma-cli to Copy OBS Data
Can I apply for a refund for the AI platform consulting and planning service during delivery?
No. Refunds are not supported during delivery.
Viewing the Word Frequency
The word frequency display function collects statistics on hot words generated during calls after AI inspection is complete.
Procedure
Sign in to the AICC as a tenant administrator and choose Speech Text Analysis > Word Frequency Display.
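To illustrate the kind of statistic the word frequency display reports, here is a minimal, hypothetical sketch of counting hot words across call transcripts; the AICC computes this for you, and the function name and sample data below are illustrative only.

```python
import re
from collections import Counter

def word_frequency(transcripts, top_n=10):
    """Count word occurrences across a list of call transcripts."""
    counter = Counter()
    for text in transcripts:
        # Split on non-word characters and lowercase before counting.
        counter.update(w.lower() for w in re.findall(r"\w+", text))
    return counter.most_common(top_n)

# Illustrative sample transcripts (not real call data):
calls = ["refund please", "I want a refund", "billing question"]
print(word_frequency(calls, top_n=3))
```

Hot words such as repeated complaint terms surface at the top of the list, which is what the Word Frequency Display page visualizes.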
Creating a Model Registration Phase
Description
This phase integrates the capabilities of ModelArts AI application management, enabling trained models to be registered in AI Application Management for service deployment and update.
Figure 5 Notebook Job Definitions tab
Figure 6 Configuring a scheduled job
Searching for a Workflow
Procedure
On the workflow list page, you can use the search box to quickly search for workflows based on workflow properties.
Log in to the ModelArts console.
In the navigation pane, choose Development Workspace > Workflow.
In the search box above the workflow list, enter the workflow property to search by.