Access Mode
- Accessing a Real-Time Service (Public Network Channel)
- Accessing a Real-Time Service (VPC High-Speed Channel)
Subscription & Use
- Searching for and Adding an Asset to Favorites
- Subscribing to Free Algorithms
- Subscribing to a Workflow
Publish & Share
- Publishing a Free Algorithm
- Publishing a Free Model
Mandatory when an ECS is used as the NFS server.
As shown in the following figure, select an Ascend AI accelerator card based on the model when registering an edge node. For a node with an Ascend AI accelerator card enabled, you can view the accelerator card information and the healthy chip list on the node details page.
High-speed access through VPC peering is available only for services deployed using AI applications imported from custom images.
Implementation Procedure
- Creating Resources
- Basic Configurations
- Training
- Routine O&M
What are the service advantages of AI platform consulting and planning services? Rich industry experience: the team has extensive industry delivery experience and the capability to model complex service scenarios.
You can subscribe to workflows in AI Gallery.
Accessing a Real-Time Service (Public Network Channel)
Context
By default, ModelArts inference uses the public network to access real-time services. After a real-time service is deployed, a standard RESTful API is provided for you to call. You can view the API URL on the Usage Guides tab.
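A minimal sketch of calling such a RESTful API, assuming token-based authentication with the IAM token passed in the X-Auth-Token header. The URL and token below are hypothetical placeholders; substitute the API URL shown on your service's Usage Guides tab and a valid token.

```python
import json
import urllib.request

# Hypothetical values for illustration; replace with your service's
# real API URL and a valid IAM token.
API_URL = "https://modelarts-infer.example.com/v1/infers/your-service-id"
IAM_TOKEN = "your-iam-token"

def build_inference_request(url, token, payload):
    """Build a POST request for a ModelArts real-time service.

    Token-based authentication passes the IAM token in the
    X-Auth-Token header; the request body is JSON.
    """
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "X-Auth-Token": token,
        },
        method="POST",
    )

if __name__ == "__main__":
    req = build_inference_request(API_URL, IAM_TOKEN, {"data": "sample input"})
    # Sending the request requires a reachable, deployed service:
    # with urllib.request.urlopen(req) as resp:
    #     print(resp.read().decode())
```

The request body schema depends on the model's input definition, so treat `{"data": "sample input"}` as a stand-in for your model's actual input format.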
Uploading Files from a Local Path to JupyterLab
JupyterLab provides multiple methods for uploading files.
Methods for Uploading a File
For a file not exceeding 100 MB, upload it directly to the target notebook instance. Detailed information, such as the file size and upload progress, is displayed.
Using ModelArts Studio (MaaS) DeepSeek API to Build AI Applications You can use MaaS DeepSeek APIs together with Cherry Studio and Cursor to build AI applications. Cherry Studio: Use Cherry Studio to call a DeepSeek model deployed on MaaS to build a personal AI assistant.
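A minimal sketch of calling a DeepSeek model deployed on MaaS from your own code, assuming the endpoint follows the OpenAI-compatible chat-completions schema (Bearer authentication, a `messages` list). The base URL, API key, and model name below are hypothetical; take the real values from your MaaS deployment details.

```python
import json
import urllib.request

# Hypothetical values for illustration; replace with your MaaS
# deployment's base URL, API key, and model name.
BASE_URL = "https://maas.example.com/v1"
API_KEY = "your-maas-api-key"
MODEL = "DeepSeek-R1"  # assumed model identifier

def build_chat_request(base_url, api_key, model, user_message):
    """Build an OpenAI-compatible chat-completions request.

    Assumes Bearer auth and the standard messages payload; this is
    the same request shape clients like Cherry Studio send.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

if __name__ == "__main__":
    req = build_chat_request(BASE_URL, API_KEY, MODEL, "Hello!")
    # Sending the request requires a live MaaS deployment:
    # with urllib.request.urlopen(req) as resp:
    #     print(resp.read().decode())
```

Tools such as Cherry Studio and Cursor only need the same three values (base URL, API key, model name) configured in their model-provider settings.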
Managing a Workflow
- Searching for a Workflow
- Viewing the Running Records of a Workflow
- Managing a Workflow
- Retrying, Stopping, or Running a Workflow Phase
When the storage read/write bandwidth no longer meets AI training needs, for example, when checkpoint saves and loads take longer or dataset loading slows down training, you can expand the file system performance to reduce data loading time.
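Before expanding file system performance, it can help to confirm that checkpoint writes really are storage-bound. A minimal sketch of measuring sequential write bandwidth to a file system mount; the probe path and sizes are illustrative, not tied to any specific SFS Turbo configuration.

```python
import os
import tempfile
import time

def measure_write_bandwidth(path, size_mb=64):
    """Write size_mb of zero-filled data to path and return MB/s.

    A result well below the file system's rated bandwidth suggests
    checkpoint saves are storage-bound rather than compute-bound.
    """
    chunk = b"\0" * (1024 * 1024)  # 1 MB buffer
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # include flush-to-storage time
    return size_mb / (time.perf_counter() - start)

if __name__ == "__main__":
    # In practice, point this at a file on the mounted file system,
    # e.g. under /mnt/sfs-turbo/ (path is an assumption).
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        probe = tmp.name
    print(f"{measure_write_bandwidth(probe, size_mb=32):.1f} MB/s")
    os.unlink(probe)
```

Comparing the measured figure against the bandwidth your training job needs (checkpoint size divided by acceptable save time) indicates whether a performance expansion is warranted.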
What are the final deliverables for developing and implementing services using the AI platform?
For details, see the deliverables listed in the delivery guide.