Cloud storage performance evaluation and optimization for large AI model training: CSOS offers faster and more cost-effective solutions to address cloud storage bottlenecks during large AI model training.
Item                             Billing Mode   Unit         Value
AI video transmission bandwidth  Yearly         Mbps/Year    171
AI video transmission bandwidth  Monthly        Mbps/Month   17.1
AI video transmission bandwidth  Pay-per-use    Mbps/Month   17.1
AI video standard storage        Yearly         TB/Year      368.64
AI video standard storage        Monthly        TB/Month     36.86
AI video standard storage        Pay-per-use    GB/Month     —
Lite Cluster & Server Introduction ModelArts Lite is a cloud-native AI compute cluster that combines hardware and software optimizations. It provides an open, compatible, cost-effective, stable, and scalable platform for high-performance AI computing and other scenarios.
Faulty Model If the image used for creating a model is faulty, recreate the image by following the instructions provided in Creating a Custom Image and Using It to Create an AI Application.
Scenario The data volume and compute used for training vary among AI models. Select a suitable storage and training solution to improve training efficiency and resource cost-effectiveness.
Adaptive Parallelism Scenario In AI data engineering scenarios involving massive data processing, actors must be called in parallel to improve data processing efficiency and perform distributed computing.
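The actor-based parallel pattern described above is typically provided by a distributed framework (such as Ray). As an illustrative sketch only, the idea can be modeled in plain Python with stateful workers that each drain a mailbox; the `Actor` and `parallel_process` names and the thread-based execution are assumptions for demonstration, not the service's actual API.

```python
import queue
import threading

class Actor:
    """A minimal stateful actor: one worker thread draining a mailbox.

    Illustrative only; real distributed frameworks provide remote,
    fault-tolerant actors rather than local threads.
    """
    def __init__(self, transform):
        self.transform = transform
        self.mailbox = queue.Queue()
        self.results = []
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        while True:
            item = self.mailbox.get()
            if item is None:          # poison pill -> shut down
                break
            self.results.append(self.transform(item))

    def send(self, item):
        self.mailbox.put(item)

    def join(self):
        self.mailbox.put(None)
        self.thread.join()

def parallel_process(records, transform, num_actors=4):
    """Shard records across actors round-robin, then gather all results."""
    actors = [Actor(transform) for _ in range(num_actors)]
    for i, rec in enumerate(records):
        actors[i % num_actors].send(rec)
    for a in actors:
        a.join()
    return [r for a in actors for r in a.results]

# Example: square 0..9 across 4 actors.
out = parallel_process(range(10), lambda x: x * x)
print(sorted(out))  # -> [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Each actor processes its shard sequentially while the shards run concurrently, which is the essence of calling actors in parallel for data processing.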
The cost of AI development in ModelArts mainly includes the storage fee and resource fee. If ModelArts is no longer used, stop or delete the services running in ModelArts and delete the data stored in OBS and EVS. Clearing Storage Data ModelArts data is stored in OBS.
Conversational Bot Service (CBS) is an AI cloud service that powers intelligent enterprise applications such as Question Answering Bot (QABot). If you plan to access CBS through an API, ensure that you are familiar with CBS concepts. For details, see Service Overview.
E2E O&M Solution of ModelArts Inference Services The end-to-end O&M of ModelArts inference services involves the entire AI process including algorithm development, service O&M, and service running.
In the distributed scenario, this system catalog is provided, but the AI capabilities are unavailable. Parent topic: System Catalogs
Overview Video Intelligent Analysis Service (VIAS) is an integrated platform that provides multiple capabilities such as AI analysis, event reporting and warning, and edge resource pool management.
The CCE AI Suite (Ascend NPU) add-on of v2.1.23 or later has been installed in the cluster. For details, see CCE AI Suite (Ascend NPU). Notes and Constraints In a single pod, only one container can request NPU resources, and init containers cannot request NPU resources.
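To illustrate the constraint that only one container per pod may request NPU resources, here is a hedged pod spec sketch. The image names are placeholders, and the extended resource name (shown as `huawei.com/ascend-1980`) varies with the add-on version and chip model, so verify the resource names your cluster actually exposes.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: npu-worker
spec:
  containers:
  - name: trainer                    # only this container requests NPUs
    image: training-image:latest     # placeholder image
    resources:
      requests:
        huawei.com/ascend-1980: 1    # assumed resource name; check your cluster
      limits:
        huawei.com/ascend-1980: 1
  - name: sidecar                    # a second container must NOT request NPUs
    image: logging-sidecar:latest    # placeholder image
```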
This parameter determines data used for training.

Parameter      Type     Description
ip             name     IP address of the host where the AI engine is deployed
port           integer  Listening port number of the AI engine
max_epoch      integer  Maximum number of iterations in an epoch
learning_rate  real     Learning rate of model training
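Expressed as a hypothetical configuration snippet, the parameters above might be supplied as follows; the dictionary form and all values are illustrative assumptions, not a documented API.

```python
# Hypothetical training configuration mirroring the parameter table above.
train_config = {
    "ip": "192.0.2.10",      # host where the AI engine is deployed (example address)
    "port": 8080,            # listening port number of the AI engine
    "max_epoch": 100,        # maximum number of iterations in an epoch
    "learning_rate": 0.001,  # learning rate of model training
}
print(train_config["max_epoch"])  # -> 100
```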
The AI engine used for training is PyTorch, and the resources are CPUs or GPUs. This section applies only to training jobs of the new version. Scenarios In this example, write a Dockerfile to create a custom image on a Linux x86_64 server running Ubuntu 18.04.
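A minimal sketch of such a Dockerfile, assuming a plain `ubuntu:18.04` base and a hypothetical `train.py` entry script; package versions and the PyTorch build (CPU or GPU) are assumptions and must be pinned to match your target environment.

```dockerfile
# Minimal sketch of a PyTorch training image; versions are illustrative.
FROM ubuntu:18.04

# Install Python and pip from the Ubuntu 18.04 repositories.
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

# Install PyTorch; pin versions to match your CPU/GPU environment.
RUN pip3 install --no-cache-dir torch torchvision

# Copy the training code (hypothetical script) into the image.
COPY train.py /home/work/train.py
WORKDIR /home/work
CMD ["python3", "train.py"]
```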
AI Engine The AI engine used by the meta model, which is set automatically based on the model storage path you select. Container API This parameter is displayed when AI Engine is set to Custom. Set the protocol and port number of the inference API defined by the model.
For details, see "Default values and value ranges of hyperparameters" in section "DB4AI: Database-driven AI > Native DB4AI Engine" in Feature Guide.
Operation Process in JupyterLab ModelArts allows you to access notebook instances online using JupyterLab and develop AI models based on the PyTorch, TensorFlow, or MindSpore engines. The following figure shows the operation process.